Autoencoding a Soft Touch to Learn Grasping from On-land to Underwater


Ning Guo, Xudong Han, Xiaobo Liu, Shuqiao Zhong, Zhiyuan Zhou, Jian Lin, Jiansheng Dai, Fang Wan, Chaoyang Song: Autoencoding a Soft Touch to Learn Grasping from On-land to Underwater. In: Advanced Intelligent Systems, vol. 6, no. 1, pp. 2300382, 2024.


Abstract

Robots play a critical role as the physical agents of human operators in exploring the ocean. However, it remains challenging to grasp objects reliably while fully submerged in a highly pressurized aquatic environment with little visible light, mainly due to fluidic interference with the tactile mechanics between the finger and object surfaces. This study investigates the transferability of grasping knowledge from on-land to underwater via a vision-based soft robotic finger that learns 6D forces and torques (FT) using a supervised variational autoencoder (SVAE). A high-framerate camera captures the whole-body deformations while the soft robotic finger interacts with physical objects on-land and underwater. Results show that the trained SVAE model learns a series of latent representations of the soft mechanics that transfer from land to water, adapting to the changing environments better than commercial FT sensors. Soft, delicate, and reactive grasping enabled by tactile intelligence enhances the gripper's underwater interaction with improved reliability and robustness at a much-reduced cost, paving the way for learning-based intelligent grasping to support fundamental scientific discoveries in environmental and ocean research.
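
To make the sensing pipeline described above more concrete, below is a minimal PyTorch sketch of a supervised variational autoencoder that encodes a camera image of the finger's whole-body deformation into a latent code, reconstructs the image, and regresses the 6D force/torque from the same code. The layer sizes, image resolution, loss weights, and all names here are illustrative assumptions for exposition, not the architecture or hyperparameters reported in the paper.

# Minimal SVAE sketch (assumed 64x64 grayscale deformation images; all sizes illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        # Encoder: deformation image -> parameters of a latent Gaussian.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # Decoder: latent code -> reconstructed deformation image.
        self.decoder_fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Supervised head: latent code -> 6D force/torque (Fx, Fy, Fz, Tx, Ty, Tz).
        self.ft_head = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 6),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        recon = self.decoder(self.decoder_fc(z).view(-1, 128, 8, 8))
        ft = self.ft_head(z)
        return recon, ft, mu, logvar


def svae_loss(recon, x, ft_pred, ft_true, mu, logvar, beta=1.0, alpha=10.0):
    # Image reconstruction + KL divergence + supervised 6D FT regression.
    recon_loss = F.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    ft_loss = F.mse_loss(ft_pred, ft_true, reduction="mean")
    return recon_loss + beta * kl + alpha * ft_loss

Training such a model on image/FT pairs collected on land and then reusing the learned latent representation underwater mirrors the transfer setting the abstract describes; the relative weights of the KL and supervised FT terms (beta and alpha above) are placeholders that would need tuning on real data.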

BibTeX

@article{Guo2024AutoencodingA,
title = {Autoencoding a Soft Touch to Learn Grasping from On-land to Underwater},
author = {Ning Guo and Xudong Han and Xiaobo Liu and Shuqiao Zhong and Zhiyuan Zhou and Jian Lin and Jiansheng Dai and Fang Wan and Chaoyang Song},
doi = {10.1002/aisy.202300382},
year  = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
journal = {Advanced Intelligent Systems},
volume = {6},
number = {1},
issue = {January},
pages = {2300382},
abstract = {Robots play a critical role as the physical agents of human operators in exploring the ocean. However, it remains challenging to grasp objects reliably while fully submerged in a highly pressurized aquatic environment with little visible light, mainly due to fluidic interference with the tactile mechanics between the finger and object surfaces. This study investigates the transferability of grasping knowledge from on-land to underwater via a vision-based soft robotic finger that learns 6D forces and torques (FT) using a supervised variational autoencoder (SVAE). A high-framerate camera captures the whole-body deformations while the soft robotic finger interacts with physical objects on-land and underwater. Results show that the trained SVAE model learns a series of latent representations of the soft mechanics that transfer from land to water, adapting to the changing environments better than commercial FT sensors. Soft, delicate, and reactive grasping enabled by tactile intelligence enhances the gripper's underwater interaction with improved reliability and robustness at a much-reduced cost, paving the way for learning-based intelligent grasping to support fundamental scientific discoveries in environmental and ocean research.},
keywords = {Adv. Intell. Syst. (AIS), Corresponding Author, Front Cover, JCR Q1},
pubstate = {published},
tppubtype = {article}
}