Reconstructing Soft Robotic Touch via In-Finger Vision


Ning Guo, Xudong Han, Shuqiao Zhong, Zhiyuan Zhou, Jian Lin, Fang Wan, Chaoyang Song: Reconstructing Soft Robotic Touch via In-Finger Vision. In: Advanced Intelligent Systems, vol. 6, no. 10, pp. 2400022, 2024.

Abstract

Incorporating authentic tactile interactions into virtual environments presents a notable challenge for the emerging development of soft robotic metamaterials. In this study, a vision-based approach is introduced for learning proprioceptive interactions by simultaneously reconstructing the shape and touch of a soft robotic metamaterial (SRM) during physical engagement. The SRM design is optimized to the size of a finger for enhanced adaptability in 3D interactions and incorporates a see-through viewing field inside, which is visually captured by a miniature camera underneath to provide a rich set of image features for touch digitization. The proprioceptive process is modeled with aggregated multi-handles via constrained geometric optimization, enabling real-time, precise, and realistic estimation of the finger's mesh deformation within a virtual environment. A data-driven learning model is also proposed to estimate touch positions, achieving reliable results with R² scores of 0.9681, 0.9415, and 0.9541 along the x-, y-, and z-axes. The robust performance of the proposed methods is further demonstrated in touch-based human–cybernetic interfaces and human–robot collaborative grasping. This study opens the door to future applications in touch-based digital twin interactions through vision-based soft proprioception.
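
The handle-driven shape reconstruction described in the abstract can be pictured as a constrained geometric optimization: tracked handle points observed by the in-finger camera softly pin a few mesh vertices, and the remaining vertices are solved for by keeping edge vectors close to their rest state. The following is a minimal illustrative sketch in Python with NumPy, not the authors' implementation; the function name, weight, and toy square-patch mesh are all hypothetical.

import numpy as np

def deform_with_handles(rest_vertices, edges, handle_idx, handle_pos, w=100.0):
    """Solve a linear least-squares problem that keeps every mesh edge close
    to its rest-state vector while softly pinning tracked handle vertices
    to their observed positions (a hypothetical stand-in for the paper's
    constrained geometric optimization)."""
    n = rest_vertices.shape[0]
    rows, rhs = [], []
    for i, j in edges:                           # shape-preservation terms
        r = np.zeros(n)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(rest_vertices[i] - rest_vertices[j])
    for idx, p in zip(handle_idx, handle_pos):   # soft handle constraints
        r = np.zeros(n)
        r[idx] = w
        rows.append(r)
        rhs.append(w * np.asarray(p, dtype=float))
    A, b = np.asarray(rows), np.asarray(rhs)
    V, *_ = np.linalg.lstsq(A, b, rcond=None)    # (n, 3) deformed vertices
    return V

# Toy example: a square patch of four vertices; two corners act as handles,
# one of which is lifted 0.3 units out of plane.
V0 = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
V = deform_with_handles(V0, E, [0, 2], [[0, 0, 0], [1, 1, 0.3]])
print(V)

Because the objective and constraints are both linear, the solve reduces to a single least-squares system per frame, which is what makes real-time mesh estimation plausible in this style of pipeline.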

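The touch-position estimator, likewise, maps camera-derived image features to a 3D contact point and is scored with per-axis R². A hedged sketch of such a pipeline, assuming scikit-learn's MLPRegressor as a generic stand-in for the paper's learning model; the feature and label file names are hypothetical.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical arrays: X holds per-frame image features extracted from the
# in-finger camera, Y holds the corresponding (x, y, z) touch positions.
X = np.load("image_features.npy")     # shape (N, D), assumed file
Y = np.load("touch_positions.npy")    # shape (N, 3), assumed file

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=0)
model.fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)
for k, axis in enumerate("xyz"):
    print(f"R² along {axis}: {r2_score(Y_te[:, k], Y_hat[:, k]):.4f}")
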
BibTeX

@article{Guo2024ReconstructingSoft,
title = {Reconstructing Soft Robotic Touch via In-Finger Vision},
author = {Ning Guo and Xudong Han and Shuqiao Zhong and Zhiyuan Zhou and Jian Lin and Fang Wan and Chaoyang Song},
doi = {10.1002/aisy.202400022},
year = {2024},
date = {2024-10-01},
urldate = {2024-10-01},
journal = {Advanced Intelligent Systems},
volume = {6},
number = {10},
issue = {October},
pages = {2400022},
abstract = {Incorporating authentic tactile interactions into virtual environments presents a notable challenge for the emerging development of soft robotic metamaterials. In this study, a vision-based approach is introduced for learning proprioceptive interactions by simultaneously reconstructing the shape and touch of a soft robotic metamaterial (SRM) during physical engagement. The SRM design is optimized to the size of a finger for enhanced adaptability in 3D interactions and incorporates a see-through viewing field inside, which is visually captured by a miniature camera underneath to provide a rich set of image features for touch digitization. The proprioceptive process is modeled with aggregated multi-handles via constrained geometric optimization, enabling real-time, precise, and realistic estimation of the finger's mesh deformation within a virtual environment. A data-driven learning model is also proposed to estimate touch positions, achieving reliable results with R² scores of 0.9681, 0.9415, and 0.9541 along the x-, y-, and z-axes. The robust performance of the proposed methods is further demonstrated in touch-based human–cybernetic interfaces and human–robot collaborative grasping. This study opens the door to future applications in touch-based digital twin interactions through vision-based soft proprioception.},
keywords = {Adv. Intell. Syst. (AIS), Corresponding Author, Front Cover, JCR Q1},
pubstate = {published},
tppubtype = {article}
}