Summary of Research at BionicDL

(Updated on Feb 05, 2024, by Dr. Song Chaoyang)

My research activities at SUSTech can be summarized by One Theme, Two Methods, and Three Research Questions. Bionic Design and Learning[1] (BionicDL) is the theme of research at my lab, aiming to develop novel robots with cross-domain capabilities in limb mechanisms, operational scenarios, and interactive skills, preferably within a single design. I adopt an interdisciplinary approach that combines two methods: bio-inspired mechanism design using overconstrained geometry for modern robots, and robot learning methods for computational intelligence in interaction. The three Research Questions of particular interest to us are as follows:

  • RQ1: What is the relationship between the fingers’ object-manipulation skills and the legs’ terrain-locomotion skills, considering human evolutionary biology, and can we find, prove, and utilize the skills learned from either or both for the other in robot design?
  • RQ2: How can we design a contact-based interface for robotic limbs that is directly transferable between on-land and underwater scenarios, enabling rich data exchange between human intentions and an unstructured environment, with robots as the physically embodied agents?
  • RQ3: How can we design reconfigurable machine intelligence compatible with interactions with human and robot agents in on-land and underwater scenarios?

During my tenure-track appointment as an Assistant Professor at SUSTech, together with my team at the BionicDL lab, I have been at the forefront of pioneering research in the field of robotics, focusing on three primary areas that have significantly contributed to the advancement of robotic design and functionality.

  1. Overconstrained Design & Learning of Robotic Limbs:

I have led the way in leveraging overconstrained linkages for the innovative design of robotic limbs. My research has theoretically validated the application of all overconstrained 4R, 5R, and 6R linkages in crafting novel models of robotic limbs and successfully developed the inaugural prototype of a full-sized quadruped robot employing Bennett linkages as the core limb mechanisms. I introduced a groundbreaking computational design framework to optimize overconstrained linkages for robotic legs, ensuring omni-directional locomotion. Through theory and experiments, we demonstrated that Bennett limbs designed with overconstrained linkages surpass traditional planar limb mechanisms in energy efficiency and cost of transport. Furthermore, I have applied reinforcement learning to refine the locomotion strategies of overconstrained limbs, showcasing their evolutionary advantage in legged robotics. Additionally, my work proposed a novel co-training method in reinforcement learning, establishing the transferability of skills between multi-fingered manipulation and legged locomotion, thus laying the groundwork for a unifying theory in overconstrained robotics for advanced motion control.

  2. Omni-Directional Soft Adaptation & Proprioception:

I have invented Soft Polyhedral Networks capable of omni-directional adaptation and introduced a novel in-finger vision design for exceptional 6D force and torque sensing, seamlessly applicable on land and underwater. This technology facilitates the creation of soft, adaptive, and sensitive robotic fingertips at a reduced cost. My team and I have explored various learning-based methods, including rigid-soft interactive and proprioceptive learning, which have revealed unique insights into the static and dynamic viscoelastic behaviors crucial for soft and tactile robotic grasping. This research extends the scope of overconstrained robotics into the burgeoning field of soft robotics.

  3. Reconfigurable Design of the Underwater SuperLimb:

I proposed the pioneering concept of a Supernumerary Robotic Limb as a novel wearable device designed to assist divers, with proven efficacy through field tests. This device can be reconfigured for four distinct types of human-robot interactions within a single design framework. Additionally, I have developed an underwater intention recognition method that integrates throat vibration and head motion data, enhancing interaction with advanced robotic systems both underwater and on land and setting the stage for sophisticated amphibious robotic applications.

Reflecting on my research journey, I acknowledge the limitation in achieving a cohesive integration among these three pivotal areas of my work. Moving forward into my tenured appointment as an Associate Professor, I am committed to deepening the theoretical foundations and technological advancements within Overconstrained Robotics. I aim to forge a unified theory that bridges limbs, scenarios, and skills within robotic sciences and systems. An exciting future project, SOPHIE – the SUSTech Lobster Robot for Learning in Amphibious Environments, epitomizes this vision. It represents a harmonious blend of novel robotic system design theories, integrating rigid-soft interactions and advanced learning methods for sophisticated human-robot interactions, marking my long-term goal in this vibrant field of study. The following two pages summarize publications, patents, and funding at the BionicDL lab since I joined SUSTech in June 2018. Please refer to the attached resume, which includes further detailed information on all the items listed.

Together with my team at the BionicDL lab, we systematically organized our research around the research questions raised above through the following topics, each explained later in further detail.

  1. Overconstrained Design & Learning of Robotic Limbs
  2. Omni-Directional Soft Adaptation & Proprioception
  3. Reconfigurable Design of the Underwater SuperLimb

This summary ends with concluding remarks, followed by an outlook on future research at BionicDL towards the Bionic Design and Learning of SOPHIE, the SUSTech Lobster Robot for Learning in Amphibious Environments.

1.1.  Grants

Awarded a total of 15.414 million RMB and 102.3K AUD in funding so far.

Since joining SUSTech, awarded 15.114 million RMB in funding as PI, Co-PI, or Participant from various agencies.

  • (8.35M) List of Grants awarded as PI:
    • (1.87M) National Level: (280K) NSFC / (1.59M) NP
    • (3.90M) Provincial & Regional Level: (500K+400K+3M) Shenzhen
    • (2.10M) Industrial Level: (2M) ZEIZET / (50K) Ningquan / (50K) Tencent
    • (0.48M) University Level: (100K+80K+300K) SUSTech
  • (6.764M) List of Grants awarded as Co-PI or Participant:
    • (2.364M) National Level: (1.15M) NSFC / (1.214M) MSTC
    • (4.40M) Provincial & Regional Level: (900K+2M+1.5M) Shenzhen

Please refer to the resume for a full list of funding details.

1.2.  Patents

Since joining SUSTech, filed 30 patents in total, summarized as follows.

  •   (1) Patents Granted in the United States
  • (13) Patents Granted in China, including 10 invention patents and 3 utility models
  •   (7) Patents Granted with Priority Claims WIPO (PCT)
  •   (9) Patents Pending in China

Please refer to the resume for a full list of patent details.

1.3.  Publications In Total

Published 62 papers so far, including 33 Journal papers and 29 Conference papers. Google Scholar reports a total of 946 citations with h-index = 18 (as of Feb 06, 2024), of which 867 (~92%) citations were reported since 2018. There are also 8 submitted papers currently under review, 3 preprints on ArXiv, and 3 working papers to be submitted in February 2024.

(8) Under review, all as the Corresponding Author:
(1) The International Journal of Robotics Research, JCR Q1 | JCI Q1 (1st Review Received & Replied)
(1) IEEE Transactions on Robotics, JCR Q1 | JCI Q1 (SI on Tactile Robotics)
(1) Soft Robotics, JCR Q1 | JCI Q1
(1) International Journal of Mechanical Sciences, JCR Q1 | JCI Q1
(2) RSS2024
(2) ICARM2024
(3) Working papers to be submitted before the end of February 2024:
(1) Journal of Dentistry
(2) ReMAR2024
(3) ArXiv preprints

1.4.  Published Papers Since Joining SUSTech

Since joining SUSTech in June 2018, published 24 Journal papers and 18 Conference papers.

(15) Journal papers published as the First or Corresponding Author:
(1) Advanced Intelligent Systems (Front Cover), JCR Q1 | JCI Q1
(1) Advanced Intelligent Systems (Back Cover & Editor’s Choice), JCR Q1 | JCI Q1
(1) Journal of Computational Design and Engineering (Editor’s Choice), JCR Q1 | JCI Q1
(1) Mechanism and Machine Theory, JCR Q1 | JCI Q1
(1) Materials & Design, JCR Q1 | JCI Q1
(2) Biomimetics, JCR Q1 | JCI Q1
(4) IEEE Robotics and Automation Letters, JCR Q2 | JCI Q1
(1) IEEE Transactions on Engineering Management, JCR Q2 | JCI Q1
(3) Frontiers in Robotics and AI, JCI Q3
(9) Journal papers published as a Co-Author:
(1) Science Robotics, JCR Q1 | JCI Q1 (222 citations)
(2) IEEE Transactions on Robotics, JCR Q1 | JCI Q1 (61+4 citations)
(1) Soft Robotics, JCR Q1 | JCI Q1 (71 citations)
(1) IEEE/ASME Transactions on Mechatronics, JCR Q1 | JCI Q1 (35 citations)
(2) IEEE Robotics and Automation Letters, JCR Q2 | JCI Q1 (13 citations)
(1) Journal of Dentistry, JCR Q1 | JCI Q1
(1) Robotica, JCR Q3 | JCI Q3
(13) Conference papers published as the First or Corresponding Author:
(1) CASE2023 (Best Healthcare Automation Paper Award), IEEE RAS | CAA B-Class
(1) ICARM2023 (Best Paper Finalist Award), IEEE RAS | CAA A-Class
(1) ICDL2023, IEEE RAS
(1) CoRL2021 (1st Published with SUSTech Affiliation), PMLR
(3) ICRA2021, ICRA2020, IEEE RAS | CAA A-Class
(5) RoboSoft2021, RoboSoft2020, IEEE RAS | CAA C-Class
(1) AIM2020, IEEE RAS | CAA B-Class
(5) Conference papers published as a Co-Author:
(1) ICRA2023, IEEE RAS | CAA A-Class
(1) IROS2023, IEEE RAS | CAA A-Class
(1) DETC/CIE2022, ASME
(2) RCAR2019, RCAR2018, IEEE RAS | CAA C-Class
IEEE RAS: IEEE Robotics and Automation Society
IEEE: Institute of Electrical & Electronics Engineers
ASME: American Society of Mechanical Engineers
PMLR: Proceedings of Machine Learning Research
JCR: Journal Citation Reports (Clarivate)
JCI: Journal Citation Indicator (Clarivate)
CAA A/B/C-Class: A/B/C-Class Conferences Indexed by the Chinese Association of Automation

Overconstrained Design & Learning of Robotic Limbs

I took a design-driven approach to address the 1st Research Question, with learning-based methods providing computational analysis for research insights. In biomechanics and robotics, the robophysical model of human limbs, including arms, legs, and fingers, is commonly formulated as an open-loop serial chain of multiple links connected by revolute joints. However, engineering practice, especially given electric motors’ limited energy density and control bandwidth, usually adopts a closed-loop mechanism to offset the motors’ weight closer to the body for enhanced dynamic performance and ease of design integration. Planar linkages gradually became the dominant design, with demonstrated commercial success in examples such as the Cheetah series by MIT, the Spot series by Boston Dynamics, and the Go series by Unitree; a few exceptions, such as the ANYmal series by ETH, still use open serial limb mechanisms thanks to their custom motor designs. A natural question thus emerges: do planar four-bar linkages and their variations genuinely capture mechanism superiority, from theoretical and practical perspectives, for the design of robotic limbs, and can we leverage insights obtained from design to discover transferable skills between manipulation and locomotion for a further understanding of robot design? I draw insights from three sources of inspiration:

  • Biological Inspiration from Lobster Limbs: lobster limbs are kinematically identical across the body yet evolved into arms and legs that differ in shape and functionality, featuring reduced degrees of freedom in the claws and legs without loss of capability; the lobster earned recognition as the Noble Knight of the Ocean[2] long before humans started walking upright.
  • Theoretical Kinematics of Overconstrained Geometry: overconstrained geometry is a set of special constraints in the kinematic design of closed-loop linkages that makes an otherwise rigid mechanism movable; its kinematic behaviors are well studied with the mathematical tools of modern robotics in 3D space, yet it remains under-explored in contemporary robotics.
  • Data-driven Reinforcement Learning: an emerging field of research that adopts a data-centric approach combining numerical optimization and computational intelligence, with proven success in computer science and robotic engineering.

During my doctoral research on the Kinematics of Overconstrained Linkages and the Design of Reconfigurable Mechanisms[3], a critical approach I proposed was the concept of dimensional reduction to find the underlying relationship between linkages of complex geometry. Following this concept, the Bennett linkage can be viewed as a generic description of four-bar linkages with twisted joint axes movable in 3D space, which degenerates into a planar linkage when all joint axes are required to be parallel, or into a spherical one when all link lengths become zero. Following this understanding, a research gap emerges immediately in adopting overconstrained linkages for designing modern robotic limbs with kinematic superiority in spatial motion.
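For reference, the Bennett linkage’s existence conditions (standard results in the kinematics literature) make this dimensional-reduction view concrete. With link lengths $a_{ij}$ and twist angles $\alpha_{ij}$ between consecutive joint axes, the 4R loop is movable if and only if

    a_{12} = a_{34}, \quad \alpha_{12} = \alpha_{34}, \qquad
    a_{23} = a_{41}, \quad \alpha_{23} = \alpha_{41}, \qquad
    \frac{a_{12}}{\sin\alpha_{12}} = \frac{a_{23}}{\sin\alpha_{23}},

and the degenerate limits of parallel joint axes ($\alpha_{ij} \to 0$ or $\pi$) and vanishing link lengths ($a_{ij} \to 0$) recover the planar and spherical four-bars, respectively.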

Biological Inspiration from Lobster Limbs

The first work we published was an ICRA2021 paper[4] on designing a simple overconstrained leg using the Bennett linkage as the building block, with the electronics borrowed from the Stanford Doggo robot. This paper proposed an overconstrained leg module design with co-axial actuation. We proved its superior performance in walking sideways and turning on the spot even with only two actuators per leg, which is difficult to achieve using common planar linkages such as those in the Stanford Doggo or Ghost Robotics’ Minitaur.

Shihao Feng, Yuping Gu, Weijie Guo, Yuqin Guo, Fang Wan, Jia Pan, and Chaoyang Song* (2021). “An Overconstrained Robotic Leg with Coaxial Quasi-direct Drives for Omni-directional Ground Mobility.” IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May-5 June 2021, pp. 11477-11484. doi: https://doi.org/10.1109/ICRA48506.2021.9561829

Later, in 2022, we published a journal article in Mechanism and Machine Theory[5] with a complete analysis of the theoretical kinematics of overconstrained robotic limbs of 4R, 5R, and even 6R design, proving that this class of overconstrained linkages is generally applicable to robotic limbs in theory and application. In this work, we also proposed a 3D-printable design of the twisted links actuated by servo motors that is compatible with the engineering integration of this class of overconstrained robotic limbs in legged robots. We also found that legged robots designed with overconstrained geometry achieve state-of-the-art performance, realizing the minimum turning radius with a minimal number of degrees of freedom per limb and for the whole robot.

Yuping Gu, Shihao Feng, Yuqin Guo, Fang Wan, Jian S. Dai, Jia Pan, and Chaoyang Song* (2022). “Overconstrained Coaxial Design of Robotic Legs with Omni-directional Locomotion.” Mechanism and Machine Theory, 176:105018. doi: https://doi.org/10.1016/j.mechmachtheory.2022.105018

Theoretical Kinematics of Overconstrained Geometry

In 2023, we published a journal article in the Journal of Computational Design and Engineering[6] on a computational design framework that numerically optimizes the parametric design of the overconstrained limb together with quadrupedal gait planning. Using this method, we systematically compared leg mechanisms designed with planar four-bar, overconstrained four-bar, and spherical four-bar linkages, with and without a foot tip, in terms of their energy efficiency in quadrupedal locomotion. Results show that the overconstrained limb design exhibited the best energy efficiency in walking sideways and turning on the spot, with only a slight reduction in forward walking compared to the standard choice of a planar limb. All results were validated on a robot chassis and triple-actuation module of identical design, differing only in the leg mechanisms.

Yuping Gu, Ziqian Wang, Shihao Feng, Haoran Sun, Haibo Lu, Jia Pan*, Fang Wan*, and Chaoyang Song* (2023). “Computational Design Towards Energy Efficient Optimization in Overconstrained Robotic Limbs.” Journal of Computational Design and Engineering, 10: 1-16. doi: https://doi.org/10.1093/jcde/qwad083
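The full framework couples the parametric linkage design with gait planning and simulation, as detailed in the paper. Purely as an illustration of the outer optimization loop, a bounded numerical search over two hypothetical design variables might look like the sketch below, where the objective is a placeholder standing in for the simulated energy cost per gait cycle, not our actual energy model:

    # Illustrative sketch only: bounded optimization over two linkage parameters.
    import numpy as np
    from scipy.optimize import minimize

    def energy_per_cycle(params):
        link_length, twist = params  # hypothetical design variables
        # Placeholder cost; the real framework evaluates a simulated gait cycle.
        return (link_length - 0.12) ** 2 + 0.5 * (twist - np.pi / 3) ** 2

    result = minimize(energy_per_cycle, x0=[0.10, np.pi / 4],
                      bounds=[(0.05, 0.30), (0.1, np.pi / 2)])
    print(result.x)  # optimal (link_length, twist) under the placeholder cost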

We have recently submitted a manuscript to the IEEE Transactions on Robotics[7], designing and developing a new overconstrained quadruped prototype using the quasi-direct drives (QDD) commonly used in modern legged robots. This upgrade in actuation enables us to explore the dynamic performance of the overconstrained limbs using Model Predictive Control (MPC) algorithms for more challenging tasks, such as omni-directional walking on uneven surfaces, climbing slopes or stairs, and walking under perturbations. Further analysis based on the Cost of Transport also shows the systematic superiority of the overconstrained legs over planar legs in locomotion.

Ronghan Xu#, Jiayi Yin#, Shihao Feng#, Bangchao Huang, Haoran Sun, Jia Pan, Fang Wan*, and Chaoyang Song* (2023). “Overconstrained Robotic Limb with Energy-Efficient, Omni-directional Locomotion.” arXiv, doi: https://doi.org/10.48550/arXiv.2310.09824
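The Cost of Transport referenced here is the standard dimensionless metric for locomotion efficiency,

    \mathrm{CoT} = \frac{P}{m\,g\,v},

where $P$ is the average power drawn by the actuators, $m$ the robot mass, $g$ the gravitational acceleration, and $v$ the average forward speed; lower values indicate more efficient locomotion.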

Data-driven Reinforcement Learning

Upon establishing an understanding of the overconstrained robotic limb through the above works, we formally investigated the 1st Research Question of our lab. In 2023, we published a journal article in Biomimetics[8], reconfiguring the same overconstrained robotic limbs as a gripper for manipulation and as a quadruped for locomotion. Theoretically, we modeled the robotic limbs as graphs processed by a Graph Neural Network and proved the skill transfer between these two tasks using reinforcement learning. To further optimize training efficiency, we adopted a reinforcement learning strategy based on a Multilayer Perceptron and achieved a much-improved success rate in skill transfer. This work expands the knowledge frontier on loco-manipulation transferability with learning-based evidence on a novel platform with overconstrained robotic limbs.

Haoran Sun#, Linhan Yang#, Yuping Gu, Jia Pan*, Fang Wan*, and Chaoyang Song* (2023). “Bridging Locomotion and Manipulation Using Reconfigurable Robotic Limbs via Reinforcement Learning.” Biomimetics, 8(4), 364. doi: https://doi.org/10.3390/biomimetics8040364
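As a minimal sketch of the kind of Multilayer Perceptron policy used in such reinforcement learning pipelines (the observation and action dimensions and the Gaussian policy head below are illustrative assumptions, not the paper’s exact network):

    import torch
    import torch.nn as nn

    class MLPPolicy(nn.Module):
        def __init__(self, obs_dim, act_dim, hidden=(256, 256)):
            super().__init__()
            layers, d = [], obs_dim
            for h in hidden:
                layers += [nn.Linear(d, h), nn.Tanh()]
                d = h
            self.body = nn.Sequential(*layers)
            self.mu = nn.Linear(d, act_dim)            # mean of the action distribution
            self.log_std = nn.Parameter(torch.zeros(act_dim))

        def forward(self, obs):
            z = self.body(obs)
            return self.mu(z), self.log_std.exp()

    policy = MLPPolicy(obs_dim=30, act_dim=8)          # dimensions are illustrative
    mu, std = policy(torch.randn(1, 30))               # e.g., joint angles and velocities
    action = torch.normal(mu, std)                     # sample an action to execute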

We are working on a journal article to be submitted to Nature Machine Intelligence.

Besides the above work led by my lab, I also collaborated on the joint publication of a preprint article[9] in 2021 about a quadruped robot with linearly actuated robotic limbs capable of heavy-payload walking using an adaptive control algorithm.

Omni-Directional Soft Adaptation & Proprioception

I took a learning-based approach to address the 2nd Research Question, based on a novel soft structure I invented[10] with exceptional omni-directional adaptation. After my doctoral study on the theoretical fundamentals of mechanism kinematics, my post-doctoral research on design innovation in product development, and my lecturer appointment on the research and development of robot learning systems, I developed an understanding of the critical importance of interaction in robotics research: the moment of touch, by humans or by robots, holds the truth connecting the physical embodiment of robot hardware, the algorithmic computation behind it, and the human element involved, all within a specific set of environmental scenarios aimed at task completion. While human skin provides multiple dimensions of sensory detail in one place, achieving the same at the tips of robotic limbs remains a challenge. It is interesting to see significant developments in material science with novel sensors. However, when it comes to integration with robotics, I was not sure whether adding a layer of skin is the answer, as the current development of robotic hardware is far from a balanced point in terms of cost, functionality, simplicity, and convenience in the service of human life. The resources required to design fully functional humanoids for home use, such as those depicted in films like I, Robot, are still unrealistic considering the constant upgrades and infrastructure redevelopment required for a sustainable future of humanity on Earth (but maybe acceptable for planet Mars).

Considering that this was a relatively new field of research for my track record, upon joining SUSTech, instead of directly diving into the research details, I filed a series of patent applications revolving around various aspects of this novel soft structure regarding its design principles and application scenarios. Then, I developed my research activities in two parts over the years to establish this work:

DeepClaw Systems for Learning-based Benchmarking

I developed the original DeepClaw system[11] at Monash when I first conducted learning-based research, with all code open-sourced on GitHub[12], but the supported hardware was limited to the UR5, Robotiq FT300, Microsoft Kinect, and Robotiq 3-Finger gripper available at my lab in the SIR Group. After moving to SUSTech, I was able to purchase a series of collaborative robotic arms (i.e., 1*UR5, 1*UR10e, 3*Franka, 1*Denso Cobotta, 1*Yaskawa Motomini, 1*AUBO i5), grippers (i.e., OnRobot RG6, Robotiq HandE, DH AG-95), cameras (i.e., Photoneo M, Microsoft Kinect Azure, Intel RealSense series), and sensors (i.e., OnRobot HEX, ATI Nano17 & Nano25). The original purpose of developing the DeepClaw system was to support the rapid development and reconfiguration of manipulation platforms in a shareable and reproducible manner so that my students and I could conduct research more efficiently. After joining SUSTech, I spent a fair amount of time redesigning and recoding the hardware and software to support most of the robotic devices in my lab, with an updated design using a standardized aluminum extrusion system for rapid reconfiguration. I also worked with my students to develop a browser-based CAD system for rapidly reconfiguring the various standardized parts for system integration, for which we are currently negotiating commercial development through technology transfer with DH Robotics. I piloted the system to support my ME336 Collaborative Robot Learning teaching activities starting in the Spring of 2019[13], then published the DeepClaw system design at AIM2020[14].

Fang Wan, Haokun Wang, Xiaobo Liu, Linhan Yang, and Chaoyang Song* (2020). “DeepClaw: A Robotic Hardware Benchmarking Platform for Learning Object Manipulation.” IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Boston, MA, USA, 6-10 July 2020, pp. 2011-2018. doi: https://doi.org/10.1109/aim43001.2020.9159011
Haokun Wang, Xiaobo Liu, Nuofan Qiu, Ning Guo, Fang Wan, and Chaoyang Song* (2022). “DeepClaw 2.0: A Data Collection Platform for Learning Human Manipulation.” Frontiers in Robotics and AI, 9:787291. doi: https://doi.org/10.3389/frobt.2022.787291

During the integration of the DeepClaw system, or any vision-based robot manipulation platform, the accuracy and efficiency of hand-eye calibration play a significant role in practical research and deployment; we addressed this with a novel flange-based calibration algorithm in a journal article published in 2020[15]. Later, DeepClaw underwent a series of updates and redesigns to suit the changing needs of my lab in research and teaching. For example, we published DeepClaw 2.0 in 2022[16], which incorporates a pair of soft robotic tongs, built with our soft robotic fingers, as a shareable, reproducible, and low-cost interface for rapidly collecting human grasping and object-manipulation data, capturing finger forces and arm movements simultaneously, which was difficult and expensive to achieve using state-of-the-art motion capture systems with force sensors or data gloves. Although we developed this interface for data collection, it still requires a fair amount of installation and configuration that many of our students find challenging, especially when unfamiliar with development and debugging in the Linux environment.

Yujian Dong#, Tianyu Wu#, Yang Xiao, Jinqi Wei, Fang Wan*, and Chaoyang Song* (2023). “Vision-based, Low-cost, Soft Robotic Tongs for Shareable and Reproducible Tactile Learning.” IEEE International Conference on Robotics and Automation (ICRA). (Under Review)
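On the calibration point above: hand-eye calibration is conventionally posed as an AX = XB problem, and the sketch below applies OpenCV’s cv2.calibrateHandEye to synthetic, self-consistent poses. It illustrates only the standard eye-in-hand formulation; the flange-based algorithm in our 2020 article[15] differs in using the robot flange itself as the calibration target.

    # Conventional eye-in-hand AX = XB calibration on synthetic poses.
    import numpy as np
    import cv2
    from scipy.spatial.transform import Rotation as R

    def make_T(rot, t):
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = rot, t
        return T

    # Ground-truth camera-to-gripper transform (to be recovered) and a fixed target.
    X_true = make_T(R.from_euler("xyz", [0.1, -0.2, 0.3]).as_matrix(), [0.02, 0.0, 0.05])
    T_target2base = make_T(R.from_euler("xyz", [0.5, 0.1, -0.4]).as_matrix(), [0.6, 0.1, 0.2])

    Rg, tg, Rt, tt = [], [], [], []
    for i in range(10):  # ten robot poses, as if read from the controller
        T_g2b = make_T(R.random(random_state=i).as_matrix(),
                       np.random.default_rng(i).uniform(-0.5, 0.5, 3))
        # Consistent camera observation of the target for this robot pose.
        T_t2c = np.linalg.inv(X_true) @ np.linalg.inv(T_g2b) @ T_target2base
        Rg.append(T_g2b[:3, :3]); tg.append(T_g2b[:3, 3])
        Rt.append(T_t2c[:3, :3]); tt.append(T_t2c[:3, 3])

    R_est, t_est = cv2.calibrateHandEye(Rg, tg, Rt, tt)
    print(np.allclose(R_est, X_true[:3, :3], atol=1e-5), t_est.ravel())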

During the COVID-19 period, I redesigned a portable version of DeepClaw with my students to fit everything within a fanny pack, and we developed a browser-based interface so that students could plug and play with the data-collection part of learning and research in object manipulation even at home. We submitted a conference paper to ICARM2024[17] to share this work. Another paper, recently accepted at ICDL2023[18], investigates the stiffness distribution of the soft fingertip using vision-based sensing in the same way as the portable DeepClaw design.

Xudong Han, Sheng Liu, Fang Wan* and Chaoyang Song* (2023). “Vision-based Tactile Sensing for an Omni-adaptive Soft Finger.” IEEE International Conference on Development and Learning (ICDL). Macau, China, 9-11 November 2023. doi: https://doi.org/10.1109/ICDL55364.2023.10364455
Fang Wan and Chaoyang Song* (2020). “Flange-Based Hand-Eye Calibration Using a 3D Camera with High Resolution, Accuracy, and Frame Rate.” Frontiers in Robotics and AI, 7:65. doi: https://doi.org/10.3389/frobt.2020.00065
Nuofan Qiu, Fang Wan*, and Chaoyang Song* (2024). “Describing Robots from Design to Learning: Towards an Interactive Lifecycle Representation of Robots.” IEEE International Conference on Advanced Robotics and Mechatronics (ICARM2024). (Under Review)

We also initiated a collaboration with Autodesk and developed a plugin for Fusion360 that automatically converts Robot Description Files from CAD designs to simulation or learning-based environments, submitted as a conference paper to ICARM2024[19]. We also published a conference paper at ICARM2023[20] presenting a reconfigurable benchmarking method, built on DeepClaw’s hardware and software design, that uses the jigsaw puzzle for various robot learning tasks in object manipulation; it was selected as a Best Conference Paper Finalist[21]. Within our lab, we have also custom-designed and developed a dual-arm robot system actuated by quasi-direct drives with reconfigurable joints for the DeepClaw 2.0 robot pedestal, which we plan to integrate further in future research. Through the continuous, iterative design and development of the DeepClaw system, my research team and I have accumulated a rich set of practical skills, reusable tools, and functional prototypes that allow us to conduct further research in this new field, even though it was not reflected in my doctoral or post-doctoral training.

Xiaobo Liu#, Fang Wan#*, Sheng Ge, Haokun Wang, Haoran Sun, and Chaoyang Song (2023). “Jigsaw-based Benchmarking for Learning Robotic Manipulation.” IEEE International Conference on Advanced Robotics and Mechatronics (ICARM). Sanya, China, 8-10 July 2023, pp. 124-130. doi: https://doi.org/10.1109/ICARM58088.2023.10218784
Awards: Best Conference Paper Finalist

Learning-based Omni-Directional Adaptation and Interaction

Using the DeepClaw system, we conducted a series of learning-based studies to investigate the design advantages of the omni-adaptive soft fingertip, spanning three collections of work achieved over the years at SUSTech: 1) Direct Evaluation for Grasp Learning, 2) Optical-based Force and Tactile Sensing, and 3) Vision-based Proprioception from On-Land to Underwater.

Direct Evaluation for Grasp Learning

To directly evaluate the soft, omni-directional adaptive fingertip for grasp learning, we initiated our research with a journal article in IEEE Robotics and Automation Letters[22] in 2020, with a dual-track presentation at ICRA2020. In this work, we proposed a rigid-soft interactive learning method that uses grasping data collected with both conventional rigid fingers and our soft fingers, against both rigid and soft objects, to train a simple neural network for autonomous grasping, achieving an enhanced grasp success rate with a much-reduced amount of training data.

Linhan Yang, Fang Wan, Haokun Wang, Xiaobo Liu, Yujia Liu, Jia Pan, and Chaoyang Song* (2020). “Rigid-Soft Interactive Learning for Robust Grasping.” IEEE Robotics and Automation Letters, 5(2):1720-1727. doi: https://doi.org/10.1109/lra.2020.2969932
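As a purely illustrative sketch of this kind of grasp-outcome learning (the features, labels, and network size below are synthetic placeholders, not the paper’s data or architecture):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Placeholder features, e.g., grasp pose (x, y, theta) plus a finger-stiffness flag.
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic success/failure labels
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)
    print(clf.score(X, y))  # training accuracy on the toy data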

We also published another journal article in IEEE Robotics and Automation Letters[23] in 2020, with a dual-track presentation at RoboSoft2020. In this work, we designed a reconfigurable gripper using our soft fingers and developed a learning-based benchmark for the optimal number of fingers necessary for object grasping. We found data-driven evidence that the 3-finger radial configuration is the most space-saving and cost-effective choice for achieving the best grasp success rate, while the 4-finger radial arrangement is more suitable for higher-payload scenarios. This is the first work providing an in-depth analysis of the design choice of how many fingers are necessary for robotic grasping.

Fang Wan, Haokun Wang, Jiyuan Wu, Yujia Liu, Sheng Ge, and Chaoyang Song* (2020). “A Reconfigurable Design for Omni-adaptive Grasp Learning.” IEEE Robotics and Automation Letters, 5(3):4210-4217. doi: https://doi.org/10.1109/lra.2020.2982059

Later, we also published a conference paper at RoboSoft2021[24] and a journal article[25] in 2021 investigating the optimal design of finger surface patterns for enhanced object grasping in both on-land and underwater scenarios. We sourced design inspiration from lobster claws, which biologically evolved amphibious capabilities in dexterous object grasping. Using a learning-based training network to remove bias in the grasping strategy, we found that the lobster-inspired patterns added to our finger surface provided a much-improved grasping success rate both on land and underwater, and outperformed the best surface pattern design found by the UC Berkeley team in the on-land grasping scenario. This research provided an in-depth understanding of the various design considerations of our soft fingertip for learning-based research in robotic grasping.

Haiyang Jiang, Yonglin Jing, Ning Guo, Weijie Guo, Fang Wan, and Chaoyang Song* (2021). “Lobster-inspired Finger Surface Design for Grasping with Enhanced Robustness,” IEEE International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 12-16 April 2021, pp. 321-326. doi: https://doi.org/10.1109/RoboSoft51838.2021.9479215
Haiyang Jiang, Xudong Han, Yonglin Jing, Ning Guo, Fang Wan and Chaoyang Song* (2022). “Rigid–Soft Interactive Design of a Lobster-Inspired Finger Surface for Enhanced Grasping Underwater.” Frontiers in Robotics and AI, 8:787187. doi: https://doi.org/10.3389/frobt.2021.787187

Besides the above work led by my lab, I also collaborated on a journal article in IEEE Transactions on Robotics[26] in 2023 with Prof. Pan Jia from the University of Hong Kong on motion planning algorithms for object retrieval in cluttered scenes using hierarchical graphs, where the DeepClaw hardware was used for the experimental validation of the algorithm. The lead author, Dr. Tian Hao, was a visiting scholar at my lab for two years while conducting this research from initial ideation to final publication. In a joint journal article in Robotica[27] in 2023 with Prof. Hu Chengzhi at my department, the DeepClaw system was also adopted for experimental validation of a calibrated analytical model for magnetic localization of a wireless capsule endoscope.

Optical-based Force and Tactile Sensing

While the above works use a single sensing modality from external vision, adding a sense of touch provides an extra data dimension for learning the interactive dynamics of our soft robotic fingertip, which we explored by embedding optical fibers inside our design. Our initial attempt was published in a conference paper at RoboSoft2020[28], where we redesigned the soft structure from solid to hollow geometry and inserted optical fibers inside to sense the changes in light signals transmitted through the structure during the finger’s deformation. The resulting design achieved a reasonable ability to perceive object shapes for classification. However, the performance depends on training data collected under specific lighting conditions in the lab, requiring tedious data recollection and model retraining.

Zeyi Yang, Sheng Ge, Fang Wan, Yujia Liu, and Chaoyang Song* (2020). “Scalable Tactile Sensing for an Omni-adaptive Soft Robot Finger.” IEEE International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 15 May-15 July 2020, pp. 572-577. doi: https://doi.org/10.1109/robosoft48309.2020.9116026

To address these design limitations, we first published a journal article in IEEE Robotics and Automation Letters[29] in 2021, with a dual-track presentation at RoboSoft2021, using a similar sensing strategy for a pneumatic actuator, for which we developed an analytical model using volumetric metrics with proprioceptive sensing. Later, we published another journal article in IEEE Robotics and Automation Letters[30] in 2021, with a dual-track presentation at ICRA2021, where we presented an enhanced, optoelectronically innervated design of the soft robotic finger integrated as a 3-finger gripper, and learned accurate estimation of contact position and force irrespective of environmental lighting conditions. While optical integration is capable of providing tactile sensing with high accuracy from an engineering standpoint, the design remains significantly limited: the output is not refined or rich enough to cover the whole finger, the optical components are far too large for an integrated solution, and waterproofing all the electronics involved is neither elegant in design nor effective in practice.

Baiyue Wang, Weijie Guo, Shihao Feng, Hongdong Yi, Fang Wan, and Chaoyang Song* (2021). “Volumetrically Enhanced Soft Actuator with Proprioceptive Sensing.” IEEE Robotics and Automation Letters, 6(3):5284-5291. doi: https://doi.org/10.1109/LRA.2021.3072859
Linhan Yang, Xudong Han, Weijie Guo, Fang Wan, Jia Pan, and Chaoyang Song* (2021). “Learning-based Optoelectronically Innervated Tactile Finger for Rigid-Soft Interactive Grasping.” IEEE Robotics and Automation Letters, 6(2):3817-3824. doi: https://doi.org/10.1109/LRA.2021.3065186

Besides the above work led by my lab, I also conducted multiple collaborations with Prof. Wang Zheng at the University of Hong Kong on soft robotics. In a joint journal article in Soft Robotics[31] in 2018, we proposed the design of a fiber-reinforced origami actuator with an exceptional power-to-weight ratio. In a joint journal article in IEEE Transactions on Robotics[32] in 2019, we presented the feasibility of designing soft origami actuators that are 3D-printable for safe interactions. In a joint conference paper at RCAR2018[33], we developed an analytical model for the global dynamic balancing problem of the classical Acrobot.

Vision-based Proprioception from On-Land to Underwater

All previous work led me to the systematic investigation of a vision-based solution for proprioceptive learning with the soft, omni-adaptive robotic fingertip. We published the original design in a conference paper at CoRL2021[34], the leading conference in robot learning. In this paper, we presented a preliminary implementation of our solution in two ways, one with a fiducial marker fixed inside the finger and the other without; both feature a miniature camera set at the bottom of the finger, capturing visual features of the soft finger’s omni-directional deformations. As a result, we were able to build a neural network to estimate the forces and torques during physical contact at high framerate and accuracy within a simple design, comparable to a force-torque sensor but with added value in omni-directional adaptation, effectively transforming any rigid gripper into an adaptive and tactile solution ready for grasping objects of unstructured geometry. Following this work, we have submitted multiple manuscripts to different journals, addressing various aspects with demonstrated validations to showcase its potential in robot learning.

Fang Wan, Xiaobo Liu, Ning Guo, Xudong Han, Feng Tian, and Chaoyang Song* (2021). “Visual Learning Towards Soft Robot Force Control using a 3D Metamaterial with Differential Stiffness.” Proceedings of the 5th Conference on Robot Learning (CoRL), PMLR 164:1269-1278, 2022. URL: https://proceedings.mlr.press/v164/wan22a
  • We submitted a journal article to the International Journal of Robotics Research[35] in 2023, presenting the soft fingertip as a Soft Polyhedral Network with embedded vision for physical interactions, capable of adaptive kinesthesia and viscoelastic proprioception by learning kinetic features. This design enables passive adaptation to omni-directional interactions, visually captured by a miniature high-speed motion tracking system embedded inside for proprioceptive learning. The results show that the soft network can infer real-time 6D forces and torques with accuracies of 0.25/0.24/0.35 N and 0.025/0.034/0.006 Nm in dynamic interactions. We also incorporate viscoelasticity in proprioception during static adaptation by adding a creep and relaxation modifier to refine the predicted results. The proposed soft network combines simplicity in design, omni-adaptation, and proprioceptive sensing with high accuracy, making it a versatile, low-cost solution for robotics, with more than 1 million use cycles for tasks such as sensitive and competitive grasping and touch-based geometry reconstruction. This study offers new insights into vision-based proprioception for soft robots in adaptive grasping, soft manipulation, and human-robot interaction.
Xiaobo Liu, Xudong Han, Wei Hong, Fang Wan*, and Chaoyang Song* (2023). “Proprioceptive Learning with Soft Polyhedral Networks.” The International Journal of Robotics Research. (Under Review)
  • We submitted a journal article to Advanced Intelligent Systems[36] in 2023 (Accepted), investigating the transferability of grasping knowledge from on-land to underwater via a vision-based soft robotic finger that learns 6D forces and torques (FT) using a Supervised Variational Autoencoder (SVAE); a minimal sketch of this supervised-autoencoder objective follows this list. A high-framerate camera captures the whole-body deformations while the soft robotic finger interacts with physical objects on land and underwater. Results show that the trained SVAE model learned a series of latent representations of the soft mechanics, transferable from ground to water, presenting superior adaptation to changing environments compared with commercial FT sensors. Soft, delicate, and reactive grasping enabled by tactile intelligence enhances the gripper’s underwater interaction with improved reliability and robustness at a much-reduced cost, paving the path for learning-based intelligent grasping to support fundamental scientific discoveries in environmental and ocean research.
Ning Guo, Xudong Han, Xiaobo Liu, Shuqiao Zhong, Zhiyuan Zhou, Jian Lin, Jiansheng Dai, Fang Wan*, and Chaoyang Song* (2023). “Autoencoding a Soft Touch to Learn Grasping from On-land to Underwater.” Advanced Intelligent Systems, 2300382. doi: https://doi.org/10.1002/aisy.202300382
  • We submitted a journal article to Soft Robotics[37] in 2023, presenting the SeeThruFinger, a soft robotic finger with in-finger vision for multi-modal perception, including visual and tactile sensing, for geometrically adaptive and real-time reactive grasping. Multi-modal perception of intrinsic and extrinsic interactions is critical in building intelligent robots that learn. Instead of adding various sensors for different modalities, a preferred solution is to integrate them into one elegant and coherent design, which is a challenging task. This study leverages the Soft Polyhedral Network design as a robotic finger capable of omni-directional adaptation, with an unobstructed view of the finger’s spatial deformation from the inside. By embedding a miniature camera underneath, we achieve visual perception of the external environment by inpainting the finger mask using E2FGVI, which can be used for object detection in downstream grasping tasks. After contact with objects, we use real-time object segmentation algorithms, such as XMem, to track the soft finger’s spatial deformations. We also learned a Supervised Variational Autoencoder to enable tactile sensing of 6D forces and torques for reactive grasping. As a result, we achieve multi-modal perception, including visual perception and tactile sensing, and soft, adaptive object grasping within a single vision-based soft finger design compatible with multi-fingered robotic grippers.
Fang Wan, and Chaoyang Song* (2023). “SeeThruFinger: See and Grasp Anything with a Soft Touch.” Soft Robotics. (Under Review)
  • We submitted a journal article to Biomimetics[38] in 2023, presenting a novel design of the soft finger that integrates inner vision with kinesthetic sensing to estimate object pose, inspired by human fingers. The soft finger has a flexible skeleton and skin that adapt to different objects, and the skeleton deformations during interaction provide contact information captured by images from the inner camera. The proposed framework is an end-to-end method that uses raw images from soft fingers to estimate the in-hand object pose. It consists of an encoder for processing kinesthetic information and an estimator of object pose and category. Tested on seven objects, the framework achieved pose errors of 2.02 mm and 11.34 degrees and a classification accuracy of 99.05%.
Xiaobo Liu, Xudong Han, Ning Guo, Fang Wan*, and Chaoyang Song* (2023). “Bio-inspired Proprioceptive Touch of a Soft Finger with Inner-Finger Kinesthetic Perception.” Biomimetics, 8(6), 501. doi: https://doi.org/10.3390/biomimetics8060501
  • We submitted a conference paper to ICRA2024[39], presenting the design of a soft robotic finger with an active surface on an omni-adaptive network structure, which can be easily installed on existing grippers to achieve stability and dexterity for in-hand manipulation. The system’s functional surfaces initially transfer the object from the less compliant fingertip segment to the middle part of the finger, which has superior adaptability. Despite the omni-directional deformation of the finger, in-hand manipulation can still be executed with controlled active surfaces. We characterized the soft finger’s stiffness distribution and simplified models to assess the feasibility of repositioning and reorienting a grasped object. A set of in-hand manipulation experiments with the proposed fingers demonstrated the dexterity and robustness of the strategy.
Sen Li, Fang Wan*, and Chaoyang Song* (2023). “Active Surface with Passive Omni-Directional Adaptation of Soft Polyhedral Fingers for In-Hand Manipulation.” IEEE/IFToMM International Conference on Reconfigurable Mechanisms and Robots (ReMAR2024). (To be submitted)
  • We submitted another journal article to the International Journal of Mechanical Sciences[40], presenting a new vision-based proprioceptive soft finger with shape- and touch-estimation abilities. The finger design, inherited and improved from the Fin Ray Effect, enhances adaptability in bending, twisting, and enveloping during interactions. We developed vision-based proprioceptive sensing to estimate shape deformation and touch position. The shape-estimation approach is based on constrained geometric optimization, which treats the poses of ArUco markers, obtained by a monocular camera beneath the finger, as aggregated multi-handles (AMHs) to drive the deformation of the finger mesh. A data-driven learning model is also proposed to estimate touch position from the markers’ pose data, achieving reliable results with R² scores of 0.9657, 0.9464, and 0.9406 along the x, y, and z directions. Another task on dynamic touch-path sensing also shows the robustness of the proposed method. The soft finger’s superior proprioceptive sensing capability is ideal for precise and dexterous robotic manipulation tasks.
Ning Guo#, Xudong Han#, Shuqiao Zhong, Zhiyuan Zhou, Jian Lin, Fang Wan*, and Chaoyang Song* (2023). “Reconstructing Touch for Metaverse by Visual Proprioception.” International Journal of Mechanical Sciences. (Under Review)
  • Currently, we are working on a journal article to be submitted to the special issue on tactile robotics in the IEEE Transactions on Robotics. The control of soft robots relies on accurate state estimation of complex soft deformations with infinite degrees of freedom. Compared to exteroceptive methods of state estimation, proprioceptive approaches use embedded sensors, such as optic fibers and strain sensors, to reconstruct the 3-dimensional shape; they are more robust and transferable under environmental change but suffer from complex fabrication and poor durability, as the sensor and the delicate soft body are usually integrated inseparably. It remains a challenge to harvest both benefits. In this paper, we propose a novel approach to real-time state estimation of an omni-adaptive soft finger using a single in-finger monocular camera. The modularized design of the sensorized finger is shareable, easy to fabricate, and sustainable, as the camera sensor is detachable and reusable. We describe a volumetrically discretized model of the soft finger and use the geometric constraints captured by the camera to find the optimal estimate of the deformed shape. The approach is benchmarked against a motion tracking system with sparse markers and a haptic device with dense measurements, both showing state-of-the-art accuracy. More importantly, the state estimation is robust in both on-land and underwater environments, as we demonstrate through its use for underwater object shape sensing.
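As forward-referenced above, here is a minimal sketch of the supervised variational autoencoder objective from the on-land-to-underwater grasping work: the network reconstructs its input while a supervised head regresses the 6D force/torque label from the latent code. The input representation and layer sizes are assumptions for illustration; the published implementation may differ.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SVAE(nn.Module):
        def __init__(self, in_dim=64, latent_dim=16, ft_dim=6):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
            self.mu = nn.Linear(128, latent_dim)
            self.logvar = nn.Linear(128, latent_dim)
            self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))
            self.ft_head = nn.Linear(latent_dim, ft_dim)  # supervised 6D F/T regressor

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            return self.dec(z), self.ft_head(z), mu, logvar

    def svae_loss(x, ft, recon, ft_pred, mu, logvar, beta=1e-3, lam=1.0):
        rec = F.mse_loss(recon, x)                                      # reconstruction
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL regularizer
        sup = F.mse_loss(ft_pred, ft)                                   # supervised F/T term
        return rec + beta * kld + lam * sup

    model = SVAE()
    x, ft = torch.randn(8, 64), torch.randn(8, 6)  # dummy batch of features and labels
    loss = svae_loss(x, ft, *model(x))
    loss.backward()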

Besides the above work led by my lab, I also conducted multiple collaborations on tactile robotics. In a joint journal article in Science Robotics[41] in 2021 with Prof. Pan Jia from the University of Hong Kong and Prof. Shen Yajing from the Hong Kong Polytechnic University, we presented a soft magnetic skin with super-resolution tactile sensing and force self-decoupling using machine learning. The lead author, Dr. Yan Youcan, was a visiting scholar at my lab for two years as a doctoral student and conducted this research from sensor development to journal publication. Follow-up research was published in a joint journal article in IEEE Robotics and Automation Letters[42] in 2022, with a dual-track presentation at ICRA2022, presenting further details on a tactile super-resolution model for the soft magnetic skin. In a joint publication with Dr. Huang Bidan from the Tencent Robotics-X Lab, we published a journal article in IEEE Robotics and Automation Letters[43] in 2023, with a dual-track presentation at IROS2023, presenting the TacGNN framework for learning tactile-based in-hand manipulation using a hierarchical Graph Neural Network without machine vision. The lead author, Mr. Yang Linhan, is a doctoral student jointly supervised by me and Prof. Pan Jia at the University of Hong Kong.

Reconfigurable Design of the Underwater SuperLimb

I took a design-and-learning approach to address the 3rd Research Question via collaboration with Prof. Harry Asada at MIT and Prof. Fu Chenglong at SUSTech, through a grant awarded by the SUSTech-MIT Joint Centers on Mechanical Engineering Research and Education to develop novel wearable technologies for the good of humanity. Before this collaboration, I had initiated a few independent research projects at Monash as the Principal Investigator, proposing a rigid-soft hybrid actuation design inspired by the lobster abdomen’s interplay between exterior shells and internal muscles. Published results include a journal article in the Journal of Micromechanics and Microengineering[44] in 2017 on the theoretical kinematics of this novel actuator, and a series of conference papers: an ICRA2017[45] paper on the actuator’s bio-inspired design, an IDETC2017[46] paper on the mechanics modeling, another ICRA2017[47] paper on the design integration as a robotic glove for hand rehabilitation, and a ROBIO2017[48] paper on the design integration as a powerful lower-limb exoskeleton for gait augmentation. Through the platform jointly created by SUSTech and MIT, I revisited my post-doctoral research and published a journal article in IEEE Transactions on Engineering Management[49] in 2022 on product design and development by mining data collected from crowd-funding platforms. I also established new research directions on supernumerary robotic limbs, an emerging class of wearable robots that combines the design of robotic limbs wearable on the human body with research on intention-recognition mechanisms for human-robot interaction. I developed my research activities in two aspects over the years to establish this work:

SuperLimbs for the Elderly with Ambient Intelligence

Despite my track record on wearable robots, it was still challenging in this project to explore new research frontiers while finding suitable alignment with my lab’s research strengths. During the initial discussion of this collaboration, Prof. Asada presented the sit-to-stand problem faced by the elderly with reduced motor function in the lower limbs due to stroke or aging; falling accidents during the sit-to-stand transition are statistically reported as a leading cause of accidental death among the elderly. I related to this problem instantly, as my grandfather passed away for exactly this reason: a fall during a sit-to-stand transition. During IROS2019, Prof. Asada hosted the 1st Workshop on Supernumerary Robotic Limbs and invited me as a guest speaker to present our collaborative work on the SuperCane system[50]. Later, in a conference paper published at RoboSoft2020[51], we formally presented this explorative design of an ambient SuperLimb system that involves a pneumatically driven robotic cane for at-home motion assistance, an inflatable vest for compliant human-robot interaction, and a depth sensor for ambient intention detection. The proposed system aims to provide active assistance during the sit-to-stand transition for at-home use by the elderly at the bedside, in the chair, and on the toilet. We proposed a modified biomechanical model with a linear cane robot for closed-loop control implementation and validated the design feasibility of the proposed ambient SuperLimb system, including the biomechanical model. Our results showed advantages in reducing lower-limb effort and elderly fall risk, yet the detection accuracy using depth sensing and adjustments to the model still require further research. We summarized empirical guidelines to support the ambient design of elderly-assistive SuperLimb systems for lower-limb functional augmentation.

Xia Wu, Haiyuan Liu, Ziqi Liu, Mingdong Chen, Fang Wan, Chenglong Fu, Harry Asada, Zheng Wang, and Chaoyang Song* (2020). “Robotic Cane as a Soft SuperLimb for Elderly Sit-to-Stand Assistance.” IEEE International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 15 May-15 July 2020, pp. 599-606. doi: https://doi.org/10.1109/robosoft48309.2020.9116028

SuperLimbs for Divers with Multi-Modal Interactions

During the second stage of this collaboration, I proposed a new direction, pushing the concept of SuperLimbs to the underwater scenario as an intelligent wearable, based on my diving experience in Singapore, Malaysia, Indonesia, Thailand, and New Zealand while I was studying and working in Singapore. After two years of research and development, in a journal article published in Advanced Intelligent Systems[52] in 2023, we presented the design of an underwater SuperLimb as a wearable robot that provides divers with mobility assistance and frees their hands for manipulating tools underwater. The wearable design features a thrust-vectoring system with two 3D-printed, waterproofed modules. The module, with adjustable connections and strapping holes, is designed for reconfiguration to multiple purposes: regular use as an underwater SuperLimb for divers, manual operation as a handheld glider for swimmers, combination with an amphibious legged robot as a quadruped SuperLimb, and coupling as a dual-unit autonomous underwater vehicle for underwater navigation. The kinematics and dynamics of the prototype and all of its reconfigured modes were developed, and a sliding-mode controller was introduced to achieve stable simulation in PyBullet. Field tests further support the feasibility of the underwater SuperLimb when worn by a test diver in a swimming pool. As the first underwater SuperLimb presented in the literature, this study opens new doors for supernumerary robotic limbs in underwater scenarios with multi-functional reconfiguration.

Jiayu Huo#, Jingran Wang#, Yuqin Guo, Wanghongjie Qiu, Mingdong Chen, Harry Asada, Fang Wan*, and Chaoyang Song* (2023). “Reconfigurable Design and Modeling of an Underwater Superlimb for Diving Assistance.” Advanced Intelligent Systems, 2300245. doi: https://doi.org/10.1002/aisy.202300245
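For orientation on the control technique, a generic first-order sliding-mode law on a unit-mass double integrator (standing in for one controlled degree of freedom) looks like the sketch below; this illustrates the method only and is not the paper’s controller or vehicle dynamics:

    import numpy as np

    lam, k, dt = 2.0, 5.0, 0.01      # surface slope, switching gain, time step
    x, xd, x_ref = 0.0, 0.0, 1.0     # position, velocity, setpoint
    for _ in range(2000):
        e, ed = x - x_ref, xd
        s = ed + lam * e                        # sliding surface s = de/dt + lam * e
        u = -lam * ed - k * np.tanh(s / 0.05)   # tanh smooths sign() to curb chattering
        xd += u * dt
        x += xd * dt
    print(round(x, 3))  # converges near the 1.0 setpoint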

A follow-up to this work was recently accepted as a conference paper at CASE2023[53], where we presented a multi-modal mechanism for recognizing human intentions while diving, aiming to achieve natural human-robot interactions through an underwater SuperLimb for diving assistance. The aquatic environment severely limits divers’ capabilities for expressing intentions, which becomes more challenging when they intend to operate tools while keeping control of body posture in 3D with various diving suits and gear. The current literature on underwater intention recognition is limited, impeding the development of intelligent wearable systems for underwater human-robot interaction. This paper presented a novel solution for simultaneously detecting head motion and throat vibration underwater in a compact, wearable design. Experimental results show that, using machine learning algorithms, we achieved high performance in integrating these two modalities to translate human intentions into robot control commands for an underwater SuperLimb system. These results pave the way for future developments in underwater intention recognition and underwater human-robot interaction with supernumerary support. The paper won the Best Healthcare Automation Paper Award[54] at CASE2023, held in New Zealand. Currently, we are extending this intention-recognition mechanism from underwater to on-land scenarios, with successful implementations on robotic arms and cars beyond the underwater SuperLimb, and we are preparing a journal article for submission to the IEEE Transactions on Robotics.

Yuqin Guo#, Rongzheng Zhang#, Wanghongjie Qiu, Harry Asada, Fang Wan* and Chaoyang Song* (2023). “Underwater Intention Recognition using Head Motion and Throat Vibration for Supernumerary Robotic Assistance.” IEEE International Conference on Automation Science and Engineering (CASE). Auckland, New Zealand, on 26-30 August 2023. doi: https://doi.org/10.1109/CASE56687.2023.10260480
Award: Best Healthcare Automation Paper Award
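
Although the paper's recognition pipeline is specific to its wearable hardware, the underlying idea of feature-level fusion of the two modalities can be sketched generically. In the toy example below, all features, labels, and data are synthetic placeholders (the real system extracts features from head-motion and throat-vibration sensors), and the classifier choice is an assumption for illustration:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Feature-level fusion sketch for two-modality intention recognition.
    # All data below are synthetic placeholders, not recordings from the paper.
    rng = np.random.default_rng(0)
    n = 600
    head_feats = rng.normal(size=(n, 6))    # e.g., head-motion statistics (hypothetical)
    throat_feats = rng.normal(size=(n, 4))  # e.g., vibration band energies (hypothetical)
    labels = rng.integers(0, 4, size=n)     # four intention classes (hypothetical)

    # Shift the synthetic features by class so the toy problem is learnable.
    head_feats += labels[:, None] * 0.8
    throat_feats += labels[:, None] * 0.5

    X = np.hstack([head_feats, throat_feats])  # early (feature-level) fusion
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

    # A recognized intention then maps to a robot command (names hypothetical):
    commands = {0: "ascend", 1: "descend", 2: "turn_left", 3: "turn_right"}
    print(commands[int(clf.predict(X_te[:1])[0])])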

Besides the above work led by my lab, I have also collaborated widely on wearable robotics. In a journal article published in IEEE/ASME Transactions on Mechatronics[55] in 2020 with Prof. Wang Zheng at my department, we present a soft robotic SuperFlipper inspired by the Otariidae, built with fabric kirigami and origami. In a conference paper published at DETC2022[56], also with Prof. Wang Zheng, we present a wearable exoskeleton for safe motion assistance with variable stiffness modulation. In a conference paper published at RCAR2019[57] with Prof. Hu Xinyao from Shenzhen University, we present a soft ankle wearable device powered by pneumatic artificial muscles. In a preprint article[58] published in 2020 with Prof. Fu Chenglong at my department for the AIM2020 Workshop on Supernumerary Robotic Limbs, we present an investigation into mapping human muscle forces to a SuperArm for overhead task assistance.

Future Research

Through the above research conducted with my lab at SUSTech, we have established an in-depth understanding of the fundamental theories, supported by applied research and proven prototypes, to address the three research questions through an interdisciplinary approach combining bionic design and robot learning. This work contributes to our research theme of advanced robotics with cross-domain capabilities in limb mechanisms, operational scenarios, and interactive skills.

  • Through the research on the Overconstrained Design & Learning of Robotic Limbs, we have identified a learning-based skill transfer between the fingers’ object manipulation and the legs’ terrain locomotion via the parametric and modular reconfiguration of a novel design of overconstrained robotic limbs, paving the way towards an exciting research field of Overconstrained Robotics, where the transferrable skills could be achieved via Overconstrained Limbs for energy-efficient motion planning, Graph Neural Networks for topologically explainable RL in robot modeling, and Multilayer Perceptrons for computationally efficient RL in Sim2Real implementation.
  • Through the research on the Omni-Directional Soft Adaptation & Proprioception, we have developed a vision-based proprioceptive learning method leveraging our patented design of a class of soft, omni-adaptive networks as fingertips, providing accurate, real-time, and robust 6D force-torque sensing while enabling adaptation to unstructured interactions at a much-reduced integration cost, and generating a rich set of interaction data on the surface of a physical touch in both on-land and underwater scenarios (a minimal sketch of this proprioceptive regression follows this list).
  • Through the research on the Reconfigurable Design of the Underwater SuperLimb, we have prototyped a multi-functional system within a single design that enables various interactions with both human and robot agents, using our award-winning methods in multi-modal intention recognition combining throat vibration and head motion in both on-land and underwater scenarios.
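
As a concrete illustration of the proprioceptive regression referenced in the second item above, the sketch below maps tracked in-finger marker displacements to a 6D wrench (Fx, Fy, Fz, Tx, Ty, Tz) with a small neural network. The data, network size, and linear ground-truth model are synthetic stand-ins for illustration, not our calibrated sensor model:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Toy vision-based proprioception: regress a 6D wrench from the flattened
    # 3D displacements of markers tracked by an in-finger camera.
    # Synthetic data only; the real model is trained on calibrated recordings.
    rng = np.random.default_rng(1)
    n_samples, n_markers = 2000, 12
    X = rng.normal(scale=0.5, size=(n_samples, n_markers * 3))  # marker displacements
    W = rng.normal(size=(n_markers * 3, 6))                     # pretend linear map
    y = X @ W + rng.normal(scale=0.05, size=(n_samples, 6))     # noisy 6D wrenches

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=1).fit(X_tr, y_tr)
    print(f"R^2 on held-out wrenches: {model.score(X_te, y_te):.3f}")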

Our current research has yet to integrate the above findings into a single robot design, which is what we plan to pursue in future work. Below is an outlook of our research at BionicDL, aiming at the Bionic Design and Learning of SOPHIE, i.e., the SUSTech Lobster Robot for Learning in Amphibious Environment.

Concept drawing by Mr. Chen Mingdong, a former undergraduate at BionicDL who majored in Mechanical Engineering and is currently a Master's student in Industrial Design at Brown University.

[1] The original lab name was the Sustainable + Intelligent Robotics Group when I started independent research at Monash University as a Principal Investigator. I later updated the lab name to the Bionic Design and Learning Lab because the newly formed SUSTech Institute of Robotics shares the same acronym (SIR). However, I think the original name better captures the research aim of my lab, as I believe a sustainable approach combining computational intelligence is of fundamental importance to the future of modern robotics: designing robots with a balanced consideration of critical functionality and system complexity within limited resources for a wide range of interactions. The lab's website URL combines Monash University's motto in Latin (Ancora Imparo), meaning 'I am still learning', with my lab acronym at Monash (SIR). I have used AncoraSIR as my lab URL ever since and carried it over to BionicDL at SUSTech to avoid registering another URL.

[2] https://www.youtube.com/watch?v=tMDw57CWH7U

[3] https://bionicdl.ancorasir.com/wp-content/uploads/2023/09/2013-T-NTU-KinematicsDesign.pdf

[4] Shihao Feng, Yuping Gu, Weijie Guo, Yuqin Guo, Fang Wan, Jia Pan, and Chaoyang Song* (2021). “An Overconstrained Robotic Leg with Coaxial Quasi-direct Drives for Omni-directional Ground Mobility.” IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May-5 June 2021, pp. 11477-11484. doi: https://doi.org/10.1109/ICRA48506.2021.9561829

[5] Yuping Gu, Shihao Feng, Yuqin Guo, Fang Wan, Jian S. Dai, Jia Pan, and Chaoyang Song* (2022). “Overconstrained Coaxial Design of Robotic Legs with Omni-directional Locomotion.” Mechanism and Machine Theory, 176:105018. doi: https://doi.org/10.1016/j.mechmachtheory.2022.105018

[6] Yuping Gu, Ziqian Wang, Shihao Feng, Haoran Sun, Haibo Lu, Jia Pan*, Fang Wan*, and Chaoyang Song* (2023). “Computational Design Towards Energy Efficient Optimization in Overconstrained Robotic Limbs.” Journal of Computational Design and Engineering, 10: 1-16. doi: https://doi.org/10.1093/jcde/qwad083

[7] Ronghan Xu#, Jiayi Yin#, Shihao Feng#, Bangchao Huang, Haoran Sun, Jia Pan, Fang Wan*, and Chaoyang Song* (2023). “Overconstrained Robotic Limb with Energy-Efficient, Omni-directional Locomotion.” IEEE Transactions on Robotics. (Under Review)

[8] Haoran Sun#, Linhan Yang#, Yuping Gu, Jia Pan*, Fang Wan*, and Chaoyang Song* (2023). “Bridging Locomotion and Manipulation Using Reconfigurable Robotic Limbs via Reinforcement Learning.” Biomimetics, 8(4), 364. doi: https://doi.org/10.3390/biomimetics8040364

[9] Bingchen Jin, Yueheng Zhou, Ye Zhao, Ming Liu, Chaoyang Song, Jianwen Luo (2021). “An Adaptive Control Algorithm for Quadruped Locomotion with Proprioceptive Linear Legs.” arXiv, doi: https://doi.org/10.48550/arXiv.2107.12482

[10] https://patents.google.com/patent/US11467594B2/en?oq=US+11467594-B2

[11] Fang Wan and Chaoyang Song* (2018). “A Neural Network with Logical Reasoning based on Auxiliary Inputs.” Frontiers in Robotics and AI, 5:86. doi: https://doi.org/10.3389/frobt.2018.00086

[12] https://github.com/bionicdl-sustech/DeepClaw

[13] https://me336.ancorasir.com/?page_id=131

[14] Fang Wan, Haokun Wang, Xiaobo Liu, Linhan Yang, and Chaoyang Song* (2020). “DeepClaw: A Robotic Hardware Benchmarking Platform for Learning Object Manipulation.” IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Boston, MA, USA, 6-10 July 2020, pp. 2011-2018. doi: https://doi.org/10.1109/aim43001.2020.9159011

[15] Fang Wan and Chaoyang Song* (2020). “Flange-Based Hand-Eye Calibration Using a 3D Camera with High Resolution, Accuracy, and Frame Rate.” Frontiers in Robotics and AI, 7:65. doi: https://doi.org/10.3389/frobt.2020.00065

[16] Haokun Wang, Xiaobo Liu, Nuofan Qiu, Ning Guo, Fang Wan, and Chaoyang Song* (2022). “DeepClaw 2.0: A Data Collection Platform for Learning Human Manipulation.” Frontiers in Robotics and AI, 9:787291. doi: https://doi.org/10.3389/frobt.2022.787291

[17] Yujian Dong#, Tianyu Wu#, Yang Xiao, Jinqi Wei, Fang Wan*, and Chaoyang Song* (2023). “Vision-based, Low-cost, Soft Robotic Tongs for Shareable and Reproducible Tactile Learning.” IEEE International Conference on Robotics and Automation (ICRA). (Under Review)

[18] Xudong Han, Sheng Liu, Fang Wan* and Chaoyang Song* (2023). “Vision-based Tactile Sensing for an Omni-adaptive Soft Finger.” IEEE International Conference on Development and Learning (ICDL). Macau, China, on 9-11 November 2023 (Accepted).

[19] Nuofan Qiu, Fang Wan*, and Chaoyang Song* (2023). “Describing Robots from Design to Learning: Towards an Interactive Lifecycle Representation of Robots.” IEEE International Conference on Robotics and Automation (ICRA). (Under Review)

[20] Xiaobo Liu#, Fang Wan#*, Sheng Ge, Haokun Wang, Haoran Sun, and Chaoyang Song (2023). “Jigsaw-based Benchmarking for Learning Robotic Manipulation.” IEEE International Conference on Advanced Robotics and Mechatronics (ICARM). Sanya, China, 8-10 July 2023, pp. 124-130. doi: https://doi.org/10.1109/ICARM58088.2023.10218784

[21] https://bionicdl.ancorasir.com/wp-content/uploads/2023/08/2023-C-ICARM-–-JigsawLearning-Award.pdf

[22] Linhan Yang, Fang Wan, Haokun Wang, Xiaobo Liu, Yujia Liu, Jia Pan, and Chaoyang Song* (2020). “Rigid-Soft Interactive Learning for Robust Grasping.” IEEE Robotics and Automation Letters, 5(2):1720-1727. doi: https://doi.org/10.1109/lra.2020.2969932 (Dual-track Presentation at ICRA 2020)

[23] Fang Wan, Haokun Wang, Jiyuan Wu, Yujia Liu, Sheng Ge, and Chaoyang Song* (2020). “A Reconfigurable Design for Omni-adaptive Grasp Learning.” IEEE Robotics and Automation Letters, 5(3):4210-4217. doi: https://doi.org/10.1109/lra.2020.2982059 (Dual-track Presentation at RoboSoft 2020)

[24] Haiyang Jiang, Yonglin Jing, Ning Guo, Weijie Guo, Fang Wan, and Chaoyang Song* (2021). “Lobster-inspired Finger Surface Design for Grasping with Enhanced Robustness,” IEEE International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 12-16 April 2021, pp. 321-326. doi: https://doi.org/10.1109/RoboSoft51838.2021.9479215

[25] Haiyang Jiang, Xudong Han, Yonglin Jing, Ning Guo, Fang Wan and Chaoyang Song* (2022). “Rigid–Soft Interactive Design of a Lobster-Inspired Finger Surface for Enhanced Grasping Underwater.” Frontiers in Robotics and AI, 8:787187. doi: https://doi.org/10.3389/frobt.2021.787187

[26] Hao Tian, Chaoyang Song, Changbo Wang, Xinyu Zhang, and Jia Pan* (2022). “Sampling-Based Planning for Retrieving Near-Cylindrical Objects in Cluttered Scenes Using Hierarchical Graphs,” IEEE Transactions on Robotics, 39(1):165-182, Feb. 2023. doi: https://doi.org/10.1109/TRO.2022.3191596

[27] You Li, Zhuokang Huang, Xiaobo Liu, Yu Jie, Chaoyang Song, and Chengzhi Hu* (2023). “Calibrated Analytical Model for Magnetic Localization of Wireless Capsule Endoscope based on Onboard Sensing.” Robotica, 41, pp. 1500–1514. doi: https://doi.org/10.1017/S0263574722001849

[28] Zeyi Yang, Sheng Ge, Fang Wan, Yujia Liu, and Chaoyang Song* (2020). “Scalable Tactile Sensing for an Omni-adaptive Soft Robot Finger.” IEEE International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 15 May-15 July 2020, pp. 572-577. doi: https://doi.org/10.1109/robosoft48309.2020.9116026

[29] Baiyue Wang, Weijie Guo, Shihao Feng, Hongdong Yi, Fang Wan, and Chaoyang Song* (2021). “Volumetrically Enhanced Soft Actuator with Proprioceptive Sensing.” IEEE Robotics and Automation Letters, 6(3):5284-5291. doi: https://doi.org/10.1109/LRA.2021.3072859 (Dual-track Presentation at RoboSoft2021)

[30] Linhan Yang, Xudong Han, Weijie Guo, Fang Wan, Jia Pan, and Chaoyang Song* (2021). “Learning-based Optoelectronically Innervated Tactile Finger for Rigid-Soft Interactive Grasping.” IEEE Robotics and Automation Letters, 6(2):3817-3824. doi: https://doi.org/10.1109/LRA.2021.3065186 (Dual-track Presentation at ICRA2021)

[31] Juan Yi, Xiaojiao Chen, Chaoyang Song, and Zheng Wang* (2018). “Fiber-Reinforced Origamic Robotic Actuator.” Soft Robotics, 5(1):81–92. doi: https://doi.org/10.1089/soro.2016.0079

[32] Juan Yi, Xiaojiao Chen, Chaoyang Song, Jianshu Zhou, Yujia Liu, Sicong Liu, and Zheng Wang* (2019). “Customizable Three-Dimensional-Printed Origami Soft Robotic Joint with Effective Behavior Shaping for Safe Interactions.” IEEE Transactions on Robotics, 35(1):114-123. doi: https://doi.org/10.1109/tro.2018.2871440

[33] Xiaojiao Chen, Tommy Hu, Chaoyang Song, and Zheng Wang* (2018). “Analytical Solution to Global Dynamic Balance Control of the Acrobot.” IEEE International Conference on Real-time Computing and Robotics (RCAR), Kandima, Maldives, 1-5 August 2018, pp. 405-410. doi: https://doi.org/10.1109/rcar.2018.8621827

[34] Fang Wan, Xiaobo Liu, Ning Guo, Xudong Han, Feng Tian, and Chaoyang Song* (2021). “Visual Learning Towards Soft Robot Force Control using a 3D Metamaterial with Differential Stiffness.” Proceedings of the 5th Conference on Robot Learning (CoRL), PMLR 164:1269-1278, 2022. URL: https://proceedings.mlr.press/v164/wan22a

[35] Xiaobo Liu, Xudong Han, Wei Hong, Fang Wan*, and Chaoyang Song* (2023). “Proprioceptive Learning with Soft Polyhedral Networks.” The International Journal of Robotics Research. (Under Review)

[36] Ning Guo, Xudong Han, Xiaobo Liu, Shuqiao Zhong, Zhiyuan Zhou, Jian Lin, Jiansheng Dai, Fang Wan*, and Chaoyang Song* (2023). “Autoencoding a Soft Touch to Learn Grasping from On-land to Underwater.” Advanced Intelligent Systems, 2300382. doi: https://doi.org/10.1002/aisy.202300382

[37] Fang Wan, and Chaoyang Song* (2023). “SeeThruFinger: See and Grasp Anything with a Soft Touch.” Soft Robotics. (Under Review)

[38] Xiaobo Liu, Xudong Han, Ning Guo, Fang Wan*, and Chaoyang Song* (2023). “Bio-inspired Proprioceptive Touch of a Soft Finger with Inner-Finger Kinesthetic Perception.” Biomimetics. (Accepted)

[39] Sen Li, Fang Wan*, and Chaoyang Song* (2023). “Active Surface with Passive Omni-Directional Adaptation of Soft Polyhedral Fingers for In-Hand Manipulation.” IEEE International Conference on Robotics and Automation (ICRA). (Under Review)

[40] Ning Guo#, Xudong Han#, Shuqiao Zhong, Zhiyuan Zhou, Jian Lin, Fang Wan*, and Chaoyang Song* (2023). “Reconstructing Touch for Metaverse by Visual Proprioception.” Advanced Engineering Informatics. (Under Review)

[41] Youcan Yan, Zhe Hu, Zhengbao Yang, Wenzhen Yuan, Chaoyang Song, Jia Pan*, and Yajing Shen* (2021). “Soft magnetic skin for super-resolution tactile sensing with force self-decoupling.” Science Robotics, 6(51): eabc8801. doi: https://doi.org/10.1126/scirobotics.abc8801

[42] Youcan Yan, Yajing Shen, Chaoyang Song, and Jia Pan* (2022). “Tactile Super-Resolution Model for Soft Magnetic Skin.” IEEE Robotics and Automation Letters, 7(2):2589-2596. doi: https://doi.org/10.1109/LRA.2022.3141449 (Dual-track Presentation at ICRA2022)

[43] Linhan Yang, Bidan Huang*, Qingbiao Li, Ya-Yen Tsai, Wang Wei Lee, Chaoyang Song, and Jia Pan (2023). “TacGNN: Learning Tactile-based In-hand Manipulation with a Blind Robot using Hierarchical Graph Neural Network.” IEEE Robotics and Automation Letters, 8(6): 3605-3612. doi: https://doi.org/10.1109/LRA.2023.3264759 (Dual-track Presentation at IROS2023)

[44] Yaohui Chen, Fang Wan, Tong Wu, and Chaoyang Song* (2017). “Soft-rigid Interaction Mechanism towards a Lobster-inspired Hybrid Actuator.” Journal of Micromechanics and Microengineering, 28(1):014007. doi: https://doi.org/10.1088/1361-6439/aa9e25

[45] Yaohui Chen, Sing Le, Qiao Chu Tan, Oscar Lau, Fang Wan, and Chaoyang Song* (2017). “A Reconfigurable Hybrid Actuator with Rigid and Soft Components.” IEEE International Conference on Robotics and Automation (ICRA), Marina Bay Sands, Singapore, 29 May-3 June 2017, pp. 58-63. doi: https://doi.org/10.1109/icra.2017.7988691

[46] Yaohui Chen, Sing Le, Qiao Chu Tan, Oscar Lau, and Chaoyang Song* (2017). “A Lobster-Inspired Hybrid Actuator with Rigid and Soft Components.” ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Volume 5B: 41st Mechanisms and Robotics Conference (DETC). Cleveland, Ohio, USA. 6-9 August 2017. V05BT08A023. doi: https://doi.org/10.1115/detc2017-68082

[47] Yaohui Chen, Sing Le, Qiao Chu Tan, Oscar Lau, Fang Wan, and Chaoyang Song* (2017). “A Lobster-inspired Robotic Glove for Hand Rehabilitation.” IEEE International Conference on Robotics and Automation (ICRA), Marina Bay Sands, Singapore, 29 May-3 June 2017, pp. 111-116. doi: https://doi.org/10.1109/icra.2017.7989556

[48] Fang Wan, Zheng Wang, Brooke Franchuk, Xinyao Hu, Zhenglong Sun, and Chaoyang Song* (2017). “Hybrid Actuator Design for a Gait Augmentation Wearable.” IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, Macao, 5-8 December 2017, pp. 2298-2302. doi: https://doi.org/10.1109/robio.2017.8324761

[49] Chaoyang Song*, Jianxi Luo, Katja Hölttä-Otto, Warren Seering, and Kevin Otto (2022). “Crowdfunding for Design Innovation: Prediction Model with Critical Factors.” IEEE Transactions on Engineering Management, 69(4):1565-1576. doi: https://doi.org/10.1109/tem.2020.3001764

[50] Chaoyang Song (2019). “Robotic Cane as an Ambient Super-Limb for Assistive Elderly Motion Transition.” 2019 IROS Workshop on Supernumerary Robotic Limb. Source: https://darbelofflab.mit.edu/2019-iros-workshop/

[51] Xia Wu, Haiyuan Liu, Ziqi Liu, Mingdong Chen, Fang Wan, Chenglong Fu, Harry Asada, Zheng Wang, and Chaoyang Song* (2020). “Robotic Cane as a Soft SuperLimb for Elderly Sit-to-Stand Assistance.” IEEE International Conference on Soft Robotics (RoboSoft), New Haven, CT, USA, 15 May-15 July 2020, pp. 599-606. doi: https://doi.org/10.1109/robosoft48309.2020.9116028

[52] Jiayu Huo#, Jingran Wang#, Yuqin Guo, Wanghongjie Qiu, Mingdong Chen, Harry Asada, Fang Wan*, and Chaoyang Song* (2023). “Reconfigurable Design and Modeling of an Underwater Superlimb for Diving Assistance.” Advanced Intelligent Systems, 2300245. doi: https://doi.org/10.1002/aisy.202300245

[53] Yuqin Guo#, Rongzheng Zhang#, Wanghongjie Qiu, Harry Asada, Fang Wan*, and Chaoyang Song* (2023). “Underwater Intention Recognition using Head Motion and Throat Vibration for Supernumerary Robotic Assistance.” IEEE International Conference on Automation Science and Engineering (CASE). Auckland, New Zealand, on 26-30 August 2023. doi: https://doi.org/10.1109/CASE56687.2023.10260480

[54] https://bionicdl.ancorasir.com/wp-content/uploads/2023/09/2023-C-CASE-ThroatHead-Award.pdf

[55] Sicong Liu, Yuming Zhu, Zicong Zhang, Zhonggui Fang, Jiyong Tan, Jing Peng, Chaoyang Song, Harry Asada, and Zheng Wang* (2020). “Otariidae-inspired Soft-robotic Supernumerary Flippers by Fabric Kirigami and Origami.” IEEE/ASME Transactions on Mechatronics, 26(5):2747-2757. doi: https://doi.org/10.1109/TMECH.2020.3045476

[56] Juan Yi, Xiaojiao Chen, Zhonggui Fang, Yujia Liu, Dehao Duanmu, Yinyin Su, Chaoyang Song, Sicong Liu, and Zheng Wang* (2022). “A Soft Wearable Elbow Skeleton for Safe Motion Assistance by Variable Stiffness.” ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (DETC). St. Louis, Missouri, USA. 14–17 August 2022. V007T07A052. Paper No: DETC2022-90320. doi: https://doi.org/10.1115/DETC2022-90320

[57] Xinyao Hu*, Chuang Luo, Hao Li, Liyao Jia, Chaoyang Song, Zheng Wang, and Xingda Qu (2019). “An Ankle Based Soft Active Orthotic Device Powered by Pneumatic Artificial Muscle.” IEEE International Conference on Real-time Computing and Robotics (RCAR), Irkutsk, Russia, 4-9 August 2019, pp. 374-378. doi: https://doi.org/10.1109/RCAR47638.2019.9043948

[58] Jianwen Luo, Sicong Liu, Chengyu Lin, Yong Zhou, Zixuan Fan, Zheng Wang, Chaoyang Song, H. Harry Asada, and Chenglong Fu* (2020). “Mapping Human Muscle Force to Supernumerary Robotics Device for Overhead Task Assistance.” arXiv, doi: https://doi.org/10.48550/arXiv.2107.13799