Underwater Intention Recognition using Head Motion and Throat Vibration for Supernumerary Robotic Assistance


Yuqin Guo, Rongzheng Zhang, Wanghongjie Qiu, Harry Asada, Fang Wan, Chaoyang Song: Underwater Intention Recognition using Head Motion and Throat Vibration for Supernumerary Robotic Assistance. IEEE International Conference on Automation Science and Engineering (CASE), Auckland, New Zealand, 2023.

Abstract

This study presents a multi-modal mechanism for recognizing human intentions while diving underwater, aiming to achieve natural human-robot interactions through an underwater superlimb for diving assistance. The underwater environment severely limits a diver's ability to express intentions, and the challenge grows when divers must operate tools while controlling their body posture in 3D and wearing various diving suits and gear. The literature on underwater intention recognition remains limited, impeding the development of intelligent wearable systems for underwater human-robot interactions. Here, we present a novel solution that simultaneously detects head motion and throat vibration underwater in a compact, wearable design. Experimental results show that, using machine learning algorithms, we achieve high performance in integrating these two modalities to translate human intentions into robot control commands for an underwater superlimb system. These results pave the way for future developments in underwater intention recognition and underwater human-robot interactions with supernumerary support.
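
The paper itself does not include code; the sketch below is only a rough illustration of the kind of early-fusion pipeline the abstract describes, in which per-window features from the two modalities (head-motion IMU and throat vibration) are concatenated and fed to an off-the-shelf classifier whose predicted class is mapped to a superlimb command. The feature set, classifier, class labels, data shapes, and hyperparameters are assumptions for illustration, not the authors' implementation.

# Minimal sketch (not from the paper): fuse head-motion and throat-vibration
# features and classify them into assumed superlimb commands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def window_features(window):
    """Simple per-channel statistics over a fixed-length sensor window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Placeholder data: 600 windows of 6-axis head IMU and 1-axis throat vibration,
# each 100 samples long, labeled with one of four assumed intention classes.
COMMANDS = ["hover", "ascend", "descend", "hold_tool"]
n_windows, win_len = 600, 100
imu = rng.normal(size=(n_windows, win_len, 6))
throat = rng.normal(size=(n_windows, win_len, 1))
labels = rng.integers(0, len(COMMANDS), size=n_windows)

# Early fusion: concatenate the per-modality feature vectors into one input.
X = np.array([np.concatenate([window_features(imu[i]), window_features(throat[i])])
              for i in range(n_windows)])

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Translate a predicted class index into a robot control command string.
print("command:", COMMANDS[int(clf.predict(X_test[:1])[0])])

Late fusion (separate per-modality classifiers whose outputs are combined) would be an equally plausible reading of the abstract; the early-fusion variant is shown only because it is the shorter sketch.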

BibTeX

@conference{Guo2023UnderwaterIntention,
title = {Underwater Intention Recognition using Head Motion and Throat Vibration for Supernumerary Robotic Assistance},
author = {Yuqin Guo and Rongzheng Zhang and Wanghongjie Qiu and Harry Asada and Fang Wan and Chaoyang Song},
doi = {10.1109/CASE56687.2023.10260480},
year  = {2023},
date = {2023-06-26},
urldate = {2023-06-26},
booktitle = {IEEE International Conference on Automation Science and Engineering (CASE)},
address = {Auckland, New Zealand},
abstract = {This study presents a multi-modal mechanism for recognizing human intentions while diving underwater, aiming to achieve natural human-robot interactions through an underwater superlimb for diving assistance. The underwater environment severely limits a diver's ability to express intentions, and the challenge grows when divers must operate tools while controlling their body posture in 3D and wearing various diving suits and gear. The literature on underwater intention recognition remains limited, impeding the development of intelligent wearable systems for underwater human-robot interactions. Here, we present a novel solution that simultaneously detects head motion and throat vibration underwater in a compact, wearable design. Experimental results show that, using machine learning algorithms, we achieve high performance in integrating these two modalities to translate human intentions into robot control commands for an underwater superlimb system. These results pave the way for future developments in underwater intention recognition and underwater human-robot interactions with supernumerary support.},
keywords = {Best Healthcare Automation Paper Award, CASE, Corresponding Author, Paper Award},
pubstate = {published},
tppubtype = {conference}
}