Teaching Oral Care via Vision-based Deformable Perception


Chengxiao Dong, Xuanyi Dai, Yu Pan, Wanghongjie Qiu, Sen Li, Tianyu Wu, Yijie Jin, He Wang, Chaoyang Song, Fang Wan: Teaching Oral Care via Vision-based Deformable Perception. In: Soft Science, vol. 5, no. 3, pp. 36, 2025.

Abstract

This paper presents a novel, cost-effective sensor platform based on Vision-based Deformable Perception (VBDeformP) for community oral health education. The system integrates a 3D-printed thermoplastic polyurethane (TPU) soft structure with a rigid resin frame and an ArUco marker to encode six-dimensional force and torque information. By transforming force estimation into a marker-based pose tracking problem, the VBDeformP sensor achieves accurate and robust force/torque inference under quasi-static and dynamic conditions using machine learning models. An adaptive image binarization algorithm extends reliable marker detection across a wide illumination range (10–5,000 lux), ensuring consistent performance in realistic dental scenarios. Experimental validation involving 10 healthy participants performing standardized brushing tasks demonstrated that the sensor attains measurement accuracies comparable to a commercial ATI Axia80-M20 sensor, with mean absolute errors of 0.55 N (2.19% relative error) and 0.067 Nm (2.68% relative error) for quasi-static forces and torques, and 0.16 N (4.10% relative error) and 0.023 Nm (5.75% relative error) under dynamic conditions. Moreover, the system's real-time brushing region classification algorithm achieved an overall accuracy of 98.12%, further highlighting its potential to deliver immediate, personalized oral hygiene guidance. Its low cost, rapid initialization, portability, and scalable fabrication render it a promising solution for enhancing oral health education in community settings.
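The core idea above, recasting force sensing as marker pose tracking, can be sketched in a few lines. The paper uses learned machine-learning models to map the ArUco marker's 6-D pose displacement to a 6-D force/torque estimate; as a simplified, hypothetical stand-in, the sketch below uses a linear stiffness-matrix model with illustrative stiffness values (not taken from the paper):

```python
# Simplified sketch of the pose-to-wrench mapping behind VBDeformP.
# Assumption: a linear stiffness-matrix model stands in for the paper's
# learned ML models; K and the numeric values are illustrative only.

def estimate_wrench(pose_delta, K):
    """Map a 6-D marker pose displacement [dx, dy, dz, rx, ry, rz]
    (translations in mm, rotations in rad) to a 6-D wrench
    [Fx, Fy, Fz, Tx, Ty, Tz] via wrench = K @ pose_delta."""
    return [sum(K[i][j] * pose_delta[j] for j in range(6)) for i in range(6)]

# Hypothetical diagonal stiffness of the TPU structure:
# 40 N/mm in translation, 5 Nm/rad in rotation.
K = [[40.0 if i == j and i < 3 else (5.0 if i == j else 0.0)
      for j in range(6)] for i in range(6)]

# A 0.05 mm axial compression of the soft structure, no rotation:
pose_delta = [0.0, 0.0, 0.05, 0.0, 0.0, 0.0]
print(estimate_wrench(pose_delta, K))  # 2.0 N along z, zero elsewhere
```

In the actual system, `pose_delta` would come from ArUco marker detection (after the adaptive binarization step that keeps detection reliable from 10 to 5,000 lux), and the linear map would be replaced by the trained model.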

BibTeX

@article{Dong2025TeachingOral,
title = {Teaching Oral Care via Vision-based Deformable Perception},
author = {Chengxiao Dong and Xuanyi Dai and Yu Pan and Wanghongjie Qiu and Sen Li and Tianyu Wu and Yijie Jin and He Wang and Chaoyang Song and Fang Wan},
url = {https://github.com/ancorasir/VBDeformP4OralCare},
doi = {10.20517/ss.2025.14},
year = {2025},
date = {2025-07-30},
urldate = {2025-07-30},
journal = {Soft Science},
volume = {5},
number = {3},
pages = {36},
abstract = {This paper presents a novel, cost-effective sensor platform based on Vision-based Deformable Perception (VBDeformP) for community oral health education. The system integrates a 3D-printed thermoplastic polyurethane (TPU) soft structure with a rigid resin frame and an ArUco marker to encode six-dimensional force and torque information. By transforming force estimation into a marker-based pose tracking problem, the VBDeformP sensor achieves accurate and robust force/torque inference under quasi-static and dynamic conditions using machine learning models. An adaptive image binarization algorithm extends reliable marker detection across a wide illumination range (10–5,000 lux), ensuring consistent performance in realistic dental scenarios. Experimental validation involving 10 healthy participants performing standardized brushing tasks demonstrated that the sensor attains measurement accuracies comparable to a commercial ATI Axia80-M20 sensor, with mean absolute errors of 0.55 N (2.19% relative error) and 0.067 Nm (2.68% relative error) for quasi-static forces and torques, and 0.16 N (4.10% relative error) and 0.023 Nm (5.75% relative error) under dynamic conditions. Moreover, the system's real-time brushing region classification algorithm achieved an overall accuracy of 98.12%, further highlighting its potential to deliver immediate, personalized oral hygiene guidance. Its low cost, rapid initialization, portability, and scalable fabrication render it a promising solution for enhancing oral health education in community settings.},
keywords = {Authorship - Corresponding, JCR Q1, Jour - Soft Sci. (SS)},
pubstate = {published},
tppubtype = {article}
}