Recently, I read the review paper *Electronic skins and machine learning for intelligent soft robots*[^1].
Design and fabrication of integrated e-skins
Biological skin contains receptor networks that can detect various stimuli, such as vibration, humidity, and temperature. Several studies on e-skin sensor arrays focused on the classification of a single type of information, such as force, shape, or direction of motion. The next generation of e-skins should integrate multimodal sensor arrays to capture richer sensory information than their predecessors.
Machine learning for soft e-skins
As e-skins increase in resolution, their signals could be processed to detect higher-order deformation modes and higher-level notions about the environment, such as material type. However, obtaining this information requires algorithms that can extract useful information from large quantities of data.
OrbTouch[^2]
Points for reference:
- Information-theoretic analysis of sensor signals: the authors evaluate the information content of the sensor data by computing the Shannon entropy and mutual information. "The high per-sensor entropy in our gesture recognition data (2.71 bits), though, is a promising step toward being able to encode large interesting vocabularies using deformable interfaces with high-density sensor arrays."
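A minimal sketch of this kind of information-theoretic analysis, assuming the sensor signals are discretized into histogram bins (the bin count and the simulated signals are my own choices, not from the paper): Shannon entropy per sensor, and mutual information between two sensors via H(X) + H(Y) − H(X, Y).

```python
import numpy as np

def shannon_entropy(samples, bins=8):
    """Shannon entropy (bits) of a 1-D sensor signal, discretized into bins."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=8):
    """I(X; Y) = H(X) + H(Y) - H(X, Y), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    pxy = pxy[pxy > 0]
    h_xy = -np.sum(pxy * np.log2(pxy))
    return shannon_entropy(x, bins) + shannon_entropy(y, bins) - h_xy

# Simulated readings: sensor_b is strongly coupled to sensor_a.
rng = np.random.default_rng(0)
sensor_a = rng.normal(size=5000)
sensor_b = sensor_a + 0.1 * rng.normal(size=5000)
h = shannon_entropy(sensor_a)          # at most log2(8) = 3 bits here
mi = mutual_information(sensor_a, sensor_b)
```

With 8 bins the entropy is capped at 3 bits, which is roughly the regime the 2.71-bit figure lives in; high mutual information between neighboring sensors would indicate redundant channels.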
- For gesture recognition, we use an inference model based on a 3D-CNN to map a queue of sensor images to a categorical probability distribution over gesture classes. We use it to identify gestures, and also to discriminate between users performing the same gesture. For touch localization, we use a regression model that uses 2D convolutions to map the sensor readings from one time step to a continuous output space. We use it to estimate touch location on the curvilinear surface; however, it could also be used to estimate membrane deflection, touch pressure, or other continuous quantities.
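To make the 3D-CNN mapping concrete, here is a toy, untrained sketch in NumPy: one 3-D filter over a queue of sensor frames, ReLU plus global average pooling, and a linear head with softmax producing a categorical distribution over gesture classes. All shapes and weights are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def conv3d_valid(volume, kernel):
    """Single-channel 'valid' 3-D convolution over a (T, H, W) volume."""
    t, h, w = kernel.shape
    T, H, W = volume.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# Toy, untrained weights: one 3x3x3 filter and a linear head over the
# pooled scalar feature, mapping to 4 hypothetical gesture classes.
kernel = rng.normal(size=(3, 3, 3))
w_head = rng.normal(size=(4,))

def classify_gestures(frame_queue):
    """Map a queue of sensor 'images' (T, H, W) to gesture-class probabilities."""
    feat = np.maximum(conv3d_valid(frame_queue, kernel), 0).mean()  # ReLU + pool
    return softmax(w_head * feat)                                   # linear head

queue = rng.normal(size=(8, 6, 6))   # 8 time steps of a 6x6 sensor array
p = classify_gestures(queue)
```

The touch-localization model differs only in its head: 2D convolutions over a single frame, with a regression output instead of a softmax.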
- Generality (easy to train): We teach OrbTouch new inputs by pressing the label button, located adjacent to the orb (Fig. 2a), in unison with the imparted gesture. The label button is connected to the I/O interface on the RBPI3 computer, and its state is logged at every time step.
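The labeling scheme above amounts to a simple synchronous logging loop; a sketch under assumed hardware helpers (`read_frame` and `read_label_button` are hypothetical stand-ins for the sensor array and the button's GPIO state):

```python
import numpy as np

def log_session(read_frame, read_label_button, n_steps, current_label):
    """At every time step, store the sensor frame together with the
    label-button state; pressed steps carry the gesture label."""
    frames, labels = [], []
    for _ in range(n_steps):
        frames.append(read_frame())
        # Button pressed -> this step carries the gesture label; else background 0.
        labels.append(current_label if read_label_button() else 0)
    return np.stack(frames), np.array(labels)

# Simulated hardware for illustration: a 6x6 sensor array and a button
# held down during the middle third of a 30-step session.
rng = np.random.default_rng(0)
step = iter(range(30))
read_frame = lambda: rng.normal(size=(6, 6))
read_label_button = lambda: 10 <= next(step) < 20
frames, labels = log_session(read_frame, read_label_button, 30, current_label=2)
```

Training data then comes "for free" from normal use: each logged window is already paired with its label.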
Intelligent Soft Gripper Configured with Multimodal Sensors[^3]
Points for reference:
- For some grasping tasks, such as handling soft and fragile objects that are prone to breakage or deformation, controlling the grasping behavior becomes crucial to prevent both failed grasps and overcompression of the object. Therefore, developing a controllable soft robotic system is highly valuable for performing high-level cognition and interaction tasks, and even for making complex decisions tailored to specific attributes of an object (e.g., contact-force control based on its mechanical properties).
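One way to read "contact-force control based on mechanical properties" is a force setpoint chosen from estimated stiffness, then tracked by a simple proportional loop. The mapping and gains below are my own illustrative assumptions, not the paper's controller:

```python
def target_force(stiffness_n_per_mm, fragile_threshold=2.0):
    """Pick a contact-force setpoint (N) from estimated stiffness:
    soft/fragile objects get a gentler grip (hypothetical mapping)."""
    return 1.0 if stiffness_n_per_mm < fragile_threshold else 5.0

def grip_step(measured_force, setpoint, command, gain=0.2):
    """One proportional-control update of the gripper actuation command."""
    return command + gain * (setpoint - measured_force)

# Close the loop on a simulated linear object: force = stiffness * command.
stiffness = 1.2                  # N/mm -> below threshold, treated as fragile
sp = target_force(stiffness)
cmd = 0.0
for _ in range(100):
    cmd = grip_step(stiffness * cmd, sp, cmd)
```

The loop converges to the gentle 1 N setpoint rather than squeezing until slip stops, which is the failure mode for fragile objects.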
- Shape recognition of a soft gripper based on multimodal sensing: monomodal (triboelectric) sensing alone is not enough, since classification errors occur for objects with the same shape. To overcome this issue and further enhance the recognition capability, multimodal sensing provides a promising approach to a robust sensory system by combining triboelectric data with pressure-sensor information.
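The fusion idea can be sketched with a toy nearest-centroid classifier over concatenated features. The feature values and object names are hypothetical; the point is that two same-shape objects share the triboelectric coordinate and are separated only by the pressure channel.

```python
import numpy as np

def classify(sample, centroids):
    """Nearest-centroid classification on a fused feature vector."""
    names = list(centroids)
    dists = [np.linalg.norm(sample - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]

# Hypothetical fused features: [triboelectric peak, pressure reading].
centroids = {
    "soft ball": np.array([0.8, 0.2]),
    "hard ball": np.array([0.8, 0.9]),   # same shape -> same triboelectric peak
    "cube":      np.array([0.3, 0.5]),
}
pred = classify(np.array([0.8, 0.85]), centroids)   # -> "hard ball"
```

Projected onto the triboelectric axis alone, the two balls collapse to the same point, which is exactly the monomodal failure case the paper describes.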
Footnotes

[^1]: B. Shih et al., "Electronic skins and machine learning for intelligent soft robots," Sci. Robot., vol. 5, no. 41, p. eaaz9239, Apr. 2020, doi: 10.1126/scirobotics.aaz9239.
[^2]: C. Larson, J. Spjut, R. Knepper, and R. Shepherd, "A Deformable Interface for Human Touch Recognition Using Stretchable Carbon Nanotube Dielectric Elastomer Sensors and Deep Neural Networks," Soft Robotics, vol. 6, no. 5, pp. 611–620, Oct. 2019, doi: 10.1089/soro.2018.0086.
[^3]: T. Wang et al., "Multimodal Sensors Enabled Autonomous Soft Robotic System with Self-Adaptive Manipulation," ACS Nano, vol. 18, no. 14, pp. 9980–9996, Apr. 2024, doi: 10.1021/acsnano.3c11281.