
E-skins, machine learning and soft robots


Recently, I read a review paper Electronic skins and machine learning for intelligent soft robots1.

Design and fabrication of integrated e-skins

Biological skin contains receptor networks that can detect various stimuli, such as vibration, humidity, and temperature. Several studies on e-skin sensor arrays focused on the classification of a single type of information, such as force, shape, or direction of motion. The next generation of e-skins should integrate multimodal sensor arrays to capture richer sensory information than their predecessors.

Machine learning for soft e-skins

As e-skins increase in resolution, their signals could be processed to detect higher-order deformation modes and higher-level notions about the environment, such as material type. However, obtaining this information requires algorithms that can extract useful information from large quantities of data.

OrbTouch2

Points for reference:

  • Information theoretic analysis of sensor signals

    evaluate the information content by computing the Shannon entropy $\displaystyle H(z) = -\sum_{i=1}^{n} p\left(z_i\right) \log_2 p\left(z_i\right)$ and the mutual information $\displaystyle I(z, y) = \sum_{i=1}^{n} \sum_{j=1}^{n} p\left(z_i, y_j\right) \log_2\left(\frac{p\left(z_i, y_j\right)}{p\left(z_i\right) p\left(y_j\right)}\right)$

    The high per-sensor entropy in our gesture recognition data (2.71 bits), though, is a promising step toward being able to encode large, interesting vocabularies using deformable interfaces with high-density sensor arrays.

  • For gesture recognition, we use an inference model based on a 3D-CNN ($F_1$) to map a queue of $m$ sensor images, $z_0{:}z_9$, to a categorical probability distribution, $p_c$, over $n_c$ gesture classes ($\mathbb{R}^{5 \times 5 \times 10} \rightarrow \mathbb{R}^{n_c}$). We use $F_1$ to identify gestures, and also to discriminate between users performing the same gesture. For touch localization, we use a regression model ($F_2$) that uses 2D convolutions, which map sensor readings, $z$, from one time step to a continuous $d$-dimensional space ($\mathbb{R}^{5 \times 5} \rightarrow \mathbb{R}^d$). We use $F_2$ to estimate touch location on the curvilinear surface (i.e., $d=2$); however, it could also be used to estimate membrane deflection, touch pressure, or other continuous quantities.

  • Generality (easy to train): We teach OrbTouch new inputs by pressing the label button, located adjacent to the orb (Fig. 2a), in unison with the imparted gesture. The label button is connected to the I/O interface on the RBPI3 computer, and its state is logged at every time step.
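The entropy and mutual-information measures quoted above can be estimated directly from discretized sensor readings. A minimal NumPy sketch (the discretization and sample data are my own, not from the paper):

```python
import numpy as np

def shannon_entropy(z):
    # H(z) = -sum_i p(z_i) log2 p(z_i), with p estimated from empirical frequencies
    _, counts = np.unique(z, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(z, y):
    # I(z, y) = sum_ij p(z_i, y_j) log2( p(z_i, y_j) / (p(z_i) p(y_j)) )
    zi = np.unique(z, return_inverse=True)[1]
    yi = np.unique(y, return_inverse=True)[1]
    joint = np.zeros((zi.max() + 1, yi.max() + 1))
    np.add.at(joint, (zi, yi), 1)          # empirical joint histogram
    joint /= joint.sum()
    pz = joint.sum(axis=1, keepdims=True)  # marginal p(z)
    py = joint.sum(axis=0, keepdims=True)  # marginal p(y)
    mask = joint > 0                       # skip zero-probability cells
    return np.sum(joint[mask] * np.log2(joint[mask] / (pz @ py)[mask]))

# A signal uniformly distributed over 8 levels carries log2(8) = 3 bits
z = np.tile(np.arange(8), 100)
print(shannon_entropy(z))        # 3.0
print(mutual_information(z, z))  # identical signals: I(z, z) = H(z) = 3.0
```

This is how the 2.71-bit per-sensor figure in the quote can be interpreted: each sensor's discretized output distribution is close to, but below, the uniform-distribution maximum for its number of levels.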
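The $F_1$ mapping described above ($\mathbb{R}^{5\times5\times10} \rightarrow \mathbb{R}^{n_c}$) can be sketched as a small 3D-CNN in PyTorch. The layer sizes and the choice of $n_c = 6$ here are assumptions for illustration, not the architecture from the paper:

```python
import torch
import torch.nn as nn

n_c = 6  # number of gesture classes (assumed)

# F1-style model: a queue of 10 frames of 5x5 sensor images -> class logits
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1),  # (1, 10, 5, 5) -> (16, 10, 5, 5)
    nn.ReLU(),
    nn.Flatten(),                                # -> 16 * 10 * 5 * 5 = 4000 features
    nn.Linear(16 * 10 * 5 * 5, n_c),
)

x = torch.randn(1, 1, 10, 5, 5)  # batch of one sensor-image queue z_0:z_9
p_c = model(x).softmax(dim=-1)   # categorical distribution over gesture classes
```

The 3D convolution slides over both the two spatial axes of the taxel grid and the time axis, which is what lets the model pick up motion patterns (gestures) rather than single static frames, as the 2D-convolutional $F_2$ does for localization.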

Intelligent Soft Gripper Configured with Multimodal Sensor3

Points for reference:

  • For some grasping tasks, such as handling soft and fragile objects that are prone to breakage or deformation, controlling the grasping behavior becomes crucial to prevent both a failed grasp and over-compression of the object. The development of a controllable soft robotic system is therefore highly valuable for performing high-level tasks of cognition and interaction, and even for making complex decisions based on specific attributes of an object (e.g., contact force control based on its mechanical properties).
  • Shape recognition of a soft gripper based on multimodal sensing: monomodal (triboelectric) sensing alone is not enough, since classification errors occur for objects with the same shape. To overcome this and further enhance recognition capability, multimodal sensing provides a promising approach, enabling a robust sensory system by combining triboelectric data with pressure-sensor information.
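The benefit of fusing the two modalities can be illustrated with a toy example: two objects that share a triboelectric signature (same shape) but differ in stiffness, hence in pressure response. The data and the nearest-centroid classifier below are my own constructions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_samples(tribo_mean, pressure_mean, n=50):
    # One triboelectric and one pressure feature per grasp (1-D for clarity)
    tribo = rng.normal(tribo_mean, 0.1, size=(n, 1))
    pressure = rng.normal(pressure_mean, 0.1, size=(n, 1))
    return tribo, pressure

# Same shape -> identical triboelectric signature; different stiffness
# -> different pressure response
t_a, p_a = make_samples(1.0, 0.3)
t_b, p_b = make_samples(1.0, 0.9)

def nearest_centroid_accuracy(xa, xb):
    # Classify each sample by the nearer class centroid
    ca, cb = xa.mean(axis=0), xb.mean(axis=0)
    def is_a(x):
        return np.linalg.norm(x - ca, axis=1) < np.linalg.norm(x - cb, axis=1)
    return 0.5 * (is_a(xa).mean() + (~is_a(xb)).mean())

mono = nearest_centroid_accuracy(t_a, t_b)   # triboelectric only: near chance
multi = nearest_centroid_accuracy(np.hstack([t_a, p_a]),
                                  np.hstack([t_b, p_b]))  # fused: near perfect
```

Concatenating the pressure feature restores the separation that the shape-only channel cannot provide, which is the mechanism behind the multimodal claim in the quote.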

Footnotes

  1. B. Shih et al., “Electronic skins and machine learning for intelligent soft robots,” Sci. Robot., vol. 5, no. 41, p. eaaz9239, Apr. 2020, doi: 10.1126/scirobotics.aaz9239.

  2. C. Larson, J. Spjut, R. Knepper, and R. Shepherd, “A Deformable Interface for Human Touch Recognition Using Stretchable Carbon Nanotube Dielectric Elastomer Sensors and Deep Neural Networks,” Soft Robotics, vol. 6, no. 5, pp. 611–620, Oct. 2019, doi: 10.1089/soro.2018.0086.

  3. T. Wang et al., “Multimodal Sensors Enabled Autonomous Soft Robotic System with Self-Adaptive Manipulation,” ACS Nano, vol. 18, no. 14, pp. 9980–9996, Apr. 2024, doi: 10.1021/acsnano.3c11281.