
eWEAR: Optical tactile sensor to improve robotic performance

Meeting Reports | Aug 11, 2022

It remains challenging for robots to perform dexterous manipulation tasks, such as moving small objects, and this limitation reduces their usefulness in manufacturing and in assisting humans with tasks that require grasping. Hardware improvements such as better sensors, combined with vision algorithms, have brought some progress toward mimicking human dexterity. However, the lack of tactile feedback with sufficiently high resolution and accuracy still hinders wider use of robots.

Professor Monroe Kennedy III, who leads a research group in the Department of Mechanical Engineering at Stanford University, explains: “Humans are very good at sensing the shape and forces of objects they are holding between their fingers with high resolution. Humans achieve this with mechanoreceptors, cells that sense local pressure, embedded in the skin. Traditionally, biomimicry for tactile sensing has been achieved through physical transduction (fingertips are compressed, and that is directly converted to an electrical signal) or vision-based sensing (deformation is observed and correlated to change in shape or applied forces).”

In a recent paper posted on arXiv and presented at the IEEE International Conference on Robotics and Automation (ICRA) 2022, Prof. Kennedy and Ph.D. student Wonkyung Do describe a novel vision-based solution. Kennedy says, “Vision-based sensing has demonstrated the ability to sense at higher resolutions with multi-modal sensing compared to most physical transduction methods. But with most available vision-based sensors, robots still find it very challenging to perform dexterous tasks and to take what was learned from one task and apply it to a similar manipulation task.”

Their solution, named DenseTact, combines a vision sensor using an inexpensive fisheye lens camera with a soft elastomer cover that serves as the contact surface (Figure 1). The interior of the sensor cover is illuminated, which allows for estimating its shape. Feedback on shape is based on the deformation of the interior of the cover captured with a single image, which is then used to construct a model of the object being grasped. The sensor can be used in applications including in-hand pose estimation of a held object. “Our major findings from this work were developing the DenseTact sensor that can predict for previously unseen objects the depth of each point sensed by the sensor through camera pixels (570 x 570 pixels per image) for 1000 images with an average accuracy of 0.28 mm,” says Kennedy.
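
The core idea described above is a mapping from a single image of the sensor's illuminated interior to a dense, per-pixel depth estimate of the deformed gel surface. The sketch below illustrates that image-to-depth idea only in broad strokes; it is not the authors' actual network, and the architecture, layer sizes, and the name TactileDepthNet are assumptions made purely for illustration.

```python
# Hypothetical sketch: a small encoder-decoder that maps one RGB frame from the
# sensor's internal fisheye camera to a per-pixel depth map of the gel surface.
# This does NOT reproduce the DenseTact model; all layer choices are illustrative.
import torch
import torch.nn as nn


class TactileDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the 570x570 RGB image into a coarse feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back toward full resolution, one depth value per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, 570, 570) -> depth: (batch, 1, 570, 570)
        features = self.encoder(image)
        depth = self.decoder(features)
        # Strided convolutions can round the spatial size, so resize to match the input.
        return nn.functional.interpolate(
            depth, size=image.shape[-2:], mode="bilinear", align_corners=False
        )


# Usage: one captured frame in, one depth map of the deformed surface out.
frame = torch.rand(1, 3, 570, 570)        # placeholder for a real camera frame
depth_map = TactileDepthNet()(frame)      # shape (1, 1, 570, 570)
```

Under this framing, the reported 0.28 mm figure would correspond to the average per-pixel error of such a predicted depth map against ground truth on images of previously unseen objects.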

Read the full article

