Robot vision and touch: Bringing robots to their senses

By Ryan LaRanger

Advancements in machine sensing have the potential to significantly broaden the applications of machines by allowing them to work more safely around people and to adapt to changes in their environment or tasks without explicit input from a technician. Robots sporting a full suite of senses could free humans from increasingly complex forms of labor while reducing switching costs, allowing companies that follow agile manufacturing practices to deploy them across an enormous array of tasks.

In this article, we discuss some of the latest technologies for robot vision and touch. From LiDAR to force sensors and simulated skin, these technologies are imparting machines with human-like capabilities that are allowing their human coworkers to be safer on the job while engaging in more meaningful work.

Machine vision:

Vision is central to a robot's ability to adapt to a changing environment, operate safely around people, and switch tasks on the fly. Sensors that permit 3D machine vision are presently bulky and expensive; small, inexpensive, and effective machine vision systems would permit the creation of robots that can work in much more dynamic environments while adapting their actions to fit the task at hand.

Machine vision is particularly important from a safety perspective: a robot that moves about a factory, construction site, or street must be able to avoid obstacles and people with high precision. Beyond obstacle avoidance, efficient 3D vision will permit machines to be trained through visual input from a human operator demonstrating tasks. Eventually, such robots could be trained for an enormous range of actions by domain experts themselves, without intervention from third-party engineers.

LiDAR:

Light detection and ranging (LiDAR) works much like sonar, but instead of sound it employs rapidly pulsed laser light; by timing how long each pulse takes to return, a machine can “see” objects in 3D space. This capability can be used to build robust 3D maps of the environment and to identify objects in real time. While advances in this technology are among the primary drivers in the autonomous car domain, current LiDAR systems are broadly considered bulky, sensitive to disruption, and expensive.
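
The underlying ranging arithmetic is simple time-of-flight: a pulse's round trip at the speed of light gives distance directly. A minimal sketch in Python:

```python
# Minimal sketch of LiDAR time-of-flight ranging: a pulse travels to
# the target and back, so distance is half the round trip times c.

SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Convert a measured round-trip pulse time to target distance in meters."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# A pulse returning after ~667 nanoseconds puts the target ~100 m away:
print(distance_from_round_trip(667e-9))  # ~99.98 m
```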

Solid-state LiDAR:

Leddar Tech is developing solid-state LiDAR that is resistant to mechanical disruption, which can cause significant errors in traditional LiDAR systems. Importantly, their technology uses inexpensive LEDs yet matches or exceeds the sensitivity of systems built around costly lasers and mirrors, delivering accurate time-of-flight measurements and strong signal-to-noise ratios. Their hardware and software algorithms permit a high sampling rate and may provide highly efficient machine vision for industrial robots subjected to harsh environments or occasional jostling.

The cameras made by Leddar Tech are relatively small and more sensitive than traditional LiDAR systems; however, they currently have a narrow field of view, so some applications would require multiple devices to achieve sufficiently broad coverage. These cameras are currently being investigated for use in self-driving cars but could easily be adapted to a wide array of applications, particularly because of their small size, robustness, and relatively low price point.

Chip-based LiDAR:

The Photonics Microsystems Group at MIT is working to dramatically miniaturize LiDAR systems by integrating them onto microchips. These chips can be produced in commercial CMOS foundries on standard 300-millimeter wafers, potentially bringing unit production costs down to about USD 10. The chips have some limitations: the current beam-steering range is about 51 degrees, so a single chip cannot create a 360-degree image, and detection range is currently limited to about 2 meters, though the group is working on chips with a range of 100 meters.

[Image: MIT Photonic Microsystems Group’s microdisk laser. Source: Research Laboratory of Electronics at MIT]

Because of their small size and relatively inexpensive manufacturing costs, these chips have the potential to provide for the inclusion of multiple LiDAR sensors on a single device and expand machine vision applications to even basic consumer-facing robots. Inexpensive 360-degree vision achieved with arrays of these chips for robots would offer safe and effective collision avoidance, responsiveness to human gestures, and more adaptable designs.
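
As a back-of-the-envelope sketch of what such an array entails, the following estimates how many chips a full 360-degree ring would need, given the roughly 51-degree steering range cited above (the stitching overlap is an assumption):

```python
import math

def chips_for_full_ring(fov_degrees: float, overlap_degrees: float = 0.0) -> int:
    """Number of fixed sensors needed to tile a full 360-degree ring,
    given each sensor's horizontal field of view and an overlap margin
    for stitching neighboring views together."""
    effective_fov = fov_degrees - overlap_degrees
    return math.ceil(360.0 / effective_fov)

# With the ~51-degree steering range cited above and an assumed
# 3-degree stitching overlap, eight chips would ring a robot:
print(chips_for_full_ring(51.0, overlap_degrees=3.0))  # -> 8
```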

The Takashima Lab at the University of Arizona is another group working on miniaturization of LiDAR systems. Laser beam steering is a critical component of LiDAR image reconstruction and analysis, which normally contributes significantly to the bulk, expense, and fragility of LiDAR devices. At the SPIE Opto 2018 meeting, J. Rodriguez et al. demonstrated a small and inexpensive 3D-printed LiDAR detection system on a chip.

While some groups are exploring micro-electromechanical systems for LiDAR beam steering, this group has developed a digital micromirror device that is relatively small and offers a wider field of view than current LiDAR systems (48 degrees versus 36 degrees) with a beam size on par with existing devices. The present limitation of this approach is a reduced number of scanning points, but the Takashima Lab and others are developing multi-laser-diode detectors that may overcome this issue. Overall, the strategy shows promise, with a number of devices achieving moderate range despite their low cost and ease of manufacture.

Once developed and available, these chip-based LiDAR systems may be ideal for a suite of short-distance applications, such as detecting nearby obstacles and visually identifying objects to grab or manipulate. For example, robots with these sensors could assemble or disassemble complex machines and identify objects by sight in shipping fulfillment centers, or the chips could be built into miniaturized pipeline inspection robots.

Liquid lens autofocusing of light:

Liquid lens–based autofocusing could facilitate robust real-time control of the light used in LiDAR sensing. This approach has been explored by research groups such as the Gopinath Lab at the University of Colorado, which has outlined the concept of using a weak electrical current to manipulate the shape of a series of lenses. The technology is already commercially available for other applications and is sold by companies such as Cognex, which provides off-the-shelf tunable liquid lenses for directing and concentrating lasers.

These lens systems are mechanically robust, as they require no moving solid parts to direct the laser path, and they are relatively inexpensive for new application development because they are already in production.

These factors potentially make this technology ideal for LiDAR applications, particularly in cases where the robot must rapidly change the focus of the objective.
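
As a rough illustration of the optics involved, here is a minimal sketch of the thin-lens arithmetic behind such autofocusing; the lens-to-sensor distance and the drive-current calibration are assumptions for the example, not values from Cognex or the Gopinath Lab:

```python
# Illustrative sketch of liquid-lens autofocus arithmetic using the
# thin-lens model. The lens-to-sensor distance and the drive-current
# calibration below are assumptions, not vendor specifications.

def required_focal_length(object_distance_m: float, image_distance_m: float) -> float:
    """Thin-lens equation: 1/f = 1/d_object + 1/d_image."""
    return (object_distance_m * image_distance_m) / (object_distance_m + image_distance_m)

def drive_current_for_focal_length(focal_length_m: float) -> float:
    """Hypothetical linear map from optical power (diopters) to the weak
    drive current that reshapes the lens; real lenses ship with a
    manufacturer-supplied calibration curve."""
    AMPS_PER_DIOPTER = 3e-4  # assumed calibration constant
    return AMPS_PER_DIOPTER / focal_length_m

# Refocus on a target 2 m away with the sensor 10 mm behind the lens:
f = required_focal_length(object_distance_m=2.0, image_distance_m=0.010)
print(f"focal length {f*1000:.2f} mm, drive current {drive_current_for_focal_length(f)*1000:.1f} mA")
```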

Force-feedback sensing:

Force-feedback sensing is critically important for physical work: it provides clues to an object's strength and identity and a mechanism for optimizing the applied force. It is what prevents us from smashing keyboards whenever we type and from crushing fruit when we pick it. Machines capable of broad force sensing through a skin equivalent, as opposed to force sensing at a few joints, will be able to handle a much wider array of tasks while adapting to the demands of an uncertain environment.

Past advancements in this field have focused primarily on sensors at the rotary joints. Further improvement in dynamic force sensing on the surface of a robot, particularly its manipulators, will allow robots to detect unexpected resistance. This could offer benefits in a huge number of fields:

  • Robots in industrial engineering could sense the material strength of objects and adjust their grip to avoid overstressing the material. 
  • In medicine, robots could sense skin resistance and apply appropriately gentle pressure during surgery or while helping patients move. 
  • In construction and inspection, robots could sense material weakness in structures and initiate a local repair response as needed. 
  • In agriculture, robots could safely pick produce without destroying it (see the sketch below).

Developments in this space will likely center on the creation of broad, soft, skin-like force sensors that can provide context and location cues to robots.
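
To make grasp control concrete, here is a minimal, hypothetical sketch of a force-limited grasp loop; the sensor and gripper interfaces are invented stand-ins, not any vendor's API:

```python
import time

# Minimal, hypothetical sketch of a force-limited grasp loop for
# delicate objects such as fruit. read_grip_force() and
# close_gripper_by() are invented stand-ins for a robot driver API.

MAX_GRIP_FORCE_N = 2.0  # assumed crush threshold for a soft object
STEP_MM = 0.1           # distance to close the fingers per cycle

def grasp_until_contact(read_grip_force, close_gripper_by, timeout_s=5.0):
    """Close the gripper in small steps until the measured force reaches
    the threshold, then stop so the object is held rather than crushed."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_grip_force() >= MAX_GRIP_FORCE_N:
            return True   # firm contact without exceeding the limit
        close_gripper_by(STEP_MM)
    return False          # nothing graspable within the timeout
```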

Force sensing using ink technologies:

The company Tekscan is developing tactile pressure- and force-measurement sensors designed to be thin and durable. Further, these sensors can register their location in 3D space with high accuracy. Tekscan’s primary innovation is the creation of thin, durable, force-sensitive ink. Because the sensors are so thin and small, they can be custom designed to fit objects of effectively any size or shape, and they are relatively cheap to manufacture.

Peratech is another company working on force-sensing ink. They have developed a thin, screen-printable quantum tunneling composite material mixed with a polymer; the composite changes its electrical resistance with applied force. The resulting sensors are thin (a 200 µm profile), can sense multiple touch points in parallel, and can detect forces as small as 10 g. Like the Tekscan force sensors, Peratech’s quantum tunneling composite sensors are flexible, can be printed on almost any surface, and can sense force through most standard materials.
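
As an illustration of how such a resistance-based sensor might be read out, here is a minimal sketch using a standard voltage-divider circuit; the calibration constants are assumptions for the example, not Tekscan’s or Peratech’s data:

```python
# Sketch of reading a force-sensing ink element through a standard
# voltage divider. The divider arithmetic is textbook electronics;
# the force calibration constant is an assumption for illustration.

V_SUPPLY = 3.3      # volts across the divider
R_FIXED = 10_000.0  # ohms, fixed resistor in series with the sensor

def sensor_resistance(v_measured: float) -> float:
    """Invert the divider: v = V_SUPPLY * R_FIXED / (R_sensor + R_FIXED)."""
    return R_FIXED * (V_SUPPLY - v_measured) / v_measured

def estimated_force_grams(r_sensor_ohms: float) -> float:
    """Hypothetical inverse-law calibration: resistance falls as force
    rises. With K = 4.0e6, a 400 kOhm reading maps to a 10 g touch."""
    K = 4.0e6  # gram-ohms, an assumed fit constant
    return K / r_sensor_ohms

r = sensor_resistance(1.65)       # midpoint reading -> 10,000 ohms
print(estimated_force_grams(r))   # -> 400.0 grams under the assumed fit
```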

Simulated skin force sensors:

Force-sensing systems that simulate skin should be soft, durable, and capable of sensing pressure anywhere on their surface. This innovation, once commercialized, could make robots much more responsive to their surroundings and more capable of collaborating with humans. The designer would no longer need to anticipate sensor placement when designing a robot; instead, the entire working surface of the robot could act as a soft sensor. These sensors will be particularly valuable in fields demanding high precision, adaptability, or a soft touch, such as medicine, construction, and high-precision manufacturing.

There are a number of laboratories working on this technology, though none is yet commercializing it. One such group is the Soft Robotics and Bionics Lab at Carnegie Mellon University, which has recently developed an artificial skin system consisting of a highly stretchable silicone elastomer filled with conductive liquids, capable of detecting strain along multiple axes as well as shear forces.
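
As a toy illustration of what whole-surface sensing enables, the following sketch localizes a contact on a simulated patch of skin by taking a pressure-weighted centroid over a grid of sensing cells; the grid layout and readings are invented for the example:

```python
# Toy sketch: localize a touch on a simulated skin patch by taking
# the pressure-weighted centroid over a grid of sensing cells
# ("taxels"). Grid size and readings are invented for illustration.

def contact_centroid(pressure_grid):
    """Return (row, col) of the pressure-weighted centroid, or None
    if the patch registers no pressure at all."""
    total = r_sum = c_sum = 0.0
    for r, row in enumerate(pressure_grid):
        for c, p in enumerate(row):
            total += p
            r_sum += r * p
            c_sum += c * p
    if total == 0.0:
        return None
    return (r_sum / total, c_sum / total)

# A light press near the lower-right of a 3x3 patch:
patch = [
    [0.0, 0.0, 0.0],
    [0.0, 0.2, 0.4],
    [0.0, 0.3, 0.9],
]
print(contact_centroid(patch))  # ~ (1.67, 1.72)
```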

Conclusions:

This next generation of robotic sensing will permit robots to perform a significantly broader array of tasks. Many of these next-generation robots are already being developed in fields such as manufacturing, construction, infrastructure, and medicine. Further advances in robotics will disrupt these industries by allowing for dramatically improved productivity in spaces that have traditionally been ill suited to automation.

This excerpt was taken from our second Disruptors report, titled “Disruption in Human Robot Collaboration.” The full report can be viewed here.

If you have any questions or would like to know if we can help your business with its innovation challenges, please contact us here or email us at solutions@prescouter.com.
