Why 'Cobot' Robots Are Coming to a Workplace Near You

TREND ANALYSIS: The next wave of robots will be collaborative robots, or “cobots”: machines that perform everyday tasks to make our lives better.


The concept of robots being infused into our daily lives gives many of us the heebie jeebies as visions of Terminators or Cylons on the hunt for humanity come to mind. Hollywood certainly hasn’t done robot-kind any favors, but in actuality, robots have existed among us for quite some time.

Visit any assembly line or factory floor and there are robots performing repetitive tasks where it’s more accurate and cost-effective to use machines than people.

The next wave of robots is coming, and these can be thought of as collaborative robots, or “cobots.” These are machines that perform everyday tasks to make our lives better. For example, an autonomous robot could shave a quadriplegic, pick vegetables on a farm or deliver items to our homes. In fact, a look at where labor shortages exist suggests robots could fill many of those gaps. They can care for the elderly 24x7, work through the night and be more precise than people.

Getting Robots to Work Correctly a Major IT Challenge

The reason I love the topic of autonomous machines is that getting them to work right is the toughest technology problem out there. There is a lot of focus currently on self-driving cars, particularly with CES being held last week, but those are child’s play compared to robots.

Cars have very well-defined rules to follow; there are lines on the road to guide them and signs along the way telling them when to stop and go. Cobots require a wide range of technology, including deep learning, natural language processing, fast network connectivity, GPUs, cameras, sensors, translation capabilities and more–and success depends on all of this stuff working together.

They also require an immense amount of training to do even simple tasks. For example, how to pick up an object varies greatly by what the object is. Some objects, like a cup, have a handle on the side; others, like a paint can, have one on top. Likewise, a robot picking up a banana needs a far gentler grip than one picking up a rock. This means the robot must be trained to recognize objects and then alter its behavior based on what it sees.
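The idea of altering behavior per object can be illustrated with a minimal sketch. The object classes, approach labels and force values below are hypothetical, chosen only to mirror the cup/paint-can/banana/rock examples above; a real system would learn these mappings rather than hard-code them.

```python
# Hypothetical sketch: a robot adjusts its grasp based on the object it
# recognizes. Classes, approach names and force values are illustrative only.
from dataclasses import dataclass

@dataclass
class GraspPlan:
    approach: str      # where/how to grab the object
    max_force: float   # newtons; gentler for fragile items

# Illustrative lookup: the recognized object class drives the grasp strategy.
GRASP_LIBRARY = {
    "cup":       GraspPlan(approach="side_handle", max_force=15.0),
    "paint_can": GraspPlan(approach="top_handle",  max_force=40.0),
    "banana":    GraspPlan(approach="wrap",        max_force=5.0),
    "rock":      GraspPlan(approach="wrap",        max_force=60.0),
}

def plan_grasp(recognized_class: str) -> GraspPlan:
    """Return a grasp plan for a recognized object, defaulting to a cautious grip."""
    return GRASP_LIBRARY.get(recognized_class, GraspPlan("wrap", 5.0))
```

Defaulting unknown objects to the gentlest grip reflects the safety-first bias such a system would need: it is better to drop a rock than to crush a banana.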

Building and training robots is extremely difficult. To accelerate the development of these autonomous machines, GPU chipmaker NVIDIA recently opened a new robotics research lab in Seattle near the University of Washington. The aim of the facility, led by Dieter Fox, senior director of robotics research at NVIDIA and a professor in the UW Paul Allen School of Computer Science and Engineering, is to drive breakthrough robotics research that enables the next generation of robots to perform complex manipulation tasks and work safely alongside people.

Research Group Working on Real-World Tasks

The 13,000-square-foot facility currently houses about 50 people, including research scientists, faculty and student interns. It’s important to note that this isn’t pie-in-the-sky stuff; the lab is tasked with solving real-world interactive-manipulation scenarios that can transform industries.

The lab opened in November, but NVIDIA held an open house on Jan. 11 for a group of press and analysts. At the event, one of these real-life challenges was demonstrated: a mobile manipulator (robot) performed a number of kitchen tasks, such as retrieving objects from a cabinet and helping a person cook a meal. Even this small set of tasks required a number of leading-edge techniques to detect and track objects, track the state of drawers and cabinet doors, and manipulate objects. These were as follows:

  • Dense Articulated Real-Time Tracking (DART) uses depth cameras to keep track of a robot’s environment. DART is a general framework that tracks rigid objects such as boxes, cans and coffee cups, as well as articulated objects like tools, hands and other robot manipulators.
  • PoseCNN: 6D pose estimation detects the position and orientation of known objects. This is a critical capability for robots that pick up and move objects around. Changes in lighting and cluttered scenes make the problem harder. PoseCNN is a neural network trained to detect objects using regular cameras.
  • Riemannian Motion Policies (RMPs) for reactive manipulator control. RMPs are a mathematical framework that makes it easy for developers to build fast, reactive controllers that use the detection and tracking information from PoseCNN and DART to interact safely with objects and humans.
  • Photorealistic simulators that obey the laws of physics. NVIDIA developed a tool called Isaac that enables simulations to be run in a virtual world. Teaching a robot all the different possibilities of even one action can be expensive and time-consuming. For example, teaching a robot to walk requires building stairs, gravel roads, slippery floors, side hills, inclines and other scenarios. Doing this in a virtual environment enables simulations to be completed in a fraction of the time.
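The thread connecting these pieces is that perception continuously feeds control: the tracker supplies the latest object pose every cycle, and the controller reacts to it. The sketch below illustrates that loop in miniature with a speed cap for safety near people; it is a toy proportional controller, not NVIDIA’s actual DART or RMP code, and all the numbers are illustrative.

```python
# Minimal sketch of a reactive control loop: a tracker supplies the target
# pose each cycle, and the controller steers the gripper toward it while
# clamping speed for safe operation around humans. Illustrative only; this
# is not NVIDIA's DART/RMP implementation.

def reactive_step(gripper_pos, target_pos, gain=2.0, max_speed=0.25):
    """Return a per-axis velocity command (m/s) nudging the gripper toward the target."""
    velocity = [gain * (t - g) for g, t in zip(gripper_pos, target_pos)]
    speed = sum(v * v for v in velocity) ** 0.5
    if speed > max_speed:  # clamp the commanded speed for safety
        velocity = [v * max_speed / speed for v in velocity]
    return velocity

# Re-reading the tracked pose every cycle is what makes the controller
# "reactive": if the object (or a person) moves, the command updates.
cmd = reactive_step(gripper_pos=[0.0, 0.0, 0.0], target_pos=[1.0, 0.0, 0.0])
```

Because the raw command here (2.0 m/s toward the target) exceeds the cap, it is scaled down to 0.25 m/s, which is the kind of behavior that lets a manipulator share space with a cook in a kitchen.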

The democratization of AI will quickly usher in the era of cobots, and it’s important that business leaders consider how these might benefit their organizations.

This is no longer the stuff of science fiction, and in the near future you should expect to see these autonomous machines show up in the workplace and perform many of the tasks that people aren’t good at or do not want to do.

Zeus Kerravala is the founder and principal analyst with ZK Research. He spent 10 years at Yankee Group and prior to that held a number of corporate IT positions.