UPDATED 16:29 EDT / MAY 20 2019

AI

Facebook is teaching robots to walk, grasp and feel

Last year, Facebook Inc.’s FAIR artificial intelligence lab expanded the scope of its research by launching a series of robotics projects. The objective was to facilitate new kinds of experiments in which neural networks installed on autonomous machines can tackle real-world problems.

Today, Facebook provided the first public update on the effort in the form of three academic papers. Each describes a different project focused on developing new, more efficient methods of training AI models to handle unfamiliar situations. The company hopes these methods will eventually prove useful for other tasks as well, such as improving its spam detection algorithms.

“Robotics provides important opportunities for advancing artificial intelligence, because teaching machines to learn on their own in the physical world will help us develop more capable and flexible AI systems in other scenarios as well,” Facebook researchers Franziska Meier, Akshara Rai and Roberto Calandra wrote in a blog post.

The first of the projects Facebook detailed today focuses on mobility. FAIR researchers are using a six-legged robot nicknamed Daisy (pictured) to test an AI model that can learn to walk without any prior training. The software relies solely on data from Daisy’s sensors and a preprogrammed set of travel objectives.

Facebook’s long-term goal is to create a robot that can figure out how to navigate relatively difficult terrains such as sandy areas. In the process, the company hopes to compress the AI training process from the weeks or months it takes now to just a few hours.

Facebook has made more progress toward faster training with its second project, a collaboration with New York University. FAIR has developed a method that allows a robotic arm to learn how to grasp an object after just a few dozen attempts, instead of the hundreds or thousands of tries it normally takes. This is made possible by a virtual reward system that incentivizes the AI to find and fill gaps in its knowledge.

“Although previous similar systems typically explore their environment randomly, ours does it in a structured manner, seeking to satisfy its curiosity by learning about its surroundings and thereby reducing model uncertainty,” the researchers wrote. “Our research has shown that seeking to resolve uncertainty can actually help the robot achieve a task even faster.”
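The idea of rewarding a robot for reducing its own uncertainty can be illustrated with a toy sketch. The code below is not Facebook's implementation; it assumes a common way of measuring model uncertainty, namely disagreement among an ensemble of learned dynamics models, and uses random linear models purely as stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny ensemble of "dynamics models": each predicts the next state
# from (state, action). Disagreement among their predictions serves as
# a proxy for model uncertainty -- the quantity a curiosity-style
# reward encourages the robot to reduce.
def make_model():
    W = rng.normal(size=(4, 2))  # random linear model, illustration only
    return lambda state_action: state_action @ W

ensemble = [make_model() for _ in range(5)]

def curiosity_bonus(state, action):
    """Reward exploring where the ensemble's predictions disagree most."""
    x = np.concatenate([state, action])
    preds = np.stack([m(x) for m in ensemble])
    return preds.std(axis=0).mean()  # high variance = high uncertainty

def pick_action(state, candidates):
    """Choose the candidate action with the largest curiosity bonus."""
    return max(candidates, key=lambda a: curiosity_bonus(state, a))

state = np.zeros(2)
candidates = [rng.normal(size=2) for _ in range(8)]
best = pick_action(state, candidates)
```

In a real system the bonus would be added to the task reward, so the arm explores in a structured way rather than randomly, and the ensemble would be retrained as new grasping attempts come in.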

The third and final project is aimed at enabling neural networks to learn through touch. FAIR, together with researchers from UC Berkeley, modified a model originally built for video processing to analyze data from a high-resolution tactile sensor. They then connected the AI to a robot in order to teach it a number of relatively complex actions.

The AI learned to roll a ball, move a joystick and identify the right face of a 20-sided die without being given any specific instructions. The researchers only provided the model with high-level task descriptions to set it on the right track.
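One way a prediction model repurposed for touch can drive control is to score candidate actions by how close their predicted tactile readings come to a goal reading. The sketch below is a hypothetical illustration of that planning loop, not FAIR's model; a fixed linear map stands in for the learned predictor, and the frame and action sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a learned prediction model retargeted to tactile data:
# given the current tactile "frame" and a candidate action, predict the
# next frame. A random linear map plays that role here.
W = rng.normal(size=(8 + 2, 8)) * 0.1

def predict_next_frame(frame, action):
    return np.concatenate([frame, action]) @ W

def choose_action(frame, goal_frame, candidates):
    """Pick the action whose predicted tactile frame is closest to the goal."""
    return min(
        candidates,
        key=lambda a: np.linalg.norm(predict_next_frame(frame, a) - goal_frame),
    )

frame = np.zeros(8)          # current tactile reading
goal = np.ones(8)            # high-level goal expressed as a target reading
candidates = [rng.normal(size=2) for _ in range(6)]
action = choose_action(frame, goal, candidates)
```

This matches the article's description in spirit: the model gets only a high-level goal, and the specific motions fall out of comparing predicted outcomes rather than following explicit instructions.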

Photo: Facebook
