
Robots are learning workplace etiquette at MIT

At a time when social distancing has become the norm, MIT researchers are teaching robots how to work more effectively alongside humans.
Written by Esther Shein, Contributor

An MIT human-robot team performs a meal-preparation task in a simulated kitchen environment.

Image: MIT

Workplace etiquette is not something normally associated with robots, but a new MIT project aims to teach robots social etiquette so they can work more effectively alongside humans. In the current COVID-19 landscape, where it's not safe for human workers to interact with one another, imbuing robots with this sort of knowledge could speed their deployment into workplaces to assist with tasks that people would otherwise need colleagues for, according to MIT.

While humans learn workplace etiquette on the job, it is, of course, a much different proposition for robots. The machines cannot develop these skills simply by asking questions and understanding commands, like the AI in Siri or Alexa, according to researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Robots need to know when it's appropriate to communicate, and whether they should be talking at all.

SEE: New robot developed at Stanford changes shape like a 'Transformer' (TechRepublic)

Rather than telling robots exactly when and how to communicate, the researchers have developed a new framework called CommPlan, which gives the machines a few high-level principles for good etiquette, according to CSAIL.

The CommPlan framework "decides if, when, and what to communicate during human-robot collaboration," the researchers wrote. Then it's up to the robot to make decisions that would allow it to finish the task as efficiently as possible. CommPlan uses learning and planning algorithms to do real-time cost-benefit analyses on its decisions, CSAIL said.

"For example, will asking the human a question save time by making sure the robot doesn't do the wrong thing, or will it slow down the human from doing what they need to do? The robot might weigh a combination of factors, such as whether the human is busy or likely to respond given past behavior."
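That trade-off can be pictured as a simple expected-value calculation. The sketch below is purely illustrative, not MIT's CommPlan code: the function name, probabilities, and time costs are all hypothetical, standing in for the kind of real-time cost-benefit reasoning the researchers describe.

```python
# Hypothetical sketch of a communication cost-benefit check in the spirit
# of CommPlan. All names and numbers here are illustrative assumptions,
# not taken from the actual framework.

def should_ask(p_wrong: float, cost_wrong: float,
               p_response: float, cost_interrupt: float) -> bool:
    """Ask only if the expected time saved by avoiding a mistake
    outweighs the expected cost of interrupting the human."""
    # Expected benefit: chance of an error, weighted by how costly the
    # error is and how likely the human is to actually respond.
    expected_benefit = p_wrong * cost_wrong * p_response
    return expected_benefit > cost_interrupt

# Example: the robot is 30% likely to grab the wrong ingredient (a 20 s
# mistake), the human responds 90% of the time, and a question costs
# about 4 s of the human's focus.
print(should_ask(0.3, 20.0, 0.9, 4.0))   # asking is worth it here
print(should_ask(0.05, 20.0, 0.9, 4.0))  # rarely wrong: stay quiet
```

A handcrafted rule set would fix these thresholds in advance; the point of a learning-and-planning approach is to estimate quantities like the response probability from the human's observed behavior and re-evaluate the trade-off as the task unfolds.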

The team tested the approach in a kitchen scenario involving tasks such as assembling ingredients, wrapping sandwiches, and pouring juice. The tests showed that human-robot teams performed more safely and efficiently using CommPlan, CSAIL said.

Vaibhav Unhelkar, a CSAIL PhD graduate who co-authored the paper about the framework, said in a statement that he is encouraged by the success of CommPlan, because such policies require significant time, effort, and expertise on the part of programmers. "CommPlan combines the power of human experts and algorithms to create policies that are better, and at the same time, require reduced developer effort," Unhelkar said.

Additionally, the researchers said that a handcrafted policy relies on "cut-and-dried rules," making it more likely to suffer from hiccups like overcommunication.

"Many of these handcrafted policies are kind of like having a coworker who keeps bugging you on Slack, or a micromanaging boss who repeatedly asks you how much progress you've made," said MIT graduate student Shen Li, another of the paper's authors, in a statement. "If you're a first responder in an emergency situation, excessive communication from a colleague might distract you from your primary task."

The researchers said the positive performance results from CommPlan indicate that it is "ripe for applications in other domains, from healthcare and hospital deliveries to aerospace and manufacturing." While the team has so far only used the framework for spoken language, they say it could also be applied to visual gestures, augmented reality systems, and other approaches, according to CSAIL.

"This work is exciting because it reasons about what the human needs from the robot, and the robot is explicitly trying to communicate just the right amount," said Brown University professor Stefanie Tellex, in a statement. 

Tellex, who was not involved in the research, added that "This will enable robots to be more sensitive and responsive to human needs and hopefully make them more helpful to people."
