A new system, developed by researchers
at MIT, could provide a more streamlined way for robots to
communicate with humans in difficult situations, including during
emergency response operations. The new model cuts the
communications a robot team member needs to make by 60 percent,
significantly reducing the barrage of data that its human counterpart has to deal with.
When autonomous robots communicate, they send each other constant updates on how the task is going, informing one another of every tiny development. But much of this data is superfluous to requirements, serving only to overcomplicate the task at hand. Furthermore, every time an update is relayed by one machine, all of its counterparts have to process the effect of the action on their current understanding of the state of the environment. The more information that's received, the more things can slow down.
Things are even more complex with the most up-to-date systems, with each robot – known as an agent – having to factor in the probability that the current accepted model of the situation is accurate, while also considering whether future actions will be successful. Such systems – known as decentralized partially observable Markov decision processes, or Dec-POMDPs – are hampered by the sheer volume of information that's constantly being relayed between the different agents.
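At the core of any POMDP-style system is a belief: a probability distribution over possible world states that each agent updates as observations arrive. The following is a minimal, invented sketch of that update for a toy rescue scenario (the states, probabilities, and "beep" sensor are illustrative assumptions, not details from the MIT work):

```python
def update_belief(belief, action, observation, T, O):
    """One Bayesian belief update over a finite state set.

    belief: dict mapping state -> probability
    T[(s, action)]: dict mapping next_state -> transition probability
    O[(s2, action)]: dict mapping observation -> likelihood
    """
    new_belief = {}
    for s, p in belief.items():
        for s2, p_t in T[(s, action)].items():
            p_obs = O[(s2, action)].get(observation, 0.0)
            new_belief[s2] = new_belief.get(s2, 0.0) + p * p_t * p_obs
    total = sum(new_belief.values())
    if total == 0.0:
        return belief  # observation impossible under the model; keep old belief
    return {s: p / total for s, p in new_belief.items()}

# Toy example: a trapped victim is in room A or room B (invented numbers).
T = {
    ("A", "search"): {"A": 1.0},  # searching doesn't move the victim
    ("B", "search"): {"B": 1.0},
}
O = {
    ("A", "search"): {"beep": 0.9, "silence": 0.1},  # sensor beeps near A
    ("B", "search"): {"beep": 0.2, "silence": 0.8},
}
belief = {"A": 0.5, "B": 0.5}
belief = update_belief(belief, "search", "beep", T, O)
print(belief)  # probability mass shifts toward room A
```

Every message an agent receives triggers an update like this, which is why a constant stream of minor reports imposes real processing cost on the whole team.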
The MIT team set out to make things simpler, streamlining the system to cut down the number of communications by 60 percent. Working towards the goal of building a system well-suited to emergency-response situations, they first removed any prior knowledge of the agents' immediate environment from the model.
While this was appropriate for the desired application (emergency responders often have little knowledge of the environment they're working in), it did mean that in order to make the system work, the researchers had no choice but to also remove the part of the system that examines the uncertainty of each action's effectiveness. Instead, the new system assumes that any action that's attempted is completed successfully.
Rather than relaying every single piece of information, the new system lets agents pick from three options whenever something new occurs: ignore the new information, use it without broadcasting it to other agents, or both use and broadcast it.
Each option has pros and cons, and it's each agent's responsibility to determine the value of every decision, performing constant cost-benefit analyses based on its own actions and the expected actions of its counterparts.
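The three-way choice can be pictured as a simple cost-benefit comparison. This is a hypothetical sketch, not the paper's actual algorithm: the value estimates and broadcast cost are invented numbers standing in for whatever the real system computes from its model of teammates.

```python
def choose_option(value_to_self, value_to_team, broadcast_cost):
    """Pick the option with the highest estimated net benefit.

    value_to_self: expected improvement to this agent's own plan
    value_to_team: expected improvement to teammates' plans if told
    broadcast_cost: the processing burden a message imposes on the team
    """
    options = {
        "ignore": 0.0,
        "use_locally": value_to_self,
        "use_and_broadcast": value_to_self + value_to_team - broadcast_cost,
    }
    return max(options, key=options.get)

# A blocked corridor matters a lot to teammates: worth broadcasting.
print(choose_option(0.3, 0.9, 0.5))  # use_and_broadcast
# A minor detail that only affects this agent: keep it local.
print(choose_option(0.3, 0.1, 0.5))  # use_locally
```

Under a scheme like this, most low-value updates never leave the agent that observed them, which is how the volume of team-wide communication drops.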
The team tested the streamlined model using more than 300 computer-simulated rescue tasks. The results of the robot-only study weren't entirely positive: the standard, constant-communication method actually had a better task completion rate, between 2 and 10 percent higher than the new, reduced-communication system.
However, the researchers still have a lot of faith in their new model, believing that the tests don't best represent its usefulness.
"What I'd be willing to bet, although we have to wait until we do the human-subject experiments, is that the human-robot team will fail miserably if the system is just telling the person all sorts of spurious information all the time," said paper co-author Julie Shah. "For human-robot teams, I think that this algorithm is going to make the difference between a team that can function effectively versus a team that just plain can't."
Looking forward, the researchers plan to test the method with a team that includes both robot and human agents. They've also conducted the experiments with all human agents, using the data gathered to better understand how a human communicates in such a situation, and using the results to improve the system for the upcoming human-robot tests.
The findings of the project were presented at the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI) last weekend.