Most robots are trained to perform a task, often in simulation: shown what to do, they can mimic it. But they do so without thinking. They may rely on sensors to reduce the risk of a collision, yet they neither understand why they are performing the task nor have any real awareness of where they are in physical space. As a result, they often make mistakes that humans would not, such as swinging an arm into an obstacle, because humans can sense and compensate for changes in their surroundings.
“It’s a very important human ability, and we usually take it for granted,” said Boyuan Chen of Duke University.
“I’ve spent a long time trying to get machines to understand what they are, not by being programmed to assemble cars, but by thinking about themselves,” said study co-author Hod Lipson of Columbia University.
Lipson, Chen, and their colleagues explored this by placing a robotic arm in the lab, surrounded by four cameras with a fifth mounted overhead. The cameras streamed video of the arm to a deep neural network, a form of artificial intelligence (AI) connected to the robot that learned to track the robot's movements within the space.
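The article does not spell out how such a network is built, so the following is only a hypothetical sketch, in Python with PyTorch, of what a visual self-model of this kind might look like: a network that takes the arm's joint angles plus a 3D query point and predicts whether the robot's body occupies that point. The class name, layer sizes, joint count, and the occupancy-query formulation are all illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch (not the authors' code): a self-model that, given the
# arm's joint angles and a 3D query point, predicts whether the robot's body
# occupies that point in the workspace.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    def __init__(self, num_joints=4, hidden=256):
        super().__init__()
        # Input: joint angles plus an (x, y, z) query point in the workspace.
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # occupancy logit for the query point
        )

    def forward(self, joint_angles, query_xyz):
        return self.net(torch.cat([joint_angles, query_xyz], dim=-1))

# Training would pair joint angles (from the robot's own encoders) with
# occupancy labels derived from the multi-camera video of the arm.
model = SelfModel()
angles = torch.zeros(8, 4)   # batch of 8 joint configurations (illustrative)
points = torch.rand(8, 3)    # query points inside a roughly 1 m workspace
occupancy_logits = model(angles, points)
```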
For roughly three hours, the robotic arm twisted and moved at random while the neural network was fed the arm's motor commands and watched, through the cameras, how the arm actually moved in response. This process yielded 7,888 data points, and the team generated another 10,000 by simulating the arm in a virtual environment. To test how well the AI had learned to predict the arm's position, the team produced a point-cloud map showing where the AI "thinks" the moving arm should appear. The predictions were accurate to within about 1 percent: if the test space were one meter wide, the system could estimate the arm's position to within roughly one centimeter.
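To make the accuracy figure concrete, here is a small illustrative calculation; the position values are invented, and only the one-meter workspace and the roughly one-percent error come from the article.

```python
# Minimal sketch of the reported accuracy check: compare predicted and
# observed arm positions and express the mean error as a fraction of the
# workspace size (the coordinates below are made up for illustration).
import numpy as np

workspace_m = 1.0                            # test space roughly 1 m wide
predicted = np.array([[0.42, 0.10, 0.55]])   # metres, illustrative values
observed  = np.array([[0.43, 0.10, 0.55]])

error_m = np.linalg.norm(predicted - observed, axis=1).mean()
print(f"error: {error_m*100:.1f} cm ({error_m/workspace_m:.1%} of workspace)")
# ~1 cm error in a 1 m workspace corresponds to the ~1 percent figure above.
```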
If the neural network is considered part of the robot itself, this suggests the robot can know where its body is in space at any given moment, the report said.
“In my opinion, this is the first time in robotics history that a robot has been able to create a mental model of itself,” Lipson said. “It’s a small step, but it bodes well for the future.”
In their research paper, the researchers describe their robotic system as having a kind of "three-dimensional self-awareness" when planning actions. Lipson believes robots are still 20 to 30 years away from anything resembling general, human-like self-awareness, and Chen agrees that full self-awareness will take scientists a long time to achieve. "I wouldn't say the robot has (full) self-awareness," he said.
Others were more cautious, and even skeptical of the paper's claims about three-dimensional self-awareness. "Further research building on this approach, rather than on self-awareness, has the potential to yield useful applications," said Andrew Hunter of the Georgia Institute of Technology.
David Cameron of the University of Sheffield, UK, notes that robots can follow a prescribed path to a goal without any self-awareness at all. Still, he said: "A robot simulating its own trajectory towards a target is an important step towards creating something like self-awareness."
However, based on the information released by Lipson, Chen and their colleagues, Cameron is unsure whether this self-perception would persist if the neural network-equipped robot were moved to an entirely new location and had to continuously "learn" to correct its movements in response to new obstacles. "Continued self-modelling while moving will be the next big step towards self-aware robots," he said.