If you happen to keep up with the latest trends in robotics and industrial automation, then terms like teleoperation, telepresence, and teleoperated robots probably sound familiar. Teleoperated robots are remotely controlled robots: they may include some artificial intelligence, but they normally take their commands from a human operator and execute them exactly as instructed.
Right now, teleoperated robots are mostly used in medical surgery and military operations. Critical surgeries are made easier with teleoperated robotic arms and tools, which can reach tight spaces where human hands can’t operate. In military operations, teleoperated robots help gather intel and perform dangerous tasks such as defusing or moving explosives.
Until recently, these teleoperated robots were controlled by joystick-style setups or console-like controllers, similar to what you have on a PlayStation, Xbox, or Wii. With advances in Virtual Reality and Augmented Reality technologies, teleoperated robots are entering a new era: VR- and AR-controlled teleoperated robots.
What are VR and AR, and how are they utilized with teleoperated robots?
Primitive versions of Virtual Reality and Augmented Reality emerged long ago, but thanks to modern smartphone apps and standalone consumer hardware, VR and AR are now available to just about anyone. Recently, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) built a VR-based controller for teleoperated robots using an Oculus Rift headset. Oculus, acquired by Facebook in 2014, has become one of the industry leaders in VR/AR technology.
The prototype system that MIT’s CSAIL created works by receiving input from sensors placed around a room, which capture environment data so the robot can operate more effectively. Wearing a VR headset, the human operator sees through the robot’s eyes and makes movements that the robot mimics. MIT created two separate models for interacting with the robot: the direct model and the cyber-physical model.
In the direct model, the user sees what the robot sees through the VR headset. This approach makes the operator feel more connected to the robot, almost as if they were inside it. However, the latency between the VR controller and the robot is still significant, and the resulting lag can cause nausea for the operator.
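The direct-model loop described above can be sketched in a few lines: the operator's pose is streamed to the robot, the robot mirrors it, and the system watches the round-trip latency, since high lag is what causes nausea. This is a minimal illustrative sketch, not CSAIL's actual implementation; all names (`Pose`, `Robot`, `teleop_step`) and the 100 ms latency threshold are assumptions for the example.

```python
import time
from dataclasses import dataclass

# Hypothetical direct-model teleoperation sketch. In a real system the
# operator's headset pose would arrive over the network and the robot
# would be physical hardware; here both ends are simulated in-process.

@dataclass
class Pose:
    x: float
    y: float
    z: float

class Robot:
    def __init__(self) -> None:
        self.pose = Pose(0.0, 0.0, 0.0)

    def mimic(self, operator_pose: Pose) -> None:
        # In the direct model the robot simply reproduces the
        # operator's motion, one-to-one.
        self.pose = operator_pose

def teleop_step(robot: Robot, operator_pose: Pose,
                max_latency_s: float = 0.1):
    """One control tick: mirror the operator's pose and report whether
    the round trip stayed under the (assumed) comfort threshold."""
    sent_at = time.monotonic()
    robot.mimic(operator_pose)  # network round trip would happen here
    latency = time.monotonic() - sent_at
    return robot.pose, latency <= max_latency_s

robot = Robot()
mirrored, within_budget = teleop_step(robot, Pose(0.1, 0.2, 0.3))
```

In practice the interesting part is the latency check: a production system would measure the full headset-to-robot-to-video round trip and degrade gracefully (e.g. freeze the view) rather than show the operator a laggy, nausea-inducing stream.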