AR, VR, And Other Ways Of Controlling Robots

By NAVEEN JOSHI

Despite the steady rise in robot autonomy, there are still situations where a human needs to control a robot, either manually or through specialized tools. The amalgamation of AR, VR, and robotics promises greater control even in this age of autonomy.

Today’s robots are highly intelligent and increasingly autonomous in their operations. Near-continuous improvements in robotics mean that such machines can now execute far more than mundane, repetitive tasks, and the number of areas in which autonomous robots can handle critical work keeps growing. Glitch-free autonomous robots are a blessing for any organization, as they reduce the need to employ people for error-prone tasks that a machine can perform with much greater precision. While organizational robots have their limitations for now, the future of the technology certainly looks bright for businesses.

However, we haven’t yet reached the stage where robots are completely autonomous, so humans still need to control certain robot functions. Additionally, the tools and technologies used to control robots need to be easy to operate, to minimize the amount of training their users require. There are more than a few tools used for controlling robots, and we will take a look at three of them here.

AR Applications to Control Robotics

Augmented Reality, or AR, is a technology that creates an interactive simulation on a screen in which real-world surroundings are overlaid with computer-generated imagery. The wildly popular Pokémon GO game is a good example of an AR application: with the help of your smartphone camera, AR displays cartoon creatures amidst real backgrounds on your phone screen. Because the technology merely ‘augments’ graphical imagery onto a real-world background (as opposed to VR), users do not necessarily need expensive consoles to enjoy AR-based applications.

There are multiple existing instances of AR being used to control the movement and actions of robots. In 2017, Jared Frank, an engineer at New York University, and his team created an AR-powered robot control interface that can be operated from a smartphone or tablet. Like Pokémon GO, the system uses the smartphone camera to capture the surroundings and overlay virtual objects onto them. Using the app, users can tap or swipe on the screen to direct swarms of robots to move, lift, and displace objects, and to perform similar tasks.

The app was built with Xcode, Apple’s development environment, and detects, monitors, and maneuvers digitally connected robots that appear on the smartphone screen. It lets users manipulate several robots at once over Wi-Fi. Reference tags on the robots, as well as on the objects the machines need to move, let the app visually correlate what the camera sees with the surroundings shown on the device screen.
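To make the tap-to-direct idea concrete, here is a minimal sketch of how a screen tap might be translated into a floor position for a swarm robot, using two detected reference tags whose real-world positions are known. The function names, tag positions, and the simple per-axis linear map are all illustrative assumptions, not details of the NYU system.

```python
# Hypothetical sketch: mapping a screen tap to a floor target for a robot.
# Assumes two reference tags with known floor positions (in metres) have
# already been detected in the camera frame by the AR framework.

def make_screen_to_floor(tag_px, tag_floor):
    """Build a per-axis linear map from two detected reference tags.
    tag_px / tag_floor: [(x, y), (x, y)] pixel and floor coordinates."""
    (px0, py0), (px1, py1) = tag_px
    (fx0, fy0), (fx1, fy1) = tag_floor
    sx = (fx1 - fx0) / (px1 - px0)  # metres per pixel along x
    sy = (fy1 - fy0) / (py1 - py0)  # metres per pixel along y

    def to_floor(tap_x, tap_y):
        """Convert a tap at pixel (tap_x, tap_y) to a floor coordinate."""
        return (fx0 + (tap_x - px0) * sx, fy0 + (tap_y - py0) * sy)

    return to_floor

# Two tags seen at pixels (100, 100) and (500, 400), sitting at floor
# positions (0, 0) and (2.0, 1.5): a tap between them lands between them.
to_floor = make_screen_to_floor([(100, 100), (500, 400)],
                                [(0.0, 0.0), (2.0, 1.5)])
print(to_floor(300, 250))
```

A real implementation would use a full homography from the AR framework rather than this two-tag linear map, but the principle of anchoring screen input to tagged world positions is the same.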

Another real-world example of AR used for robotic control is an application designed by Professor Billo and Daniel Riehm at the University of Notre Dame.

This application requires an AR headset to control the various software and hardware tools included in the package. It was mainly researched and developed to control the movements and tasks of a six-axis universal robotic arm. The headset superimposes 3D graphical objects on the physical objects around the user. Users can pre-program the steps the robotic arm needs to perform, and the AR application ensures the task is carried out precisely. As stated earlier, users do not need in-depth technical knowledge of robotics or AI-related university degrees to use the application for robot control. Robotic control is achieved by recognizing the wearer’s hand gestures and head orientation.

The application uses a local TCP connection to send command signals to the robotic arm. The coordinates in the headset are calibrated so that the user’s gestures can be replicated by the arm, which can then be deployed for a number of operations.
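The source only says that commands travel over a local TCP connection, so the following is a hedged sketch of what such a link could look like: a client that serializes a target pose as JSON and sends it to an arm controller. The message format, field names, and loopback demo server are assumptions for illustration, not the actual Notre Dame protocol.

```python
import json
import socket
import threading

# Hypothetical sketch of sending a pose command to a robotic arm over a
# local TCP connection. The JSON message shape is an assumption, not the
# actual protocol used in the project described above.

def send_pose(host, port, pose):
    """Serialize a target pose and send it as one newline-terminated message."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall((json.dumps(pose) + "\n").encode("utf-8"))

# Loopback demo: a stand-in "arm controller" that records one command.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
received = []

def accept_one():
    conn, _ = server.accept()
    with conn:
        received.append(json.loads(conn.makefile().readline()))

t = threading.Thread(target=accept_one)
t.start()
send_pose("127.0.0.1", server.getsockname()[1],
          {"x": 0.30, "y": 0.10, "z": 0.25})
t.join()
server.close()
print(received[0])  # the pose the "arm" would act on
```

TCP is a natural fit here because commands must arrive in order and without loss; a dropped or reordered pose could send the arm somewhere unintended.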

VR and Robotic Control

Virtual Reality, or VR, takes the make-believe abilities of AR to the next level. Quite simply, where AR creates a partial simulation that includes a person’s actual surroundings, VR creates a completely simulated world through consoles and headsets. As a result, VR offers a more immersive experience, though it is somewhat more expensive than AR because additional hardware components are involved. The amalgamation of VR and robotics promises a correspondingly higher quality of control.

VR tools used to control robots allow users to carry out tasks remotely over the internet. From a convenience or multitasking point of view, VR therefore offers much more than even the most advanced AR-based robotic control applications. A VR-based controller involves a few components besides the headset, such as handheld motion controllers, that let a user direct a robot effectively from a completely different location. With all the connected hardware in hand, such tools allow users to ‘step into’ a robot’s ‘skin’ and control it in a more hands-on way despite the distance between the two. One of the main requirements when using VR and robotics together is a strong data connection for uninterrupted functionality.
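The need for a strong, uninterrupted connection is usually handled with a safety pattern: if controller updates stop arriving, the robot should stop rather than keep executing its last command. The sketch below illustrates that idea with a simple command watchdog; the class, timeout value, and velocity representation are assumptions, not part of any specific VR teleoperation product.

```python
import time

# Hypothetical watchdog sketch: a remote VR teleoperation link should
# stop the robot when controller updates stop arriving. The 200 ms
# timeout and the (linear, angular) velocity command are illustrative.

class TeleopWatchdog:
    def __init__(self, timeout_s=0.2):
        self.timeout_s = timeout_s
        self.last_update = None
        self.command = (0.0, 0.0)  # (linear, angular) velocity

    def on_controller_update(self, linear, angular, now=None):
        """Record the latest command from the VR motion controller."""
        self.command = (linear, angular)
        self.last_update = time.monotonic() if now is None else now

    def safe_command(self, now=None):
        """Return the latest command, or zero velocity if the link is stale."""
        now = time.monotonic() if now is None else now
        if self.last_update is None or now - self.last_update > self.timeout_s:
            return (0.0, 0.0)  # connection lost or stale: stop the robot
        return self.command

wd = TeleopWatchdog(timeout_s=0.2)
wd.on_controller_update(0.5, 0.1, now=10.0)
print(wd.safe_command(now=10.1))  # fresh update: command passes through
print(wd.safe_command(now=10.5))  # stale link: robot is commanded to stop
```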

One such application was designed by students and researchers at Brown University in Rhode Island. VR-based systems are highly useful in operational settings where the presence of humans is impractical or downright dangerous, such as inside a nuclear reactor or at a site where explosive weapons are assembled by robotic arms. In such situations, a VR-based controller lets human operators carry out the process accurately, making the robots perform the required actions with minimal lag.

Brown University’s application pairs a Baxter research robot with an HTC Vive VR system. Beyond that application, VR-based control can also be used to direct multiple robots at the same time. For now, the amalgamation of VR and robotics remains fairly limited and is developing at a rapid pace. Many of these applications are still at the concept stage, and it may take a few years until the combination of VR and robotics goes mainstream for daily business functions in organizations.

Brain Impulses for Robotic Control

One of the most futuristic concepts for controlling robots remotely is the use of brain signals. Such a concept has been developed at the Massachusetts Institute of Technology, where the Computer Science and Artificial Intelligence Laboratory (CSAIL) has created a project that “aims to let people correct robot mistakes with nothing more than their brain signals”.

The project, which intends to improve the existing human-robot interface, uses scalp electrodes to pick up brain signals so that robotic operations can be carried out with minimal lag and high accuracy. By using brain impulses to create a “feedback loop”, the application lets robots respond more intuitively and thereby improves the quality and speed of business operations. The electrodes attached to a user’s head detect electroencephalography (EEG) signals, which are then converted into commands a robot can follow.

Apart from speed and precision, effective human supervision is vital for robots performing work operations. The project therefore strengthens human intervention by combining two types of signals for robot control: EEG and electromyography (EMG), the electrical signals generated by muscles. Once received, the signals are checked for errors and then interpreted into robot movement. The process was largely designed to correct robot mistakes made during multiple-choice tasks.
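The two-signal loop described above can be sketched as a simple decision rule: an EEG channel flags that the robot’s current choice looks wrong, and an EMG gesture selects the replacement. The threshold, gesture names, and target labels below are purely illustrative assumptions, not values from the MIT CSAIL system.

```python
# Hypothetical sketch of the EEG + EMG feedback loop for a
# multiple-choice task: EEG flags an error, EMG picks the correction.

EEG_ERROR_THRESHOLD = 0.7   # confidence that an error signal occurred
EMG_GESTURES = {"flick_left": "target_A", "flick_right": "target_B"}

def correct_choice(current_target, eeg_error_confidence, emg_gesture):
    """Keep the robot's choice unless the EEG flags an error;
    if it does, use the EMG gesture to pick the replacement target."""
    if eeg_error_confidence < EEG_ERROR_THRESHOLD:
        return current_target  # no error detected: carry on as planned
    # Error detected: fall back to the current target if the gesture
    # is unrecognized, so a noisy EMG reading cannot misdirect the robot.
    return EMG_GESTURES.get(emg_gesture, current_target)

print(correct_choice("target_A", 0.2, "flick_right"))  # no error: target_A
print(correct_choice("target_A", 0.9, "flick_right"))  # corrected: target_B
```

In the real system the error-detection step would be a classifier running on streaming EEG data rather than a single confidence number, but the supervisory structure is the same: the human only intervenes when the robot errs.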

MIT’s project focuses “not only on safety purposes (during the robot control operations), but to properly convey information, coordinate activities and minimize cost of collaborative operations.”

The system is designed and implemented, and the “results are presented in real-time closed-loop and open-loop experiments as well as offline analysis of both primary and secondary brain signals.”

As we have seen, brain signal-based applications, AR, and VR each come with their own advantages and disadvantages for controlling robots effectively. AR is the cheapest of the three methods and requires the fewest hardware components, while VR offers greater practicality for remote operations. MIT’s project is at an early stage of development and is among many that promise to add another layer of finesse to the way modern robots can be controlled.
