Virtual Reality Robot Command Center

Foreword

My work has been featured in a publication presented at the Modelling and Simulation for Autonomous Systems (MESAS) Conference, held in October 2022 in Prague, Czech Republic. The primary author of this paper is Israel Lopez-Toledo, a colleague I worked closely with on the virtual reality application.

Introduction

In the contemporary landscape of military operations, the integration of robotic systems has become increasingly prevalent, presenting a paradigm shift in combat dynamics. This evolution is driven by the imperative to mitigate the risks of deploying soldiers in unpredictable environments. However, as the technology advances, the deployment of semi-autonomous platforms introduces complexities in maintaining effective control and communication between human operators and these robotic systems.

As an intern at the Construction Engineering Research Laboratory, I took part in research to address these complexities by developing a modular user interface framework that improves users’ experiences and task completion with semi-autonomous robotic platforms. Specifically, this framework aims to expose Robot Operating System (ROS)-based robotic platforms over the network. The focus of this article is one of the system’s front ends, or client layers, built to interact with robotic platforms: a virtual reality (VR) workstation that centralizes control of any robot and can be customized to maximize user efficiency.

The VR workstation, or robot command center, serves primarily as a demonstration of our framework’s capabilities: it shows that users can execute autonomous routines or teleoperate a ROS-based platform from any device with access to the robot’s network.

System Backend

The following design was proposed by Israel Lopez-Toledo; I did not take part in developing the system’s backend. Its discussion is warranted, however, since the VR workstation achieves its functionality as a client of the platform server described below.

The system architecture consists of three layers: the Robot Layer, the Service Layer, and the Client Layer. The Robot Layer manages the platform's semi-autonomous behavior and low-level controllers, the Service Layer hosts web services that expose the platform to clients, and the Client Layer accommodates graphical user interfaces (GUIs). Notably, the architecture supports any desired GUI on any device, not just VR interfaces, which fosters flexibility and accessibility.

The Service Layer comprises HTTP servers and web services hosted on the robotic platform that expose it to external clients outside the ROS environment. The platform server interacts with the platform manager as a ROS-service client. In a typical workflow, a client sends an HTTP request to the platform server, which processes it, issues a ROS-service request to the platform manager, and returns an HTTP response to the client with attached JSON data containing the information obtained from the platform manager.
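To make this round trip concrete, the sketch below shows how a client outside ROS might consume one of these responses. It is a minimal illustration, assuming a hypothetical /get_status route and example JSON field names (navigation_mode, battery_level, emergency_stop); the actual platform server defines its own routes and schema.

using System;
using System.IO;
using System.Net;
using UnityEngine;

[Serializable]
public class PlatformStatus {
    public string navigation_mode;   // assumed field: current navigation mode
    public float battery_level;      // assumed field: remaining battery charge
    public bool emergency_stop;      // assumed field: whether the e-stop is engaged
}

public static class PlatformClient {
    public static PlatformStatus GetStatus(string baseUrl) {
        // Client -> platform server: plain HTTP GET (hypothetical /get_status route).
        var request = (HttpWebRequest) WebRequest.Create(baseUrl + "/get_status");
        request.Method = "GET";

        // Platform server -> client: HTTP response carrying JSON obtained from the platform manager.
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream())) {
            string json = reader.ReadToEnd();
            return JsonUtility.FromJson<PlatformStatus>(json);
        }
    }
}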


Implementation

The virtual reality interface is built with the Unity engine, a common video game development platform. Unity was chosen for its ease of development and high compatibility with the HTC Vive headset. The application also uses SteamVR, which provides VR support within Unity.

The VR application offers several key features designed to enhance user control and situational awareness. In View 1, the camera stream feature provides two live feeds from the robotic platform's cameras. Camera images are retrieved by a client listening to a WebSocket server that subscribes to the cameras’ ROS topics. On the left, a monitor displays the robot’s status, offering real-time insight into critical system information, including the current navigation mode, battery level, and emergency stop status. Here, the client receives a DiagnosticArray message from the platform server in JSON format. Meanwhile, the indicator lights visually communicate potential issues with the robotic platform, such as network connectivity problems (green for connected, red for disconnected); a sketch of this polling logic appears after the camera-stream snippet below. The teleoperation feature allows users to manually operate the robotic platform using the VR controllers or the arrow buttons. Pressing an arrow button sends an HTTP request to the platform server, calling a ROS service that publishes directly to the command velocity topic.

Labelled VR Environment View 1

The code snippet below implements the camera stream. Each camera image is downloaded and rendered as the texture of a 2D plane.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class QuadCreator1 : MonoBehaviour {
    public void Start() {
        // Continuously pull frames from the platform's camera endpoint.
        StartCoroutine(DownloadImage("http://10.2.1.1:8082/getcamera1"));
    }

    IEnumerator DownloadImage(string MediaUrl) {
        while (true) {
            // Release textures from previous frames to keep memory usage bounded.
            Resources.UnloadUnusedAssets();
            using (UnityWebRequest request = UnityWebRequestTexture.GetTexture(MediaUrl)) {
                yield return request.SendWebRequest();
                if (request.isNetworkError || request.isHttpError)
                    Debug.Log(request.error);
                else
                    // Apply the downloaded frame as the texture of this 2D plane.
                    GetComponent<Renderer>().material.mainTexture =
                        ((DownloadHandlerTexture)request.downloadHandler).texture;
            }
        }
    }
}
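The connectivity indicator lights mentioned above could be driven by a simple polling loop like the following sketch. The /ping route and the one-second polling interval are assumptions made for illustration; the actual health check used by the workstation may differ.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ConnectivityIndicator : MonoBehaviour {
    // Hypothetical health-check endpoint; the real server route may differ.
    public string pingUrl = "http://10.2.1.1:8082/ping";

    IEnumerator Start() {
        var indicator = GetComponent<Renderer>();
        while (true) {
            using (UnityWebRequest request = UnityWebRequest.Get(pingUrl)) {
                yield return request.SendWebRequest();
                // Green when the platform server answers, red when the request fails.
                bool connected = !request.isNetworkError && !request.isHttpError;
                indicator.material.color = connected ? Color.green : Color.red;
            }
            yield return new WaitForSeconds(1f); // poll roughly once per second
        }
    }
}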

Users can change the navigation mode to set the navigation behavior of the robotic platform; like teleoperation, this is accomplished via an HTTP request. Lastly, the vehicle's current position is presented as a pin on a virtual tablet map (a sketch of how such a pin could be placed follows the mode-button snippet below). To generate the maps, a Python script consumes a priori satellite imagery and fuses it with live sensor data.

Labelled VR Environment View 2

The code snippet below is for one of the mode buttons.

using System.Net;
using UnityEngine;

public class Button1OnClick : MonoBehaviour
{
    public void OpenURL() {
        // GET request to the platform server; this route selects navigation mode 0.
        HttpWebRequest request = (HttpWebRequest) WebRequest.Create("http://192.168.131.48:8000/change_mode/mode=0");
        request.Method = "GET";

        // Dispose of the response once the server acknowledges the mode change.
        using (var webResponse = request.GetResponse()) { }
    }

    // Called by Unity when the button's collider is clicked.
    void OnMouseDown() {
        OpenURL();
    }
}
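The position pin could be updated with a loop like the sketch below. This is a minimal illustration that assumes a hypothetical /get_position route returning latitude and longitude as JSON, assumed field names, and a map plane whose corners correspond to known geographic bounds; the real application's map service and projection may differ.

using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

[Serializable]
public class VehiclePosition {
    public float latitude;    // assumed field names in the JSON payload
    public float longitude;
}

public class MapPin : MonoBehaviour {
    public Transform pin;                                        // pin object placed over the map plane
    public Vector2 southWest = new Vector2(40.000f, -88.000f);   // assumed map corner (latitude, longitude)
    public Vector2 northEast = new Vector2(40.010f, -87.990f);   // assumed map corner (latitude, longitude)
    public Vector2 mapSize = new Vector2(1f, 1f);                // map plane extent in local units

    IEnumerator Start() {
        while (true) {
            // Hypothetical endpoint; the actual position service may be exposed differently.
            using (UnityWebRequest request = UnityWebRequest.Get("http://10.2.1.1:8082/get_position")) {
                yield return request.SendWebRequest();
                if (!request.isNetworkError && !request.isHttpError) {
                    var pos = JsonUtility.FromJson<VehiclePosition>(request.downloadHandler.text);
                    // Linearly map latitude/longitude into the map plane's local coordinates.
                    float x = Mathf.InverseLerp(southWest.y, northEast.y, pos.longitude) * mapSize.x;
                    float z = Mathf.InverseLerp(southWest.x, northEast.x, pos.latitude) * mapSize.y;
                    pin.localPosition = new Vector3(x, pin.localPosition.y, z);
                }
            }
            yield return new WaitForSeconds(1f); // refresh the pin about once per second
        }
    }
}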

Results

To evaluate the proposed architecture, two field tests were conducted to assess the robustness and usability of the modular interface framework in simulated combat scenarios. The first test involved six combat engineers executing a scouting mission in a forest environment. After training on the VR robot command center and the robotic platform, the engineers successfully planned and executed autonomous tasks, gathering real-time data and scouting the target area remotely. The modular interface framework exhibited immediate responsiveness, with negligible lag in GUI interactions and near real-time sensor feedback. Usability was affirmed as the soldiers, with minimal training, completed their mission smoothly and praised the system's ease of use. The second test, conducted in a desert area with four combat engineers, mirrored the first test's success, with stable connections, real-time interactions, and positive user experiences. These field tests, conducted as part of large-scale US Army assessment events, substantiate the effectiveness and practicality of the developed GUIs and modular framework in supporting military operations.
