IMU Pitch Calibration UI Page

Motivation

It’s easy to overlook sensor error and the minute inaccuracies of any measurement tool. However, disregarding these seemingly small errors can snowball into a number of unexpected problems downstream, all of which stem from one issue: bad data.

At the Construction Engineering Research Laboratory (CERL), sensor calibration wasn’t a major issue until researchers observed that if they turned on the rover and left it sitting in the same place for half an hour, the measured yaw would drift by 0.5 degrees. It may not seem like much, but that error accumulated over time and caused many of the engineers’ navigation algorithms to fail. So began the cycle of manually calibrating IMUs, encoders, LiDARs, and cameras. Some sensors only needed to be calibrated once every few months; others needed calibration nearly every week because of the high accuracy demanded by some of the rover’s capabilities. Hours were spent calibrating sensors on the robot instead of testing new features that could support the main purpose of the rover: surveying geological landscapes that were unsafe or difficult for humans to traverse.

One of my responsibilities as an intern at CERL was to figure out how to efficiently calibrate the multitude of sensors mounted on our rover. The goal was to simplify the process so that less time and human effort would be spent on repetitive tasks whose results made the difference between the rover crashing into obstacles and the rover autonomously driving through rough terrain.

Design Requirements

Calibrating a sensor follows a generalized procedure: 1) collect real-world data, 2) compare expected versus measured values, and 3) correct the error in hardware or software. The proposed solution needed to simplify or automate each of these steps.
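As a small illustration of steps 2 and 3 in software, consider the wheel separation multiplier used by differential-drive controllers: command a known rotation, compare it against the rotation a trusted reference sensor (such as the IMU) measured, and scale the multiplier by the ratio. This is only a sketch of the idea; the function and numbers below are illustrative, not from the CERL codebase.

    def corrected_wheel_separation_multiplier(odometry_yaw_deg,
                                              imu_yaw_deg,
                                              current_multiplier=1.0):
        """Scale the wheel separation multiplier so that future odometry
        agrees with a trusted reference sensor (here, the IMU).

        Odometry-derived yaw is inversely proportional to the assumed
        wheel separation, so the correction is the ratio of what
        odometry reported to what the reference actually measured.
        """
        if imu_yaw_deg == 0:
            raise ValueError("measured yaw must be nonzero")
        return current_multiplier * (odometry_yaw_deg / imu_yaw_deg)

    # Example: odometry claims a full 360 deg spin, but the IMU only
    # measured 348 deg, so the assumed wheel separation was too small.
    print(corrected_wheel_separation_multiplier(360.0, 348.0))  # ~1.034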

I started off by writing scripts that could communicate with existing ROS (Robot Operating System) nodes. This worked well for users familiar with executing commands in a Unix shell, but not for soldiers with little to no technical experience, the intended users of the rover. So, a graphical user interface (GUI) was the best way to expose these calibration scripts. As with any GUI, the design should be easy to use and intuitive.

My solution needed to integrate well with ROS and the existing framework. Since I needed to subscribe and publish to other ROS nodes, I created a separate node for sensor calibration, which meant choosing a GUI toolkit that could live inside a custom ROS package. I chose KivyMD, a Material Design extension of the Kivy Python framework, because it integrated seamlessly with ROS.
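To give a sense of how the two fit together, here is a minimal sketch of a KivyMD app that registers itself as a ROS node (assuming ROS 1 with rospy and KivyMD 1.x; the node name, topic, and button are hypothetical, not the actual CERL code):

    import rospy
    from std_msgs.msg import String

    from kivymd.app import MDApp
    from kivymd.uix.button import MDRaisedButton
    from kivymd.uix.screen import MDScreen


    class CalibrationApp(MDApp):
        def build(self):
            # Register the GUI as its own ROS node. disable_signals lets
            # rospy coexist with Kivy, which installs its own handlers.
            rospy.init_node("sensor_calibration_gui", disable_signals=True)
            self.status_pub = rospy.Publisher("/calibration/status",
                                              String, queue_size=10)
            screen = MDScreen()
            screen.add_widget(MDRaisedButton(
                text="Start IMU calibration",
                pos_hint={"center_x": 0.5, "center_y": 0.5},
                on_release=self.start_calibration))
            return screen

        def start_calibration(self, *args):
            self.status_pub.publish(String(data="imu_calibration_started"))


    if __name__ == "__main__":
        CalibrationApp().run()

Because the GUI owns the main loop, ROS communication happens through publishers, subscribers, and background callbacks rather than rospy.spin().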

In addition, many calibration processes required the rover to move, so it was imperative to account for the safety of anyone in proximity to the rover while it moved. Some of the calibration methods I discussed with my colleagues relied on teleoperation and could not be made fully autonomous. For those, I needed to provide feedback to the operator so that the desired motion could be carried out with minimal error.


Implementation

The images below show some of the pages in the sensor calibration user interface I developed. There are roughly 25 pages in total, so I did not include all of them here. I worked on this project with another intern for roughly three weeks. The three white boxes at the top of some of the pages are oversized placeholders for live camera feeds from the rover. At the time I took these screenshots, I was unable to play bag files (pre-recorded data collected from a live session) on my personal work computer and have the images published to ROS topics; my computer had not yet been set up with the full software stack. Although not shown here, I did test the GUI on the rover and confirmed that three camera feeds (one front and two side views) relayed with millisecond latency.

[Screenshots of selected sensor calibration UI pages]
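For reference, each camera box can be backed by a widget along these lines (a sketch assuming rospy, cv_bridge, and OpenCV-style images; the topic name is hypothetical). The key detail is that rospy callbacks arrive on a background thread, so frames must be handed to Kivy’s main loop before touching widget state:

    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    from kivy.clock import Clock
    from kivy.graphics.texture import Texture
    from kivy.uix.image import Image as KivyImage


    class CameraFeed(KivyImage):
        """Renders frames from a sensor_msgs/Image topic."""

        def __init__(self, topic, **kwargs):
            super().__init__(**kwargs)
            self._bridge = CvBridge()
            rospy.Subscriber(topic, Image, self._on_image,
                             queue_size=1, buff_size=2 ** 24)

        def _on_image(self, msg):
            frame = self._bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            # Defer the widget update to the Kivy main thread.
            Clock.schedule_once(lambda dt: self._show(frame))

        def _show(self, frame):
            h, w = frame.shape[:2]
            texture = Texture.create(size=(w, h), colorfmt="bgr")
            texture.blit_buffer(frame.tobytes(), colorfmt="bgr",
                                bufferfmt="ubyte")
            texture.flip_vertical()  # OpenCV rows are top-down; GL is not
            self.texture = texture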

Along with the user interface, I wrote parts of the calibration scripts that tune the wheel parameters, gyrocompass, and IMU. In short, these scripts move or rotate the robot, query information from sensor nodes, and use the collected data to adjust parameters such as the wheel separation multiplier or the IMU frame alignment vector. Unfortunately, I cannot provide code for this project: the GUI and calibration scripts are currently hosted on our organization’s private GitLab, which I have not been authorized to share publicly. An outline of how the user interface and backend work is given below, followed by a short sketch of the flow:

  • Utilize KivyMD components to prompt user input 

    • Script and configuration file selection, travel distance, which IMU to calibrate, etc. 

  • Button callbacks initiate subprocesses used to run calibration scripts in parallel

  • ROS subscribers to progress-related topics receive metadata about calibration

    • Visually displayed as progress bars  

  • Revise ROS parameters and relaunch stack to implement changes
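To make the outline concrete, below is a compressed sketch of the flow behind one calibration page (again assuming ROS 1 with rospy and KivyMD; the package, script, topic, and parameter names are hypothetical placeholders, not the real ones):

    import subprocess

    import rospy
    from std_msgs.msg import Float32

    from kivy.clock import Clock
    from kivymd.uix.progressbar import MDProgressBar


    class CalibrationRunner:
        def __init__(self, progress_bar: MDProgressBar):
            self.progress_bar = progress_bar
            self.proc = None
            # Progress metadata published by the running script.
            rospy.Subscriber("/calibration/imu/progress", Float32,
                             self._on_progress)

        def start(self, *args):
            # Button callback: run the script as a subprocess so the
            # GUI stays responsive while the rover moves.
            self.proc = subprocess.Popen(
                ["rosrun", "rover_calibration", "calibrate_imu.py"])

        def _on_progress(self, msg):
            # Subscriber callbacks run off the main thread; defer the
            # progress bar update to Kivy's event loop.
            Clock.schedule_once(
                lambda dt: setattr(self.progress_bar, "value", msg.data))

        def apply(self, multiplier):
            # Persist the corrected value; the stack is relaunched
            # afterward so the new parameter takes effect.
            rospy.set_param("/rover/wheel_separation_multiplier",
                            multiplier)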

As of now, a few of the calibration processes have been tested on the rover, but many more still need to be. My time with the rover was limited, as other researchers needed it for testing. With my internship concluding and my return to school, my co-workers will be picking up where I left off.
