Funding Authority: National Science Foundation
This project focuses on the development of an integrated, intelligent, autonomous unmanned mobile sensor that will enable research into the foundations of the next generation of autonomous Unmanned Aerial Systems (UAS). The objective of this proposal is to develop an unmanned helicopter (weighing about 150 kg) along with a mobile landing platform for refueling.
Project Description
Autonomous takeoff and landing of UAVs requires precise estimation of the pose (x, y, z, roll, pitch, and yaw) relative to a landing marker or landing site. Satellite-based navigation systems typically cannot provide this estimate at the precision and update rate required by flight control systems (typically 50 Hz or higher). During unmanned flight, and especially during the landing procedure, the aircraft must be able to make split-second decisions based on the current state of the system. Visual sensors are well suited to the landing phase because they can provide pose estimates with an accuracy typically better than GPS, sufficient to complete the autonomous landing task. However, vision data carry a considerable amount of information that must be processed in real time in order to be useful.
Because high-frequency pose estimation is needed for precise localization and control performance, especially during takeoff and landing, in this project we explored parallel computing on a CPU/GPU for the development of pose estimation algorithms that use vision as an aiding sensor. We started with an analysis of the state of the art in autonomous landing. We found that only a few works exploit the GPU for on-board image processing, and none of them provides a full pose estimate or processes high-definition images; the remaining solutions run on CPUs or rely on a ground control station for image processing. None of them achieves a high on-board pose estimation rate with high-definition images.
To overcome these limitations, we designed a pose estimation system based on a fiducial marker and an embedded CPU/GPU board. The software follows a parallel computing approach that exploits the high parallelism of the GPU. We tested the accuracy and reliability of the system through several laboratory and field experiments, and then integrated it with a commercially available autopilot to perform autonomous landing in GPS-denied environments. Thanks to the high parallelism of the GPU, the developed algorithm detects the landing pad and provides a pose estimate at a minimum frame rate of 30 fps with 640x480-pixel images, regardless of the complexity of the scene, and can detect the landing pad even from several meters away. This frame rate allows smooth, reactive control of the UAV, enabling precise autonomous landing.
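As a rough illustration of this approach, the sketch below shows a CPU/GPU split for detecting a square fiducial marker and recovering the 6-DOF pose via Perspective-n-Point. It is a minimal sketch under assumed tools (OpenCV built with CUDA support, a marker of known side length, calibrated camera intrinsics) and is not the exact algorithm developed in the project; marker identification and corner ordering are omitted for brevity.

```cpp
// Illustrative CPU/GPU pipeline for fiducial-marker pose estimation.
// Assumes OpenCV >= 3.x built with CUDA modules; values are placeholders.
#include <opencv2/opencv.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <opencv2/cudaarithm.hpp>
#include <vector>

bool estimatePose(const cv::Mat& frame,
                  const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                  double markerSide,            // marker side length [m]
                  cv::Vec3d& rvec, cv::Vec3d& tvec)
{
    // Pixel-level work (grayscale conversion, thresholding) offloaded to the GPU.
    cv::cuda::GpuMat gFrame(frame), gGray, gBin;
    cv::cuda::cvtColor(gFrame, gGray, cv::COLOR_BGR2GRAY);
    cv::cuda::threshold(gGray, gBin, 128, 255, cv::THRESH_BINARY);

    // Back on the CPU: extract candidate quadrilaterals from the binary image.
    cv::Mat bin;
    gBin.download(bin);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
        std::vector<cv::Point> approx;
        cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);
        if (approx.size() != 4 || !cv::isContourConvex(approx)) continue;
        std::vector<cv::Point2f> quad(approx.begin(), approx.end());

        // Marker corners in the marker frame (Z = 0 plane).
        const float s = static_cast<float>(markerSide) / 2.f;
        std::vector<cv::Point3f> corners = {
            {-s,  s, 0}, { s,  s, 0}, { s, -s, 0}, {-s, -s, 0}};

        // Perspective-n-Point: corner correspondences + intrinsics -> 6-DOF pose.
        if (cv::solvePnP(corners, quad, cameraMatrix, distCoeffs, rvec, tvec))
            return true;  // tvec gives x, y, z; rvec yields roll/pitch/yaw
    }
    return false;
}
```

The intent is to mirror the general idea described above: the pixel-level operations that dominate processing time run on the GPU, while the lightweight geometric steps remain on the CPU.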
Furthermore, the use of an embedded CPU/GPU board allows the UAV to process the video on board in real time. This removes the need for a powerful ground control station for image processing, which would inevitably increase the delay between the acquisition of an image and the use of the data extracted from it.
The aircraft will be able to operate with both electric and fuel engines. The instrument will consist of two tightly coupled components:
- A novel lightweight unmanned helicopter (<150 kg), and
- A general-purpose landing platform that will also serve as a refueling/recharging station and data relay/repository.
The funding was granted to the Unmanned System Research Institute at the University of Denver, where I worked as a Research Scientist. My research focused on the development of a vision-based sensor for pose estimation (6 degrees of freedom), exploiting the high parallelism of the GPU for image processing together with a fiducial marker. The developed system provides pose estimates at 45 frames per second with an image resolution of 640x480 pixels.
The sensor has been tested for autonomous landing in indoor/outdoor environments using a custom quadrotor.
The hardware architecture of the sensor consists essentially of commercially available boards (Jetson TK1, Pixhawk autopilot) and some custom 3D-printed plastic supports (an anti-vibration camera mount and a Pixhawk mount).
The software consists of two modules:
- Pose estimation algorithm: this module runs on the Jetson TK1 and estimates the 6 degrees of freedom of the UAV. The pose data is then sent over a dedicated UART in JSON format (a hypothetical sketch of this message exchange is shown after this list).
- Pixhawk interface: this module, running on the Pixhawk autopilot, decodes the pose estimation message received from the Jetson TK1 and generates the corresponding control actions for the position controller.
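The exact message layout is not specified here, so the following is a hypothetical sketch of how the Jetson-side module could serialize a 6-DOF pose as one JSON object per line and write it to a UART. The field names, device path, and baud rate are illustrative assumptions, not the project's actual message definition.

```cpp
// Hypothetical example: serialize a 6-DOF pose as a JSON line and send it
// over a serial port from the Jetson side. Field names, device path, and
// baud rate are illustrative only.
#include <cstdio>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

struct Pose { double x, y, z, roll, pitch, yaw; };

int openUart(const char* device)
{
    int fd = open(device, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;
    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                      // raw 8N1-style transfer, no line editing
    cfsetospeed(&tio, B115200);           // assumed baud rate
    cfsetispeed(&tio, B115200);
    tio.c_cflag |= CS8 | CLOCAL | CREAD;
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

void sendPose(int fd, const Pose& p)
{
    char msg[160];
    // One JSON object per line; the autopilot-side parser can read until '\n'.
    int n = snprintf(msg, sizeof(msg),
        "{\"x\":%.3f,\"y\":%.3f,\"z\":%.3f,"
        "\"roll\":%.4f,\"pitch\":%.4f,\"yaw\":%.4f}\n",
        p.x, p.y, p.z, p.roll, p.pitch, p.yaw);
    if (n > 0) write(fd, msg, n);
}

int main()
{
    int fd = openUart("/dev/ttyTHS1");    // hypothetical UART device on the Jetson
    if (fd < 0) return 1;
    sendPose(fd, {0.12, -0.05, 1.80, 0.01, -0.02, 1.57});
    close(fd);
    return 0;
}
```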
The source code of the Pixhawk interface is available online at the following link: https://github.com/alessandro-benini/ardupilot/tree/vision_landing.
The current version of the sensor is designed to be mounted with minimal effort on various VTOL UAVs, such as multirotors and traditional helicopters, and is currently part of two patent applications:
- A. Benini, M. J. Rutherford, K. P. Valavanis - Image Processing for Pose Estimation.
- A. Benini, M. J. Rutherford, K. P. Valavanis - Design for a Visual Landing Target.
Below, the pose estimation sensor is shown mounted on different VTOL UAVs. The traditional helicopter is equipped with a high-accuracy IMU and an RTK GPS, in addition to the pose estimation sensor.
Below is a video of a quadrotor using the first version of this system to perform an indoor autonomous landing.
Link to the Project Website: https://www.nsf.gov/awardsearch/showAward?AWD_ID=1229236