Lightweight air-to-air unmanned aerial vehicle target detection model is an open-access work by Qing Cheng, Yazhe Wang, Wenjian He, and Yu Bai.

The rapid expansion of the drone industry has led to a significant increase in the number of low-altitude drones, raising concerns about collision avoidance and countermeasure strategies among these unmanned aerial vehicles. These challenges highlight the urgent need for effective air-to-air drone target detection. An ideal detection model must offer high accuracy, real-time capabilities, and a lightweight network architecture to balance precision and speed on embedded devices.

To meet these requirements, the authors curated a dataset of more than 10,000 images of drones operating at low altitude. The dataset spans diverse and cluttered backgrounds, strengthening the model's ability to generalize during training. The authors then applied a series of modifications to the YOLOv5 algorithm to achieve lightweight object detection.

A novel feature-extraction network, CF2-MC, streamlines the feature extraction process, while a new module, MG, in the feature-fusion stage improves detection accuracy and reduces model complexity. In addition, the original CIoU loss function was replaced with the EIoU loss function to further improve accuracy.
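For readers unfamiliar with the loss swap: EIoU keeps CIoU's IoU and center-distance terms but replaces the aspect-ratio term with separate width and height penalties, each normalized by the enclosing box. The sketch below is an illustrative, plain-Python version of the EIoU loss for axis-aligned boxes in `(x1, y1, x2, y2)` format; the function name and box layout are our own conventions, not the authors' implementation.

```python
def eiou_loss(box1, box2, eps=1e-7):
    """Illustrative EIoU loss for two boxes given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box1
    x1g, y1g, x2g, y2g = box2

    # Intersection-over-union term
    iw = max(0.0, min(x2, x2g) - max(x1, x1g))
    ih = max(0.0, min(y2, y2g) - max(y1, y1g))
    inter = iw * ih
    w1, h1 = x2 - x1, y2 - y1
    w2, h2 = x2g - x1g, y2g - y1g
    union = w1 * h1 + w2 * h2 - inter
    iou = inter / (union + eps)

    # Smallest enclosing box: its squared diagonal normalizes the center distance
    cw = max(x2, x2g) - min(x1, x1g)
    ch = max(y2, y2g) - min(y1, y1g)
    c2 = cw ** 2 + ch ** 2 + eps

    # Squared distance between box centers
    rho2 = ((x1 + x2) - (x1g + x2g)) ** 2 / 4 + ((y1 + y2) - (y1g + y2g)) ** 2 / 4

    # EIoU: IoU term + center term + separate width and height penalties
    return (1.0 - iou
            + rho2 / c2
            + (w1 - w2) ** 2 / (cw ** 2 + eps)
            + (h1 - h2) ** 2 / (ch ** 2 + eps))
```

Because width and height are penalized directly rather than through an aspect-ratio proxy, the gradient pushes each dimension toward the ground truth independently, which is often cited as the reason EIoU converges faster than CIoU.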

Experimental results demonstrated improved drone target detection accuracy, achieving mAP values of 95.4% on the UAVfly dataset and 82.2% on the Det-Fly dataset. Finally, real-world testing on the Jetson TX2 showed that the YOLOv5s-ngn model achieved an average inference time of 14.5 milliseconds per image.

Publication Date: January 2024

Lightweight air-to-air unmanned aerial vehicle target detection model contains the following major sections:

  • Introduction
  • Related work
  • UAVfly dataset
  • Methods
  • Results
  • Discussion
  • Data availability

Post Image: UAVfly dataset (Image Credit: Authors)