Multi-size drone detection using YOLOv5 network
Keywords: Drone detection; Computer vision; YOLOv5; Complex neural networks; IoU.
With increasingly modern technology, advanced and flexible functions, compact design, and low cost, drones have recently been used in many fields for a variety of effective purposes. Alongside these beneficial applications, however, hostile forces can leverage drones to survey terrain, carry illegal explosives, and so on. Such uses can seriously threaten national security and defense. To counter illegal drones effectively, we apply deep neural networks to detect them under a variety of conditions and across different drone sizes. Accordingly, a computer-based system combining modern cameras with an algorithmic model can address the complex drone detection problem. To this end, this paper proposes a complex neural network approach based on YOLOv5. With this method, we achieve a strong result (confidence of 0.993 at an IoU threshold of 0.5), which meets the requirements of the drone detection problem.
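The IoU (Intersection over Union) threshold of 0.5 mentioned in the reported result is the standard criterion for deciding whether a predicted bounding box matches a ground-truth box. The sketch below shows the conventional IoU computation for axis-aligned boxes; it is a generic illustration of the metric, not the authors' implementation.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as a true positive when `iou(pred, truth) >= 0.5`; two identical boxes give an IoU of 1.0, and disjoint boxes give 0.0.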