GNSS-denied visual localization ameliorative method for UAVs in non-urban environments
DOI: https://doi.org/10.54939/1859-1043.j.mst.92.2023.130-136

Keywords: Visual localization; Unmanned aerial vehicles.

Abstract
In the context of Unmanned Aerial Vehicles (UAVs), localization is critical for both military and civilian applications. This is particularly true in environments without urban infrastructure, where Global Navigation Satellite System (GNSS) signals are unavailable. In these settings, vision-based methods have emerged as a promising solution. Despite their potential, current deep learning-based matching algorithms exhibit significant limitations in accurately localizing UAVs. To address this, our paper introduces enhanced algorithms that build upon existing methods. Specifically, we propose the use of the DC-ShadowNet shadow removal algorithm for UAV image preprocessing, a critical step in urban areas where shadows from large structures can obscure ground details, especially under sunny conditions. Additionally, we employ an improved matching algorithm based on the ASpanFormer model to increase image-matching accuracy. Our testing shows that these advancements improve localization accuracy on both a public dataset and actual flight data. Furthermore, our method is well-suited for long-duration flights and offers considerable advantages in urban environments compared to previous state-of-the-art Visual Odometry techniques.
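The pipeline the abstract describes reduces, at its final step, to standard multi-view geometry: once a shadow-removed UAV frame has been matched against a georeferenced satellite tile, the correspondences fix the UAV's ground position. The sketch below is a minimal illustration of that step only, not the paper's released code: it assumes the correspondences were already produced by a DC-ShadowNet plus ASpanFormer-style front end, and the function name localize_from_matches, the use of the image centre as a nadir proxy, and the equirectangular degree conversion are all illustrative assumptions.

import cv2
import numpy as np

def localize_from_matches(uav_pts, sat_pts, tile_origin_lonlat, tile_gsd_m, uav_image_size):
    # Robustly fit a homography mapping UAV pixels onto satellite-tile pixels;
    # RANSAC rejects outlier correspondences that survive the learned matcher.
    H, inlier_mask = cv2.findHomography(uav_pts, sat_pts, cv2.RANSAC, 3.0)
    if H is None:
        return None
    # Project the UAV image centre (a proxy for the nadir point, i.e. the
    # ground point directly below the camera) into the satellite tile.
    w, h = uav_image_size
    centre = np.array([[[w / 2.0, h / 2.0]]], dtype=np.float64)
    cx, cy = cv2.perspectiveTransform(centre, H)[0, 0]
    # Convert the tile-pixel position to geographic coordinates with a local
    # equirectangular approximation (adequate over a single satellite tile).
    lon0, lat0 = tile_origin_lonlat               # tile's top-left corner
    m_per_deg_lat = 111320.0
    m_per_deg_lon = m_per_deg_lat * np.cos(np.radians(lat0))
    lat = lat0 - (cy * tile_gsd_m) / m_per_deg_lat   # image y grows southward
    lon = lon0 + (cx * tile_gsd_m) / m_per_deg_lon
    return lon, lat, int(inlier_mask.sum())

# Synthetic sanity check: a pure 200 px east / 100 px south offset on a
# 0.5 m/px tile should map the 1000x1000 frame centre (500, 500) to tile
# pixel (700, 600), i.e. 350 m east and 300 m south of the tile origin.
rng = np.random.default_rng(0)
uav = rng.uniform(0.0, 1000.0, size=(50, 2)).astype(np.float32)
sat = uav + np.array([200.0, 100.0], dtype=np.float32)
print(localize_from_matches(uav, sat, (106.0, 21.0), 0.5, (1000, 1000)))

In a real system, the returned inlier count would gate whether the estimate is trusted (for example, rejecting frames where shadow or texture loss leaves too few consistent matches), which is the failure mode the shadow-removal preprocessing is meant to reduce.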
References
[1]. Qin, Tong et al. “VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator.” IEEE Transactions on Robotics 34, 1004-1020, (2018). DOI: https://doi.org/10.1109/TRO.2018.2853729
[2]. Alsalam, Bilal Hazim Younus et al. “Autonomous UAV with vision-based on-board decision making for remote sensing and precision agriculture.” IEEE Aerospace Conference, 1-12, (2017). DOI: https://doi.org/10.1109/AERO.2017.7943593
[3]. Nguyen, Thien-Minh et al. “VIRAL-Fusion: A Visual-Inertial-Ranging-Lidar Sensor Fusion Approach.” IEEE Transactions on Robotics 38, 958-977, (2022). DOI: https://doi.org/10.1109/TRO.2021.3094157
[4]. Psiaki, M. L. and T. E. Humphreys. “GNSS Spoofing and Detection.” Proceedings of the IEEE 104, 1258-1270, (2016). DOI: https://doi.org/10.1109/JPROC.2016.2526658
[5]. Shermeyer, Jacob et al. “SpaceNet 6: Multi-Sensor All Weather Mapping Dataset.” IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 768-777, (2020). DOI: https://doi.org/10.1109/CVPRW50498.2020.00106
[6]. Sarlin, Paul-Edouard et al. “SuperGlue: Learning Feature Matching With Graph Neural Networks.” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4937-4946, (2020). DOI: https://doi.org/10.1109/CVPR42600.2020.00499
[7]. Chen, Hongkai et al. “ASpanFormer: Detector-Free Image Matching with Adaptive Span Transformer.” European Conference on Computer Vision (ECCV), (2022). DOI: https://doi.org/10.1007/978-3-031-19824-3_2
[8]. Yol, Aurelien et al. “Vision-based absolute localization for unmanned aerial vehicles.” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3429-3434, (2014). DOI: https://doi.org/10.1109/IROS.2014.6943040
[9]. Bartolomei, Luca et al. “Perception-aware Path Planning for UAVs using Semantic Segmentation.” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5808-5815, (2020). DOI: https://doi.org/10.1109/IROS45743.2020.9341347
[10]. Shetty, Akshay and G. X. Gao. “UAV Pose Estimation using Cross-view Geolocalization with Satellite Imagery.” International Conference on Robotics and Automation (ICRA), 1827-1833, (2019). DOI: https://doi.org/10.1109/ICRA.2019.8794228
[11]. Guérin, Éric et al. “Satellite Image Semantic Segmentation.” arXiv abs/2110.05812, (2021).
[12]. Briechle, Kai and Uwe D. Hanebeck. “Template matching using fast normalized cross correlation.” SPIE Defense + Commercial Sensing, (2001). DOI: https://doi.org/10.1117/12.421129
[13]. Lowe, David G. “Distinctive Image Features from Scale-Invariant Keypoints.” International Journal of Computer Vision, (2004). DOI: https://doi.org/10.1023/B:VISI.0000029664.99615.94
[14]. Macario, A. et al. “A Comprehensive Survey of Visual SLAM Algorithms.” Robotics 11, 24, (2022). DOI: https://doi.org/10.3390/robotics11010024
[15]. Delmerico, Jeffrey A. and Davide Scaramuzza. “A Benchmark Comparison of Monocular Visual-Inertial Odometry Algorithms for Flying Robots.” IEEE International Conference on Robotics and Automation (ICRA), 2502-2509, (2018). DOI: https://doi.org/10.1109/ICRA.2018.8460664
[16]. Gurgu, Marius-Mihail et al. “Vision-Based GNSS-Free Localization for UAVs in the Wild.” 7th International Conference on Mechanical Engineering and Robotics Research (ICMERR), 7-12, (2022). DOI: https://doi.org/10.1109/ICMERR56497.2022.10097798
[17]. Rublee, Ethan et al. “ORB: An efficient alternative to SIFT or SURF.” International Conference on Computer Vision, 2564-2571, (2011). DOI: https://doi.org/10.1109/ICCV.2011.6126544
[18]. Yi, Kwang Moo et al. “LIFT: Learned Invariant Feature Transform.” arXiv abs/1603.09114, (2016).
[19]. DeTone, Daniel et al. “Toward Geometric Deep SLAM.” arXiv abs/1707.07410, (2017).
[20]. DeTone, Daniel et al. “SuperPoint: Self-Supervised Interest Point Detection and Description.” IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 337-33712, (2018). DOI: https://doi.org/10.1109/CVPRW.2018.00060
[21]. Jin, Yeying et al. “DC-ShadowNet: Single-Image Hard and Soft Shadow Removal Using Unsupervised Domain-Classifier Guided Network.” International Conference on Computer Vision (ICCV), (2021). arXiv abs/2207.10434.
[22]. Sun, Jiaming et al. “LoFTR: Detector-Free Local Feature Matching with Transformers.” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021). arXiv abs/2104.00680.