Mathematical problems of feature matching for vision-guided vehicles with limited resources
DOI: https://doi.org/10.17721/1812-5409.2025/2.21
Keywords: image matching, feature detection, pattern recognition, computer vision, unmanned autonomous vehicles
Abstract
Vision-based sensing plays a critical role for autonomous vehicles, where image data serve as the primary input for navigation, mapping, state estimation, and motion control. These systems operate under real-time and resource-constrained conditions, requiring feature detection and matching algorithms that are both accurate and computationally efficient. A key component of such pipelines is the identification of keypoints – distinctive image locations x ∈ R^2 within a digital image function I : R^2 → R^c, where c is the number of channels and I(x) ∈ R^c denotes the local pixel intensity. Each detected keypoint is associated with a descriptor d ∈ R^D encoding the local visual structure in a form invariant to translation, rotation, and moderate scale changes. Keypoints x_i and x′_i detected in two images are matched by comparing their descriptors, yielding putative correspondences x_i ↔ x′_i. Geometric verification is performed by estimating a transformation matrix H ∈ R^{3×3} and accepting correspondences for which ∥x′_i − Hx_i∥ < ε (with points expressed in homogeneous coordinates), where ε is a matching tolerance. The proportion of such inlier correspondences, referred to as the inlier ratio, serves as an accuracy metric. Classical keypoint methods, e.g., the Scale-Invariant Feature Transform (SIFT), and learned methods, e.g., SuperPoint, are mathematically evaluated on a manually constructed dataset of satellite imagery. Performance is analyzed in terms of inlier ratio and computational efficiency, reflecting the trade-offs between robustness and resource use. The results yield design guidelines for integrating lightweight yet accurate vision algorithms into platforms such as UAVs, enabling reliable visual odometry and Simultaneous Localization and Mapping (SLAM) under constrained hardware conditions.
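The matching-and-verification stage described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it assumes keypoints and descriptors have already been extracted (e.g. by SIFT or SuperPoint), matches descriptors by nearest neighbour with Lowe's ratio test, and computes the inlier ratio for a given homography H; the function names and the ratio/tolerance values are illustrative defaults.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.
    d1: (N, D) descriptors of image 1; d2: (M, D) descriptors of image 2.
    Returns a list of index pairs (i, j) with d1[i] matched to d2[j]."""
    # Pairwise Euclidean distances between the two descriptor sets
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        j1, j2 = np.argsort(row)[:2]
        # Accept only if the best match is clearly better than the second best
        if row[j1] < ratio * row[j2]:
            matches.append((i, int(j1)))
    return matches

def inlier_ratio(x, x_prime, H, eps=3.0):
    """Fraction of correspondences with reprojection error ||x' - Hx|| < eps.
    x, x_prime: (N, 2) matched point coordinates; H: 3x3 homography."""
    xh = np.hstack([x, np.ones((len(x), 1))])   # to homogeneous coordinates
    proj = (H @ xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]           # perspective divide
    err = np.linalg.norm(x_prime - proj, axis=1)
    return float(np.mean(err < eps))
```

In a full pipeline, H itself would be estimated robustly from the putative matches, e.g. with RANSAC (Fischler & Bolles, 1981), and the inlier ratio reported over the surviving correspondences.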
Pages of the article in the issue: 138 - 141
Language of the article: English
References
Bojanić, D., Bartol, K., Pribanić, T., Petković, T., Donoso, Y. D., & Mas, J. S. (2019). On the comparison of classic and deep keypoint detector and descriptor methods. In IEEE 11th symposium on image and signal processing and analysis (pp. 64–69). https://doi.org/10.1109/ISPA.2019.8868792
Boyun, V., Kasim, A., Voznenko, L., & Matvienko, O. (2025). Three models for creating stereo images from UAV sensor data. International scientific conference Information technologies and computer modelling, 111–114 [in Ukrainian].
DeTone, D., Malisiewicz, T., & Rabinovich, A. (2018). SuperPoint: Self-supervised interest point detection and description. In IEEE conference on computer vision and pattern recognition workshops (pp. 224–236). https://doi.org/10.1109/CVPRW.2018.00060
Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381–395. https://doi.org/10.1145/358669.358692
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
Google. (2025). Maps static API (Accessed: 14.08.2025). https://developers.google.com/maps/documentation/maps-static/
Hoffmann, F., Nierobisch, T., Seyffarth, T., & Rudolph, G. (2006). Visual servoing with moments of SIFT features. In IEEE conference on systems, man and cybernetics (pp. 4262–4267). https://doi.org/10.1109/ICSMC.2006.384804
Huang, Q., Guo, X., Wang, Y., Sun, H., & Yang, L. (2024). A survey of feature matching methods. IET Image Processing, 18(6), 1385–1410. https://doi.org/10.1049/ipr2.13032
Kawamura, E., Dolph, C., Kannan, K., Brown, N., Lombaerts, T., & Ippolito, C. A. (2023). VSLAM and vision-based approach and landing for advanced air mobility. In AIAA SciTech 2023 Forum (p. 2196). https://doi.org/10.2514/6.2023-2196
Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
Mur-Artal, R., Montiel, J. M. M., & Tardos, J. D. (2015). ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 31(5), 1147–1163. https://doi.org/10.1109/TRO.2015.2463671
Prystavka, P., Cholyshkina, O., Kovtun, O., & Pryshchepa, D. (2025). Automation of UAV navigation support based on SIFT-like methods. In International Workshop on Computational Intelligence (IWSCI) (pp. 227–239). https://ceur-ws.org/Vol-4035/
Rusyn, B., Lutsyk, O., & Kosarevych, R. (2021). Evaluating the informativity of a training sample for image classification by deep learning methods. Cybernetics and Systems Analysis, 57(6), 853–863. https://doi.org/10.1007/s10559-021-00411-4
Sarlin, P.-E., DeTone, D., Malisiewicz, T., & Rabinovich, A. (2020). SuperGlue: Learning feature matching with graph neural networks. In IEEE conference on computer vision and pattern recognition (pp. 4938–4947). https://doi.org/10.1109/CVPR42600.2020.00499
Scaramuzza, D., & Fraundorfer, F. (2011). Visual odometry [tutorial]. IEEE Robotics & Automation Magazine, 18(4), 80–92. https://doi.org/10.1109/MRA.2011.943233
Se, S., Lowe, D. G., & Little, J. J. (2005). Vision-based global localization and mapping for mobile robots. IEEE Transactions on Robotics, 21(3), 364–375. https://doi.org/10.1109/TRO.2004.839228
Zhu, Q., Liu, C., & Cai, C. (2015). A novel robot visual homing method based on SIFT features. Sensors, 15(10), 26063–26084. https://doi.org/10.3390/s151026063
License
Copyright (c) 2025 Oleh Samoilenko

This work is licensed under a Creative Commons Attribution 4.0 International License.