Robust monocular visual odometry trajectory estimation in urban environments

Authors: Ahmed Abdu, Hakim A. Abdo, Al-Alimi Dalal

Journal: International Journal of Information Technology and Computer Science (IJITCS)

Issue: Vol. 11, No. 10, 2019.

Free access

Visual SLAM (Simultaneous Localization and Mapping) is widely used for autonomous navigation in robots and vehicles. Trajectory estimation is one component of Visual SLAM: the camera pose must be estimated so that image observations can be aligned with real-world locations. In this paper, we present a new framework for trajectory estimation based on monocular Visual Odometry. The proposed method combines feature-point extraction and matching using ORB (Oriented FAST and Rotated BRIEF) with pose estimation using PnP (Perspective-n-Point). A Matlab® dynamic model and an OpenCV/C++ platform were used to implement a robust monocular Visual Odometry pipeline for trajectory estimation in outdoor environments. The proposed method shows that meaningful depth can be estimated and that frame-to-frame rotations and translations can be recovered even in large, texture-poor scenes. The best key points are selected from the ORB detector output according to their response values, and these key points are used to reduce trajectory estimation error. Finally, the robustness and performance of the proposed method were verified on image sequences from the public KITTI dataset.
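
The pipeline described above maps onto standard OpenCV calls. The following OpenCV/C++ sketch (an illustration under stated assumptions, not the authors' code) shows the two-view step: ORB detection, selection of the strongest keypoints by response value via cv::KeyPointsFilter::retainBest, Hamming-distance descriptor matching, and frame-to-frame pose recovery. The file names are placeholders, and the camera matrix uses the commonly quoted intrinsics of the KITTI odometry grayscale camera (sequence 00).

    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <vector>

    int main() {
        // Two consecutive grayscale frames (file names are placeholders).
        cv::Mat img1 = cv::imread("frame_000000.png", cv::IMREAD_GRAYSCALE);
        cv::Mat img2 = cv::imread("frame_000001.png", cv::IMREAD_GRAYSCALE);

        // 1. Detect ORB (Oriented FAST) keypoints in each frame.
        cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
        std::vector<cv::KeyPoint> kp1, kp2;
        orb->detect(img1, kp1);
        orb->detect(img2, kp2);

        // 2. Keep only the strongest keypoints, ranked by response value,
        //    before computing descriptors (the selection step the abstract
        //    credits with reducing trajectory error).
        cv::KeyPointsFilter::retainBest(kp1, 1000);
        cv::KeyPointsFilter::retainBest(kp2, 1000);

        // 3. Compute rotated-BRIEF descriptors and match them with
        //    Hamming distance and cross-checking.
        cv::Mat des1, des2;
        orb->compute(img1, kp1, des1);
        orb->compute(img2, kp2, des2);
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(des1, des2, matches);

        // 4. Recover the frame-to-frame rotation R and (unit-scale)
        //    translation t from the essential matrix under RANSAC.
        std::vector<cv::Point2f> pts1, pts2;
        for (const cv::DMatch& m : matches) {
            pts1.push_back(kp1[m.queryIdx].pt);
            pts2.push_back(kp2[m.trainIdx].pt);
        }
        // Intrinsics of the KITTI odometry grayscale camera (sequence 00).
        cv::Mat K = (cv::Mat_<double>(3, 3) << 718.856, 0.0, 607.1928,
                                               0.0, 718.856, 185.2157,
                                               0.0, 0.0, 1.0);
        cv::Mat E, R, t, inliers;
        E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inliers);
        cv::recoverPose(E, pts1, pts2, K, R, t, inliers);
        // Accumulating (R, t) across frames yields the camera trajectory.
        return 0;
    }

Once scene points have been triangulated, the PnP step mentioned in the abstract can be carried out with cv::solvePnPRansac on the resulting 3D-2D correspondences; chaining the per-frame (R, t) estimates yields the camera trajectory that is compared against the KITTI ground truth.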


Keywords: Trajectory estimation, Monocular Visual Odometry, ORB, feature points, feature matching, PnP

Short address: https://sciup.org/15016390

IDR: 15016390   |   DOI: 10.5815/ijitcs.2019.10.02

References

  • D. Nister, O. Naroditsky, and J. Bergen, “Visual odometry,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), vol. 1, USA, 2004, pp. 652–659.
  • D. Scaramuzza, F. Fraundorfer, “Visual odometry [Tutorial]. Part I: The first 30 years and fundamentals,” IEEE Robot. Autom. Mag., vol. 18, no. 4, pp. 80–92, Dec. 2011.
  • L. Yu, C. Joly, G. Bresson, F. Moutarde, “Improving robustness of monocular urban localization using augmented Street View,” In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 513–519.
  • P. A. Zandbergen, S. J. Barbeau, “Positional accuracy of assisted GPS data from high-sensitivity GPS-enabled mobile phones,” Journal of Navigation, vol. 64, no. 3, pp. 381–399, 2011.
  • T. Oskiper, S. Samarasekera, R. Kumar, “CamSLAM: Vision Aided Inertial Tracking and Mapping Framework for Large Scale AR Applications,” In Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Nantes, France, 9–13 October 2017; pp. 216–217.
  • C. Fischer, P. T. Sukumar, M. Hazas, "Tutorial: Implementation of a pedestrian tracker using foot-mounted inertial sensors", IEEE Pervas. Comput., vol. 12, no. 2, pp. 17-27, Apr./Jun. 2013.
  • A. R. Jimenez, F. Seco, C. Prieto, J. Guevara, "A comparison of Pedestrian Dead-Reckoning algorithms using a low-cost MEMS IMU," 2009 IEEE International Symposium on Intelligent Signal Processing, pp. 37-42, 2009.
  • B. Jiang, U. Neumann, S. You, "A robust hybrid tracking system for outdoor augmented reality," Proc. VR, pp. 3-10, 2004.
  • A. Davison, “Real-time simultaneous localisation and mapping with a single camera,” in Proceedings of the IEEE International Conference on Computer Vision, vol. 2, 2003, pp.1403–1410.
  • X. Gao, T. Zhang, Y. Liu, Q. Yan, “14 Lectures on Visual SLAM: From Theory to Practice,” Publishing House of Electronics Industry, Chinese version, 1st ed., vol. 1, China, 2017, pp. 130–200.
  • B. B. Ready and C. N. Taylor, “Improving accuracy of MAV pose estimation using visual odometry,” Proc. American Control Conference, New York City, USA, July 2007, pp. 3721–3726.
  • R. I. Hartley, A. Zisserman, “Multiple View Geometry in Computer Vision,” Cambridge University Press, ISBN: 0521540518, 2nd ed, 2004, pp.310–407.
  • K. Konolige, M. Agrawal, J. Solà, “Large scale visual odometry for rough terrain,” in Robotics Research (Springer Tracts in Advanced Robotics, vol. 66), pp. 201–212, 2011.
  • A. Howard, "Real-time stereo visual odometry for autonomous ground vehicles," Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 3946-3952, 2008.
  • B. Kitt, A. Geiger, H. Lategahn, "Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme," Proc. IEEE Intell. Vehicles Symp., pp. 486-492, Jun. 2010.
  • M. Muja, D. G. Lowe, “Fast approximate nearest neighbors with automatic algorithm configuration,” in Proc. Intl. Conf. Comp. Vision Thy. Appl. (VISAPP), pp. 331–340, 2009.
  • A. Geiger, P. Lenz, R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, Rhode Island, 18–20 June 2012, pp. 3354–3361.
  • E. Rosten and T. Drummond, “Machine learning for high speed corner detection,” in European conference on computer vision, vol. 3951. pp. 430–443, Springer, 2006.
  • M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: Binary robust independent elementary features,” in European Conference on Computer Vision, pp. 778–792, Springer, 2010.
  • C. Mei, G. Sibley, M. Cummins, P. Newman, I. Reid, "RSLAM: A system for large-scale mapping in constant time using stereo", Int. J. Comput. Vision, vol. 94, no. 2, pp. 198-214, 2011.
  • A. Geiger, P. Lenz, C. Stiller, R. Urtasun, “Vision meets robotics: The KITTI dataset,” International Journal of Robotics Research, vol. 32, pp. 1229–1235, 2013.
  • G. Klein, D. Murray, “Parallel tracking and mapping on a camera phone,” Proc. 8th Int. Symp. Mixed and Augmented Reality (ISMAR), pp. 83–86, Oct. 2009.
  • C. Choi, S.-M. Baek, S. Lee, “Real-time 3D object pose estimation and tracking for natural landmark based visual servo,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3983–3989, 2008.
  • H. Strasdat, J. Montiel, A. Davison, “Real time monocular SLAM: Why filter?” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), pp. 2657–2664, 2010.
  • A. Pretto, E. Menegatti, and E. Pagello, “Omnidirectional dense large-scale mapping and navigation based on meaningful triangulation,” in Proc. IEEE Int. Conf. Robotics and Automation, pp. 3289–3296, 2011.
Research article