SUN Zuolei, HUANG Jiaming, ZHANG Bo
DOI:10.13340/j.jsmu.2016.04.016
Article ID: 1672-9498(2016)04-0087-05
Abstract: To improve the performance of the monocular visual odometry algorithm, two aspects are studied: visual feature selection and rejection of feature mismatches. The SURF descriptor is used to extract feature points from monocular images and to match features across adjacent frames of the image sequence, and the fundamental matrix and essential matrix are obtained in turn with the normalized linear eight-point method. The 3D coordinates of the matched points are recovered by triangulation, and the rotation and translation of the camera motion between two frames are then solved from the 2D-2D model, yielding a monocular visual odometry system. To improve the algorithm performance, the RANSAC algorithm is used to remove the feature mismatches left by the initial matching, and ground data are used to recover the translation scale of the camera motion. The experimental results verify that the RANSAC algorithm can effectively reject feature mismatches and reduce the cumulative error of the monocular visual odometry.
Keywords:
robot localization; visual odometry; feature refining; machine vision; SURF; RANSAC
CLC number: TP242
Document code: A
Monocular visual odometry with RANSAC-based outlier rejection
SUN Zuolei1, HUANG Jiaming1, ZHANG Bo2
(1. Information Engineering College, Shanghai Maritime University, Shanghai 201306, China;
2. Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China)
Abstract:
In order to enhance the performance of the monocular visual odometry algorithm, visual feature extraction and mismatched feature rejection are studied. The SURF descriptor is employed to extract features from monocular images and to match features across the adjacent image sequence. The fundamental matrix and essential matrix are derived using the normalized eight-point method. The 3D coordinates of the matched points are calculated by triangulation, and the camera translation and rotation between two frames are then estimated based on the 2D-2D model. As a result, a monocular visual odometry system is constructed. To improve the algorithm performance, the RANSAC algorithm is adopted to reject the feature mismatches from the first calculation, and the camera translation scale is obtained from the ground data. The experimental results demonstrate that the RANSAC algorithm can effectively eliminate feature mismatches and reduce the cumulative error of the monocular visual odometry.
Key words:
robot localization; visual odometry; feature refining; computer vision; SURF; RANSAC
Received: 2015-12-08; Revised: 2016-03-24
Foundation items: National Natural Science Foundation of China (61105097, 51279098, 61401270); Scientific Research and Innovation Project of Shanghai Municipal Education Commission (13YZ081)
Biography:
SUN Zuolei (1982-), male, born in Zaozhuang, Shandong, associate professor, PhD; research interests: mobile robot navigation and machine learning. E-mail: szl@mpig.com.cn
4 Conclusion
In this paper, the RANSAC algorithm is used to refine SURF feature matches so as to improve the performance of monocular visual odometry (VO). The experimental results show that removing mismatches with the ratio test alone leaves a large error, whereas rejecting mismatches with RANSAC in combination with the normalized linear eight-point method effectively reduces the cumulative error of the VO.
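As a rough illustration of the approach discussed above, the following NumPy sketch estimates the fundamental matrix with the normalized linear eight-point method inside a RANSAC loop, using the Sampson distance as the inlier criterion. All function names, thresholds, and iteration counts here are illustrative assumptions, not the paper's implementation; in practice the point correspondences would come from SURF matching.

```python
import numpy as np

def normalize_points(pts):
    """Hartley normalization: move the centroid to the origin and scale
    the points so their mean distance from the origin is sqrt(2)."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.linalg.norm(pts - centroid, axis=1).mean()
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def eight_point(p1, p2):
    """Normalized linear eight-point estimate of the fundamental matrix F
    from corresponding image points p1 (view 1) and p2 (view 2)."""
    n1, T1 = normalize_points(p1)
    n2, T2 = normalize_points(p2)
    # Each correspondence gives one row of the homogeneous system A f = 0.
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce the rank-2 constraint, then undo the normalization.
    U, S, Vt = np.linalg.svd(F)
    F = T2.T @ (U @ np.diag([S[0], S[1], 0.0]) @ Vt) @ T1
    return F / np.linalg.norm(F)

def ransac_fundamental(p1, p2, iters=500, thresh=1e-4, seed=0):
    """RANSAC over minimal 8-point samples, scoring hypotheses by the
    number of matches whose Sampson distance falls below `thresh`."""
    rng = np.random.default_rng(seed)
    h1 = np.column_stack([p1, np.ones(len(p1))])
    h2 = np.column_stack([p2, np.ones(len(p2))])
    best = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(p1), 8, replace=False)
        F = eight_point(p1[idx], p2[idx])
        Fx1, Ftx2 = (F @ h1.T).T, (F.T @ h2.T).T
        sampson = np.sum(h2 * Fx1, axis=1) ** 2 / (
            Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2)
        inliers = sampson < thresh
        if inliers.sum() > best.sum():
            best = inliers
    # Final least-squares refit on the whole consensus set.
    return eight_point(p1[best], p2[best]), best
```

Refitting on the full consensus set after the loop is the usual design choice: the minimal-sample estimate identifies the inliers, while the overdetermined refit averages out their residual noise.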
References:
[1] SCARAMUZZA D, FRAUNDORFER F. Visual odometry part I: the first 30 years and fundamentals[J]. IEEE Robotics & Automation Magazine, 2011, 18(4): 80-92. DOI: 10.1109/MRA.2011.943233.
[2] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: fast semi-direct monocular visual odometry[C]// Robotics and Automation (ICRA), 2014 IEEE International Conference on. Hong Kong: IEEE, 2014: 15-22. DOI: 10.1109/ICRA.2014.6906584.
[3] HANSEN P, ALISMAIL H, RANDER P, et al. Monocular visual odometry for robot localization in LNG pipes[C]// Robotics and Automation (ICRA), 2011 IEEE International Conference on. Shanghai: IEEE, 2011: 3111-3116. DOI: 10.1109/ICRA.2011.5979681.
[4] ZHENG Chi, XIANG Zhiyu, LIU Jilin. Monocular visual odometry fusing optical flow and feature point matching[J]. Journal of Zhejiang University (Engineering Science), 2014, 48(2): 279-284. DOI: 10.3785/j.issn.1008-973X.2014.02.014.
[5] FRAUNDORFER F, SCARAMUZZA D. Visual odometry part II: matching, robustness, optimization, and applications[J]. IEEE Robotics & Automation Magazine, 2012, 19(2): 78-90. DOI: 10.1109/MRA.2012.2182810.
[6] SANGINETO E. Pose and expression independent facial landmark localization using dense-SURF and the Hausdorff distance[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(3): 624-638. DOI: 10.1109/TPAMI.2012.87.
[7] WU Fuchao. Mathematical methods in computer vision[M]. Beijing: Science Press, 2008: 63-77.
[8] CHOI S, PARK J, YU W. Resolving scale ambiguity for monocular visual odometry[C]// Ubiquitous Robots and Ambient Intelligence (URAI), 2013 10th International Conference on. Jeju: IEEE, 2013: 604-608. DOI: 10.1109/URAI.2013.6677403.
[9] GEIGER A, LENZ P, STILLER C, et al. Vision meets robotics: the KITTI dataset[J]. The International Journal of Robotics Research, 2013, 32(11): 1231-1237.
[10] GEIGER A, ZIEGLER J, STILLER C. StereoScan: dense 3D reconstruction in real-time[C]// Intelligent Vehicles Symposium (IV), 2011 IEEE. Baden-Baden: IEEE, 2011: 963-968. DOI: 10.1109/IVS.2011.5940405.
(Editor: JIA Qunping)