Revisiting Global Translation Estimation with Feature Tracks

Abstract

Global translation estimation is a highly challenging step in the global structure-from-motion (SfM) pipeline. Many existing methods depend solely on relative translations, leading to inaccuracies in low-parallax scenes and degradation under collinear camera motion. While recent approaches aim to address these issues by incorporating feature tracks into their objective functions, they are often sensitive to outliers. In this paper, we first revisit global translation estimation methods that use feature tracks and categorize them as explicit or implicit. We then highlight the superiority of the objective function based on the cross-product distance metric and propose a novel explicit global translation estimation framework that takes both relative translations and feature tracks as input. To enhance the accuracy of the input observations, we re-estimate relative translations using the coplanarity constraint of the epipolar plane and propose a simple yet effective strategy for selecting reliable feature tracks. Finally, we demonstrate the effectiveness of our approach through experiments on urban image sequences and unordered Internet images, showing superior accuracy and robustness compared to many state-of-the-art techniques.
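As a rough illustration of the cross-product distance metric mentioned above, the following is a minimal sketch in notation chosen here for exposition, not the paper's exact formulation: camera centers c_i, fixed global rotations R_i, track points X_k, unit bearing vectors x̂_ik, and unit relative translation directions t_ij are all assumed symbols. The cross product of a point-to-center vector with a unit viewing direction gives the point-to-ray distance, and an analogous term constrains pairs of camera centers with relative translation directions:

\[
E\big(\{c_i\},\{X_k\}\big) \;=\; \sum_{(i,k)} \big\| (X_k - c_i) \times R_i^{\top}\hat{x}_{ik} \big\|
\;+\; \lambda \sum_{(i,j)} \big\| (c_j - c_i) \times t_{ij} \big\|,
\]

where R_i^T x̂_ik is the observed ray direction rotated into the world frame, λ balances the two terms, and the unknowns are the camera centers and track points, with rotations fixed by a prior rotation-averaging step. Under the same assumed conventions, re-estimating a relative translation direction from the coplanarity (epipolar) constraint with known rotations could take a form such as

\[
t_{ij}^{*} \;=\; \arg\min_{\|t\|=1} \sum_{k} \Big( \hat{x}_{jk}^{\top}\, [t]_{\times}\, R_{ij}\, \hat{x}_{ik} \Big)^{2},
\qquad R_{ij} = R_j R_i^{\top},
\]

with sign and frame conventions depending on the chosen camera model.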

Publication
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024
Peilin Tao, MS student (2023-now)
Hainan Cui, Associate Professor
Mengqi Rong, Assistant Professor
Shuhan Shen, Professor