This paper investigates means to benchmark methods for camera pose localization and tracking that, in addition to a camera image, make use of inertial sensor measurements. In particular, the direction of gravity has recently been shown to provide useful information that aids vision-based approaches, making them outperform vision-only methods. It is clearly desirable to benchmark the performance of such methods and to compare them with state-of-the-art approaches, but to the best of our knowledge, all publicly available benchmarking datasets unfortunately lack gravity information. We present simple means to generate one's own benchmarks for inertial sensor-aided localization and tracking methods and, most notably, show how existing datasets that do not contain inertial sensor data can be exploited. We demonstrate how to evaluate Gravity-Aligned Feature Descriptors (GAFD) and Gravity-Rectified Feature Descriptors (GREFD) on an existing benchmark dataset with ground-truth poses. By synthesizing gravity measurements from these poses, we achieve results similar to those obtained with real sensor measurements at significantly less effort. Most importantly, the proposed procedure enables comparison with existing evaluation results on the same data. The paper concludes with a requirements analysis and suggestions for the design of future benchmarking datasets for localization and tracking methods.
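The core idea of synthesizing a gravity measurement from a ground-truth pose can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the dataset provides a rotation matrix `R_wc` mapping world coordinates into the camera frame and that the world frame is z-up (gravity along negative z); both the function name and the axis convention are hypothetical and must be adapted to the dataset at hand.

```python
import numpy as np

def synthesize_gravity(R_wc):
    """Return the unit gravity direction in camera coordinates,
    derived from a ground-truth world-to-camera rotation R_wc.

    Assumes a z-up world frame, i.e. gravity points along -z in
    world coordinates; adapt g_world to the dataset's convention.
    """
    g_world = np.array([0.0, 0.0, -1.0])  # assumed world-frame gravity
    g_cam = R_wc @ g_world                # rotate into the camera frame
    return g_cam / np.linalg.norm(g_cam)  # normalize to a unit vector
```

Such a synthetic measurement can then be fed to gravity-aware methods like GAFD or GREFD in place of a real accelerometer reading, which is what makes existing vision-only datasets usable for their evaluation.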