Large-scale, drift-free SLAM using highly robustified building model constraints
Abstract
Constrained key-frame-based local bundle adjustment is at the core of many recent systems that address the problem of large-scale, georeferenced SLAM using a monocular camera together with data from inexpensive sensors and/or databases. The majority of these methods, however, impose constraints derived from proprioceptive sensors (e.g. IMUs, GPS, odometry) while ignoring the possibility of explicitly constraining the structure (e.g. the point cloud) produced by the reconstruction process. Moreover, research on online interactions between SLAM and deep learning methods remains scarce, and as a result, few SLAM systems take advantage of deep architectures. We explore both of these areas in this work: we use a fast deep neural network to infer semantic and structural information about the environment and, within a Bayesian framework, inject the results into a bundle adjustment process that constrains the 3D point cloud to texture-less 3D building models.
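As a purely illustrative sketch of the kind of objective this implies, assuming a point-to-plane formulation and notation that is ours rather than the paper's, the constrained local bundle adjustment can be written as

$$
\min_{\{C_j\},\,\{X_i\}} \; \sum_{i,j} \rho\big(\|x_{ij} - \pi(C_j, X_i)\|^2\big) \;+\; \lambda \sum_{i \in \mathcal{M}} \rho_r\big(d(X_i, \Pi_i)^2\big),
$$

where $\pi(C_j, X_i)$ projects point $X_i$ into key-frame $j$ with pose $C_j$, $x_{ij}$ is the corresponding image observation, $\mathcal{M}$ is the set of points the network labels as lying on building façades, $d(X_i, \Pi_i)$ is the distance from $X_i$ to its associated building-model plane $\Pi_i$, $\rho$ and $\rho_r$ are robust kernels, and $\lambda$ weights the model constraint (in a Bayesian reading, according to the relative measurement uncertainties).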