Every Pixel Counts: Unsupervised Geometry Learning with Holistic 3D Motion Understanding

Learning to estimate 3D geometry from a single image by watching unlabeled videos via a deep convolutional network has made significant progress recently. Current state-of-the-art (SOTA) methods are based on the learning framework of rigid structure-from-motion, where only 3D camera ego-motion is modeled for geometry estimation. However, moving objects also exist in many videos, e.g. moving cars in a street scene. In this paper, we tackle such motion by additionally incorporating per-pixel 3D object motion into the learning framework, which provides holistic 3D scene flow understanding and helps single image geometry estimation. Specifically, given two consecutive frames from a video, we adopt a motion network to predict their relative 3D camera pose and a segmentation mask distinguishing moving objects from the rigid background. An optical flow network is used to estimate dense 2D per-pixel correspondence. A single image depth network predicts depth maps for both images. The four types of information, i.e. 2D flow, camera pose, segmentation mask and depth maps, are integrated into a differentiable holistic 3D motion parser (HMP), where the per-pixel 3D motions of the rigid background and the moving objects are recovered. We design various losses w.r.t. the two types of 3D motion for training the depth and motion networks, yielding further error reduction for the estimated geometry. Finally, in order to resolve the 3D motion ambiguity in monocular videos, we incorporate stereo images into joint training. Experiments on the KITTI 2015 dataset show that our estimated geometry, 3D motion and moving object masks not only are constrained to be consistent with one another, but also significantly outperform those of other SOTA algorithms, demonstrating the benefits of our approach.
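To make the geometric decomposition behind the HMP concrete, the following is a minimal NumPy sketch of the idea the abstract describes: backproject depth to 3D points, separate the camera-induced (rigid) 3D motion from the residual object motion using the relative pose and the moving-object mask. All function names here are illustrative, a single pinhole intrinsic matrix K and nearest-neighbor flow sampling are assumed, and the paper's actual implementation is a batched, differentiable network module rather than this standalone routine.

```python
import numpy as np

def backproject(depth, K_inv):
    """Lift a depth map (h, w) to per-pixel 3D points in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ K_inv.T                        # normalized viewing rays, (h, w, 3)
    return rays * depth[..., None]              # scale each ray by its depth

def holistic_motion_parser(depth1, depth2, flow, R, t, mask, K):
    """Illustrative sketch of the 3D motion decomposition.

    depth1, depth2 : (h, w) predicted depth maps of two consecutive frames
    flow           : (h, w, 2) 2D optical flow from frame 1 to frame 2
    R, t           : relative camera rotation (3, 3) and translation (3,)
    mask           : (h, w) binary mask, 1 = moving object, 0 = rigid background
    K              : (3, 3) pinhole camera intrinsics (assumed known)
    """
    K_inv = np.linalg.inv(K)
    pts1 = backproject(depth1, K_inv)           # frame-1 points, camera-1 coords

    # Rigid (camera-induced) 3D motion: where each frame-1 point would move
    # if the whole scene were static and only the camera moved.
    pts1_cam2 = pts1 @ R.T + t                  # frame-1 points, camera-2 coords
    rigid_motion = pts1_cam2 - pts1

    # Full 3D correspondence: follow the 2D flow to the matched pixel in
    # frame 2 and backproject it with frame-2 depth (nearest-neighbor
    # sampling here; a differentiable bilinear warp would be used in training).
    h, w = depth1.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    u2 = np.clip(np.round(u + flow[..., 0]), 0, w - 1).astype(int)
    v2 = np.clip(np.round(v + flow[..., 1]), 0, h - 1).astype(int)
    pts2 = backproject(depth2, K_inv)[v2, u2]   # matched frame-2 points

    # Object 3D motion: residual displacement after removing camera motion,
    # kept only inside the moving-object mask.
    object_motion = (pts2 - pts1_cam2) * mask[..., None]
    return rigid_motion, object_motion
```

Under this decomposition, the residual object motion vanishes on correctly segmented static background, which is what lets consistency losses over the two motion types constrain depth, pose, flow and the mask jointly, as the abstract states.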