3D-LFM: Lifting Foundation Model

The lifting of 3D structure and camera from 2D landmarks is a cornerstone of the entire discipline of computer vision. Traditional methods have been confined to specific rigid objects, such as those in Perspective-n-Point (PnP) problems, but deep learning has expanded our capability to reconstruct a wide range of object classes (e.g., C3DPO and PAUL) with resilience to noise, occlusions, and perspective distortions. All these techniques, however, have been limited by the fundamental need to establish correspondences across the 3D training data, significantly restricting their utility to applications where one has an abundance of "in-correspondence" 3D data. Our approach harnesses the inherent permutation equivariance of transformers to manage a varying number of points per 3D data instance, withstands occlusions, and generalizes to unseen categories. We demonstrate state-of-the-art performance across 2D-3D lifting task benchmarks. Since our approach can be trained across such a broad class of structures, we refer to it simply as a 3D Lifting Foundation Model (3D-LFM) -- the first of its kind.
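To make the permutation-equivariance claim concrete, the sketch below (not the authors' implementation) shows how a transformer encoder without positional encodings treats each 2D landmark as a token, so point ordering is irrelevant, the point count N may vary across instances, and missing/occluded points are handled with a padding mask. The class name `Lifter2D3D` and all hyperparameters here are hypothetical illustration choices.

```python
# Minimal sketch, assuming a standard PyTorch transformer encoder with no
# positional encodings (the source of permutation equivariance). Hypothetical
# names/sizes: Lifter2D3D, d_model, nhead, num_layers.
import torch
import torch.nn as nn

class Lifter2D3D(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(2, d_model)           # per-landmark 2D -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 3)            # token -> 3D point

    def forward(self, pts2d, pad_mask):
        # pts2d: (B, N, 2); pad_mask: (B, N), True where a landmark is
        # missing (occluded/absent), so N can differ across categories.
        tok = self.encoder(self.embed(pts2d), src_key_padding_mask=pad_mask)
        return self.head(tok)                        # (B, N, 3)

# Permuting the input landmarks permutes the outputs identically, so no
# fixed cross-instance correspondence is needed.
model = Lifter2D3D().eval()
x = torch.randn(1, 5, 2)
mask = torch.zeros(1, 5, dtype=torch.bool)
perm = torch.randperm(5)
with torch.no_grad():
    y = model(x, mask)
    y_perm = model(x[:, perm], mask[:, perm])
print(torch.allclose(y[:, perm], y_perm, atol=1e-5))  # expected: True
```

Because no positional encoding is tied to a fixed landmark ordering, the same weights apply to any category's landmark set, which is what allows training across structures that lack shared correspondences.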