
LoGoPlanner: Localization-Grounded Navigation Policy with Metric-Aware Visual Geometry

Jiaqi Peng Wenzhe Cai Yuqiang Yang Tai Wang Yuan Shen Jiangmiao Pang

Abstract

Trajectory planning in unstructured environments is a fundamental yet challenging capability for mobile robots. Traditional modular pipelines suffer from latency across the perception, localization, mapping, and planning modules and from cascading error amplification between them. Recent end-to-end learning methods map raw visual observations directly to control signals or trajectories, promising better performance and efficiency in open-world settings. However, most prior end-to-end approaches still rely on a separate localization module that requires precise sensor extrinsic calibration for self-state estimation, which limits generalization across embodiments and environments. This work proposes LoGoPlanner, a localization-grounded end-to-end navigation framework that overcomes these limitations in three ways: (1) it fine-tunes a long-horizon visual-geometry backbone and grounds its predictions in absolute metric scale, providing the implicit state estimation required for accurate localization; (2) it reconstructs the surrounding scene geometry from historical observations, supplying dense, fine-grained environmental awareness for reliable obstacle avoidance; and (3) it conditions the policy on the implicit geometric information produced by these auxiliary tasks, reducing error propagation. In both simulation and real-world evaluations, the fully end-to-end design reduces cumulative error, and the metric-aware geometric memory improves planning consistency and obstacle avoidance, yielding more than a 27.3% improvement over oracle-localization baselines. The method also generalizes well across different embodiments and environments. Code and models are publicly available at the project page: https://steinate.github.io/logoplanner.github.io/

One-sentence Summary

Peng et al. introduce LoGoPlanner, a fully end-to-end navigation framework that eliminates external localization dependencies through implicit state estimation and metric-aware geometry reconstruction. By finetuning long-horizon visual-geometry backbones with depth-derived scale priors and conditioning diffusion-based trajectory generation on implicit geometric features, it achieves 27.3% better performance than oracle-localization baselines while enabling robust cross-embodiment navigation in unstructured environments.

Key Contributions

  • LoGoPlanner addresses the limitation of existing end-to-end navigation methods that still require separate localization modules with precise sensor calibration by introducing implicit state estimation through a fine-tuned long-horizon visual-geometry backbone that grounds predictions in absolute metric scale. This eliminates reliance on external localization while providing accurate self-state awareness.
  • The framework reconstructs dense scene geometry from historical visual observations to supply fine-grained environmental context for obstacle avoidance, overcoming the partial or scale-ambiguous geometry reconstruction common in prior single-frame approaches. This enables robust spatial reasoning across occluded and rear-view regions.
  • By conditioning the navigation policy directly on this implicit metric-aware geometry, LoGoPlanner reduces error propagation in trajectory planning, achieving over 27.3% improvement over oracle-localization baselines in both simulation and real-world evaluations while demonstrating strong generalization across robot embodiments and environments.

Introduction

Mobile robots navigating unstructured environments require robust trajectory planning, but traditional modular pipelines suffer from latency and cascading errors across perception, localization, and planning stages. While end-to-end learning methods promise efficiency by mapping raw visuals directly to control signals, they still critically depend on external localization modules that require precise sensor calibration, limiting generalization across robots and environments. Monocular visual odometry approaches further struggle with inherent scale ambiguity and drift, often needing additional sensors or scene priors that reduce real-world applicability. The authors overcome these limitations by introducing LoGoPlanner, an end-to-end framework that integrates metric-scale visual geometry estimation directly into navigation. It leverages a finetuned visual-geometry backbone to implicitly estimate absolute scale and state, reconstructs scene geometry from historical observations for obstacle avoidance, and conditions the policy on this bootstrapped geometry to minimize error propagation without external localization inputs.

Method

The authors leverage a unified end-to-end architecture—LoGoPlanner—that jointly learns metric-aware perception, implicit localization, and trajectory generation without relying on external modules. The framework is built upon a pretrained video geometry backbone, enhanced with depth-derived scale priors to enable metric-scale scene reconstruction. At its core, the system processes causal sequences of RGB-D observations to extract compact, world-aligned point embeddings that encode both fine-grained geometry and long-term ego-motion.
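To make the dataflow concrete, the following sketch outlines one forward pass in hypothetical pseudocode; the module names and interfaces are illustrative stand-ins for the components described below, not the authors' released API.

```python
# Hypothetical end-to-end forward pass mirroring the pipeline described in this
# section; module names and tensor shapes are assumptions, not the released code.
def logoplanner_step(model, rgb_seq, depth_seq, goal_xy):
    """rgb_seq: (B, T, 3, H, W), depth_seq: (B, T, 1, H, W), goal_xy: (B, 2)."""
    # 1. Metric-aware visual-geometry backbone over the causal RGB-D history.
    t_metric = model.backbone(rgb_seq, depth_seq)            # per-frame metric tokens
    # 2. Auxiliary heads: local point maps (geometry) and poses (implicit localization).
    h_p = model.point_head(t_metric)
    h_c = model.pose_head(t_metric)
    # 3. Query-based compression into implicit state and geometry representations.
    q_s, q_g = model.queries(h_c, h_p)
    # 4. Diffusion policy conditioned on the fused planning context.
    q_p = model.fuse(q_s, q_g, goal_xy)
    return model.diffusion_policy(q_p)                       # (B, horizon, 3) trajectory chunk
```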

Refer to the framework diagram, which illustrates the overall pipeline. The architecture begins with a vision transformer (ViT-L) that processes sequential RGB frames into patch tokens. These tokens are fused at the patch level with geometric tokens derived from depth maps using a lightweight ViT-S encoder. The fused tokens are then processed through a transformer decoder augmented with Rotary Position Embedding (RoPE) to produce metric-aware per-frame features:

$$\mathbf{t}_i^{\mathrm{metric}} = \mathrm{Attention}\big(\mathrm{RoPE}\big((\mathbf{t}_i^{I}, \mathbf{t}_i^{D}), \mathrm{pos}\big)\big)$$

where $\mathrm{pos} \in \mathbb{R}^{K \times 2}$ encodes 2D spatial coordinates to preserve positional relationships. To improve reconstruction fidelity, auxiliary supervision is applied via two task-specific heads: a local point head and a camera pose head. The local point head maps metric tokens to latent features $\mathbf{h}_i^{p}$, which are decoded into canonical 3D points in the camera frame:

$$\mathbf{h}_i^{p} = \phi_p\big(\mathbf{t}_i^{\mathrm{metric}}\big), \qquad \widehat{P}_i^{\mathrm{local}} = f_p\big(\mathbf{h}_i^{p}\big)$$

These are supervised using the pinhole model:

$$\mathbf{p}_{\mathrm{cam},i}(u,v) = D_i(u,v)\, K^{-1} \,[u \;\; v \;\; 1]^\top$$
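As a concrete illustration of this supervision target, the sketch below back-projects a metric depth map through the camera intrinsics to obtain per-pixel camera-frame points; the function name and tensor layout are illustrative assumptions.

```python
# Minimal sketch of pinhole back-projection used to build local-point supervision:
# p_cam(u, v) = D(u, v) * K^{-1} [u, v, 1]^T for every pixel.
import torch


def backproject_depth(depth, K):
    """depth: (H, W) metric depth map; K: (3, 3) intrinsics. Returns (H, W, 3) points."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=depth.dtype),
                          torch.arange(W, dtype=depth.dtype), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1)   # homogeneous pixel coords
    rays = pix @ torch.linalg.inv(K).T                      # K^{-1} [u v 1]^T per pixel
    return depth[..., None] * rays                          # scale each ray by metric depth
```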

In parallel, the camera pose head maps the same metric tokens to features $\mathbf{h}_i^{c}$, which are decoded into camera-to-world transformations $\widehat{T}_{\mathrm{c},i}$, defined relative to the chassis frame of the last time step to ensure planning consistency.

To bridge perception and control without explicit calibration, the authors decouple camera and chassis pose estimation. The chassis pose $\widehat{T}_{\mathrm{b},i}$ and relative goal $\widehat{g}_i$ are predicted from $\mathbf{h}_i^{c}$:

$$\widehat{T}_{\mathrm{b},i} = f_b\big(\mathbf{h}_i^{\mathrm{c}}\big), \qquad \widehat{g}_i = f_q\big(\mathbf{h}_i^{\mathrm{c}}, g\big)$$

The extrinsic transformation $T_{\mathrm{ext}}$, capturing camera height and pitch, is implicitly learned from training data with varying camera configurations, enabling cross-embodiment generalization.
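A minimal sketch of the decoupled heads $f_b$ and $f_q$ is given below; the MLP architectures, pooling, and output parameterizations (planar pose and 2D goal) are assumptions, since the text specifies only the heads' inputs and outputs.

```python
# Hedged sketch of the chassis-pose and relative-goal heads; exact architectures
# and output parameterizations are assumptions.
import torch
import torch.nn as nn


class ChassisHeads(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        # f_b: pose features -> chassis pose, here parameterized as (x, y, yaw)
        # relative to the last-frame chassis frame.
        self.f_b = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, 3))
        # f_q: pose features + goal -> goal expressed in the current chassis frame.
        self.f_q = nn.Sequential(nn.Linear(dim + 2, dim), nn.GELU(), nn.Linear(dim, 2))

    def forward(self, h_c, goal_xy):
        # h_c: (B, dim) pooled pose features; goal_xy: (B, 2) goal in the initial frame.
        # No camera-to-chassis extrinsics are passed in: they are absorbed implicitly
        # by these heads when training spans varied camera mounts.
        T_b = self.f_b(h_c)
        g_rel = self.f_q(torch.cat([h_c, goal_xy], dim=-1))
        return T_b, g_rel
```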

Rather than propagating explicit poses or point clouds, the system employs a query-based design inspired by UniAD. State queries $Q_S$ and geometric queries $Q_G$ extract implicit representations via cross-attention:

$$Q_S = \mathrm{CrossAttn}\big(Q_s, \mathbf{h}^{\mathrm{c}}\big), \qquad Q_G = \mathrm{CrossAttn}\big(Q_d, \mathbf{h}^{\mathrm{p}}\big)$$
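The sketch below shows one way to realize this query-based extraction with learnable queries and standard multi-head cross-attention; the query counts and feature dimension are assumptions.

```python
# Hedged sketch of the query-based extraction of implicit state/geometry features;
# query counts and dimensions are assumptions.
import torch
import torch.nn as nn


class ImplicitQueries(nn.Module):
    def __init__(self, dim=768, n_state=4, n_geom=16):
        super().__init__()
        self.q_s = nn.Parameter(torch.randn(n_state, dim))   # learnable state queries Q_s
        self.q_d = nn.Parameter(torch.randn(n_geom, dim))    # learnable geometric queries Q_d
        self.attn_s = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.attn_g = nn.MultiheadAttention(dim, 8, batch_first=True)

    def forward(self, h_c, h_p):
        # h_c: (B, N_c, dim) pose-head features; h_p: (B, N_p, dim) point-head features.
        B = h_c.shape[0]
        Q_S, _ = self.attn_s(self.q_s.expand(B, -1, -1), h_c, h_c)  # attend to localization features
        Q_G, _ = self.attn_g(self.q_d.expand(B, -1, -1), h_p, h_p)  # attend to geometry features
        return Q_S, Q_G
```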

The state and geometric queries are then fused with goal embeddings to form a planning context query $Q_P$, which conditions a diffusion policy head. The policy generates trajectory chunks $\boldsymbol{a}_t = (\Delta x_t, \Delta y_t, \Delta\theta_t)$ by iteratively denoising noisy action sequences:

$$\pmb{\alpha}^{k-1} = \alpha\big(\pmb{\alpha}^{k} - \gamma\, \epsilon_\theta\big(Q_P, \pmb{\alpha}^{k}, k\big) + \mathcal{N}(0, \sigma^2 I)\big)$$

where $\epsilon_\theta$ is the noise prediction network, and $\alpha, \gamma$ are diffusion schedule parameters. This iterative refinement ensures collision-free, feasible trajectories while avoiding error accumulation from explicit intermediate representations.
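A rough sketch of this denoising loop is shown below, following the update rule above; `epsilon_net` stands in for $\epsilon_\theta$, and the schedule values, horizon, and step count are placeholders rather than the paper's actual configuration.

```python
# Hedged sketch of the reverse-diffusion loop that generates a trajectory chunk;
# schedules and hyperparameters are placeholders.
import torch


@torch.no_grad()
def denoise_trajectory(epsilon_net, q_p, alphas, gammas, sigmas, horizon=8):
    """q_p: (B, dim) planning context Q_P; alphas/gammas/sigmas: per-step schedule
    values. Returns a (B, horizon, 3) chunk of (dx, dy, dtheta) actions."""
    B = q_p.shape[0]
    a_k = torch.randn(B, horizon, 3)                         # start from pure noise
    for k in reversed(range(len(alphas))):
        eps = epsilon_net(q_p, a_k, k)                       # predicted noise eps_theta(Q_P, a^k, k)
        noise = sigmas[k] * torch.randn_like(a_k) if k > 0 else torch.zeros_like(a_k)
        a_k = alphas[k] * (a_k - gammas[k] * eps + noise)    # one reverse step of the update rule
    return a_k
```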

Experiment

  • Simulation in 40 unseen InternScenes environments: LoGoPlanner improved home-scene Success Rate by 27.3 percentage points and Success weighted by Path Length by 21.3% over ViPlanner, validating robust collision-free navigation without external localization.
  • Real-world tests on TurtleBot, Unitree Go2, and G1 platforms: Achieved 90.0% Success Rate and 82.0% Success weighted by Path Length on Unitree Go2 in cluttered home scenes, demonstrating cross-platform generalization without SLAM or visual odometry.
  • Ablation studies: Confirmed Point Cloud supervision is critical for obstacle avoidance, and scale-injected geometric backbone reduces navigation error while improving planning accuracy.

The authors evaluate navigation performance in simulation across home and commercial scenes, measuring success rate (SR) and success weighted by path length (SPL). LoGoPlanner, which performs implicit state estimation without external localization, outperforms all baselines, achieving 57.3 SR and 52.4 SPL in home scenes and 67.1 SR and 63.9 SPL in commercial scenes. Results show LoGoPlanner improves home scene performance by 27.3 percentage points in SR and 21.3% in SPL over ViPlanner, highlighting the benefit of integrating self-localization with geometry-aware planning.

The authors evaluate LoGoPlanner against iPlanner and ViPlanner in real-world settings across three robotic platforms and environment types. LoGoPlanner achieves the highest success rates in all scenarios, notably 85% on TurtleBot in office environments, 70% on Unitree Go2 in home settings, and 50% on Unitree G1 in industrial scenes, outperforming both baselines. Results show LoGoPlanner’s ability to operate without external localization and maintain robust performance despite platform-induced camera jitter and complex obstacle configurations.

The authors evaluate ablation variants of their model by removing auxiliary tasks—Odometry, Goal, and Point Cloud—and measure performance in home and commercial scenes using Success Rate (SR) and Success weighted by Path Length (SPL). Results show that including all three modules yields the highest performance, with SR reaching 57.3 in home scenes and 67.1 in commercial scenes, indicating that joint supervision improves trajectory consistency and spatial perception. Omitting any module degrades performance, confirming that each contributes meaningfully to robust navigation.

The authors evaluate different video geometry backbones for navigation performance, finding that VGGT with scale injection achieves the highest success rate and SPL in both home and commercial scenes while reducing navigation and planning errors. Results show that incorporating metric-scale supervision improves trajectory accuracy and planning consistency compared to single-frame or unscaled multi-frame models.

