Reconstruction of driving scenarios with DVGT. Click on any thumbnail below to view the 3D reconstruction.
DVGT significantly outperforms prior methods across diverse driving scenarios. Below, we compare the dense 3D reconstruction results of DVGT (Ours) against VGGT and MapAnything.
Perceiving and reconstructing 3D scene geometry from visual inputs is crucial for autonomous driving. However, the field still lacks a driving-targeted dense geometry perception model that can adapt to different scenarios and camera configurations. To bridge this gap, we propose the Driving Visual Geometry Transformer (DVGT), which reconstructs a global dense 3D point map from a sequence of unposed multi-view visual inputs. We first extract visual features for each image with a DINO backbone, then employ alternating intra-view local attention, cross-view spatial attention, and cross-frame temporal attention to infer geometric relations across images. Multiple heads then decode a global point map in the ego coordinate frame of the first frame and the ego pose of each frame. Unlike conventional methods that rely on precise camera parameters, DVGT requires no explicit 3D geometric priors, enabling flexible processing of arbitrary camera configurations. It directly predicts metric-scaled geometry from image sequences, eliminating the need for post-alignment with external sensors. Trained on a large mixture of driving datasets, including nuScenes, OpenScene, Waymo, KITTI, and DDAD, DVGT significantly outperforms existing models across various scenarios.
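To make the output convention concrete, the sketch below shows how points expressed in each frame's own ego coordinates can be placed into the global point map frame (the ego coordinates of the first frame) using per-frame ego poses. This is a minimal illustration only; the tensor shapes, the helper name to_first_frame, and the pose convention (frame-to-first-frame rigid transforms) are assumptions for exposition, not the released interface.

import torch

def to_first_frame(points_ego, poses_to_first):
    # points_ego: (T, N, 3) metric points in each frame's own ego coordinates.
    # poses_to_first: (T, 4, 4) rigid transforms from frame t's ego frame to the
    # ego frame of the first frame (identity for t = 0).
    T, N, _ = points_ego.shape
    homo = torch.cat([points_ego, torch.ones(T, N, 1)], dim=-1)       # (T, N, 4)
    global_pts = torch.einsum("tij,tnj->tni", poses_to_first, homo)   # (T, N, 4)
    return global_pts[..., :3]

# Toy example: two frames; the second ego position is 2 m ahead of the first.
pts = torch.zeros(2, 1, 3)                  # one point at each frame's ego origin
poses = torch.eye(4).repeat(2, 1, 1)
poses[1, 0, 3] = 2.0                        # x-translation of frame 1 w.r.t. frame 0
print(to_first_frame(pts, poses))           # the frame-1 point maps to (2, 0, 0)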
DVGT leverages a DINO-pretrained ViT-L to tokenize the input images and augments the resulting image tokens with learnable ego tokens for ego pose estimation. It then alternates intra-view local attention, cross-view spatial attention, and cross-frame temporal attention to effectively aggregate spatial-temporal features, as sketched below. Finally, a pose head regresses frame-wise ego poses, while a point map head decodes a dense 3D point map for each image.
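The snippet below sketches one such alternating block in PyTorch, assuming tokens are laid out as (batch, frames, views, patches, channels). The layer widths, the residual structure, and the class name AlternatingAttentionBlock are illustrative assumptions rather than the exact released architecture.

import torch
import torch.nn as nn

class AlternatingAttentionBlock(nn.Module):
    # One round of the alternating attention used to aggregate spatial-temporal features.
    def __init__(self, dim=1024, heads=16):
        super().__init__()
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (B, T frames, V views, P patch tokens, C channels)
        B, T, V, P, C = x.shape
        # Intra-view local attention: each view's patch tokens attend within that view.
        h = x.reshape(B * T * V, P, C)
        x = x + self.local_attn(h, h, h)[0].reshape(B, T, V, P, C)
        # Cross-view spatial attention: all tokens of one frame interact across its V views.
        h = x.reshape(B * T, V * P, C)
        x = x + self.spatial_attn(h, h, h)[0].reshape(B, T, V, P, C)
        # Cross-frame temporal attention: tokens at the same view/patch slot interact across frames.
        h = x.permute(0, 2, 3, 1, 4).reshape(B * V * P, T, C)
        h = self.temporal_attn(h, h, h)[0].reshape(B, V, P, T, C).permute(0, 3, 1, 2, 4)
        return x + h

# Toy run: 2 frames, 3 views, 8 patch tokens per view, 1024-dim features.
tokens = torch.rand(1, 2, 3, 8, 1024)
print(AlternatingAttentionBlock()(tokens).shape)  # torch.Size([1, 2, 3, 8, 1024])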
DVGT significantly outperforms existing models in both accuracy and efficiency across diverse scenarios. The tables below provide a comprehensive quantitative evaluation covering 3D point reconstruction, ray depth estimation, ego pose estimation, and depth prediction against LiDAR ground truth. Notably, our method achieves the best ray depth estimation accuracy (δ < 1.25) on all evaluated datasets.
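For reference, the δ < 1.25 figure reported above is the standard threshold-accuracy metric: the fraction of valid pixels whose predicted-to-ground-truth depth ratio, taken in the worse direction, stays below 1.25. The short sketch below illustrates it; the function name and the validity mask are assumptions for exposition.

import torch

def delta_accuracy(pred_depth, gt_depth, threshold=1.25, min_depth=1e-3):
    # Fraction of valid pixels whose depth ratio (worse direction) is below the threshold.
    valid = gt_depth > min_depth                       # e.g. pixels with a LiDAR return
    ratio = torch.maximum(pred_depth[valid] / gt_depth[valid],
                          gt_depth[valid] / pred_depth[valid])
    return (ratio < threshold).float().mean()

# Toy example: predictions within 25% of the ground-truth depth count as correct.
gt = torch.tensor([10.0, 20.0, 30.0, 0.0])             # last pixel has no LiDAR return
pred = torch.tensor([11.0, 28.0, 29.0, 5.0])
print(delta_accuracy(pred, gt))                        # tensor(0.6667)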
@article{zuo2025dvgt,
  title={DVGT: Driving Visual Geometry Transformer},
  author={Zuo, Sicheng and Xie, Zixun and Zheng, Wenzhao and Xu, Shaoqing and Li, Fang and Jiang, Shengyin and Chen, Long and Yang, Zhi-Xin and Lu, Jiwen},
  journal={arXiv preprint arXiv:2512.16919},
  year={2025}
}