* Equal contribution
† Project leader
4D driving simulation is essential for developing realistic autonomous driving simulators. Despite advances in existing methods for generating driving scenes, significant challenges remain in view transformation and spatial-temporal dynamic modeling. To address these limitations, we propose Spatial-Temporal simulAtion for drivinG (Stag-1), which reconstructs real-world scenes and designs a controllable generative network to achieve 4D simulation. Stag-1 constructs continuous 4D point cloud scenes from surround-view data captured by autonomous vehicles, decouples spatial-temporal relationships, and produces coherent keyframe videos. It then leverages video generation models to create lifelike, controllable 4D driving simulation videos from any perspective. To expand the range of view generation, we train on vehicle motion videos with decomposed camera poses, improving the modeling of distant scenes. We also reconstruct vehicle camera trajectories to integrate 3D points across consecutive views, enabling comprehensive scene understanding along the temporal dimension. After extensive multi-level scene training, Stag-1 can simulate from any desired viewpoint and attain a deep understanding of scene evolution under static spatial-temporal conditions. Compared to existing methods, our approach shows promising performance in multi-view scene consistency, background coherence, and accuracy, and contributes to the ongoing advancement of realistic autonomous driving simulation.
Our Stag-1 framework is a 4D generative model for autonomous driving simulation. It reconstructs 4D scenes from point clouds and projects them into continuous, sparse keyframes. A spatial-temporal fusion framework is then used to generate simulation scenarios. Two key design aspects guide our approach: 1) We develop a method for 4D point cloud matching and keyframe reconstruction, ensuring the accurate generation of continuous, sparse keyframes that account for both vehicle motion and the need for spatial-temporal decoupling in simulation. 2) We build a spatial-temporal fusion framework that integrates surround-view information and continuous scene projection to ensure accurate simulation generation.
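To make the keyframe projection step concrete, the sketch below shows how an aggregated world-frame point cloud can be projected into a target camera view with a standard pinhole model. This is a minimal illustration under our own assumptions (function names, intrinsics, and image size are hypothetical), not the released Stag-1 implementation.

```python
# Minimal sketch (not the actual Stag-1 code): projecting an aggregated
# world-frame point cloud into a target keyframe view with a pinhole model.
# Names (K, T_w2c, image size) are illustrative assumptions.
import numpy as np

def project_points(points_w, T_w2c, K, hw=(900, 1600)):
    """Project Nx3 world points into pixel coordinates of one camera view.

    points_w : (N, 3) world-frame points aggregated across frames.
    T_w2c    : (4, 4) world-to-camera extrinsic matrix.
    K        : (3, 3) camera intrinsic matrix.
    Returns pixel coordinates, depths, and a validity mask.
    """
    H, W = hw
    # Homogeneous transform into the camera frame.
    pts_h = np.concatenate([points_w, np.ones((len(points_w), 1))], axis=1)
    pts_c = (T_w2c @ pts_h.T).T[:, :3]
    depth = pts_c[:, 2]
    in_front = depth > 1e-6
    # Pinhole projection; invalid points are masked out below.
    uv = (K @ pts_c.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
    in_image = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    return uv, depth, in_front & in_image

# Toy usage: identity extrinsics and a rough intrinsic guess.
K = np.array([[1000.0, 0.0, 800.0],
              [0.0, 1000.0, 450.0],
              [0.0, 0.0, 1.0]])
pts = np.random.rand(1000, 3) * np.array([20.0, 10.0, 50.0])
uv, depth, valid = project_points(pts, np.eye(4), K)
print(valid.sum(), "points land inside the image")
```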
The Stag-1 training pipeline consists of two stages. In the time-focused stage, we use even keyframes from a single viewpoint to build a 4D point cloud, which is then projected using the odd keyframes' camera parameters as conditions, with the odd keyframes themselves serving as training labels. In the spatial-focused stage, surround-view information is incorporated to extract inter-image features from the surrounding viewpoints, followed by training of the spatial-temporal block.
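The sketch below illustrates the even/odd keyframe split used in the time-focused stage: even-indexed keyframes form the condition from which the 4D point cloud is built, while odd-indexed keyframes supply the poses and ground-truth images for supervision. The helper names and toy usage are hypothetical placeholders, not the released training code.

```python
# Minimal, hypothetical sketch of the time-focused stage described above.
def split_keyframes(keyframes):
    """Split an ordered list of keyframes into condition/label sets."""
    cond = keyframes[0::2]   # even-indexed keyframes -> build the 4D point cloud
    label = keyframes[1::2]  # odd-indexed keyframes  -> supervision targets
    return cond, label

def time_focused_step(keyframes, build_point_cloud, render_with_pose, loss_fn):
    """One conceptual training step; the three callables are placeholders
    for point-cloud aggregation, pose-conditioned projection, and the
    reconstruction loss."""
    cond, label = split_keyframes(keyframes)
    cloud = build_point_cloud(cond)                  # 4D point cloud from even frames
    preds = [render_with_pose(cloud, kf["pose"])     # project with odd-frame poses
             for kf in label]
    return sum(loss_fn(p, kf["image"]) for p, kf in zip(preds, label))

# Toy usage with stand-in components.
frames = [{"pose": i, "image": float(i)} for i in range(6)]
loss = time_focused_step(
    frames,
    build_point_cloud=lambda ks: [k["image"] for k in ks],
    render_with_pose=lambda cloud, pose: sum(cloud) / len(cloud),
    loss_fn=lambda pred, target: abs(pred - target),
)
print("toy reconstruction loss:", loss)
```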
Qualitative comparison on the Waymo-Street dataset. The results show that our method outperforms existing approaches in scene reconstruction.
Quantitative comparison of our model with the 3DGS method on both reconstruction and novel view synthesis (NVS). Performance is evaluated on the Waymo-NOTR dataset, with 'PSNR*' and 'SSIM*' denoting metrics computed on dynamic objects and ENRF denoting EmerNeRF. The best results are highlighted in pink and the second best in blue.
Our code is based on ViewCrafter and MagicDrive.
Also, thanks to these excellent open-source repositories: Vista and S3Gaussian.
@article{wang2024stag-1,
author = {Wang, Lening and Zheng, Wenzhao and Du, Dalong and Zhang, Yunpeng and Ren, Yilong and Jiang, Han and Cui, Zhiyong and Yu, Haiyang and Zhou, Jie and Lu, Jiwen and Zhang, Shanghang},
title = {Stag-1: Towards Realistic 4D Driving Simulation with Video Generation Model},
journal = {arXiv preprint arXiv:2412.05280},
year = {2024},
}