Voyaging into Unbounded Dynamic Scenes from a Single View

University of Pennsylvania
✨ICCV 2025✨

Voyaging into Dynamic Scenes

[Teaser gallery: paired videos showing an input view alongside novel views generated by scene voyaging.]
Abstract

We study the problem of generating an unbounded dynamic scene from a single view. Since the scene changes over time, the generated views must be consistent with the underlying 3D motions. We propose DynamicVoyager, which reformulates dynamic scene generation as a scene outpainting process for new dynamic content. Since 2D outpainting models can hardly generate 3D-consistent motions from the 2D pixels of a single view alone, we treat pixels as rays, enriching the pixel input with ray context so that 3D motion consistency can be learned from the ray information. More specifically, we first map the single-view input video to a dynamic point cloud using estimated video depths. We then render a partial video at a novel view and outpaint it, with ray contexts from the point cloud, to generate 3D-consistent motions. The outpainted video is used to update the point cloud, which in turn supports scene outpainting from subsequent novel views.
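The abstract describes an iterative loop: lift the input to a point cloud with estimated depths, render a partial image at a novel view, outpaint the missing region, and merge the result back into the point cloud. The following is a minimal single-frame NumPy sketch of that loop, not the authors' implementation: the depth estimator and the ray-conditioned outpainting model are replaced by trivial stubs, and every name here (estimate_depth, outpaint, K, pose) is an illustrative assumption.

import numpy as np

# --- Stand-ins for learned components (assumptions, not the paper's models) ---

def estimate_depth(rgb):
    # Placeholder for a monocular video depth estimator; returns a flat depth map.
    return np.full(rgb.shape[:2], 5.0)

def outpaint(partial, known):
    # Placeholder for the ray-conditioned video outpainting model;
    # here it simply fills the unknown pixels with gray.
    out = partial.copy()
    out[~known] = 0.5
    return out

# --- Geometry of the voyaging loop ---

def unproject(rgb, depth, K, cam_to_world):
    # Lift every pixel to a 3D point: scale its camera ray by the estimated depth.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    rays = np.stack([(u - K[0, 2]) / K[0, 0],
                     (v - K[1, 2]) / K[1, 1],
                     np.ones((H, W))], axis=-1)
    pts = rays * depth[..., None]
    pts = pts @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]
    return pts.reshape(-1, 3), rgb.reshape(-1, 3)

def render_partial(points, colors, K, world_to_cam, H, W):
    # Project the point cloud into a novel view (no z-buffer, for brevity);
    # pixels that receive no point form the region to outpaint.
    pts = points @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    front = pts[:, 2] > 1e-6
    uvw = pts[front] @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    image = np.zeros((H, W, 3))
    known = np.zeros((H, W), dtype=bool)
    inb = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    image[uv[inb, 1], uv[inb, 0]] = colors[front][inb]
    known[uv[inb, 1], uv[inb, 0]] = True
    return image, known

# --- Voyaging: outpaint at each novel view, then grow the point cloud ---

H, W = 64, 96
K = np.array([[80.0, 0.0, W / 2], [0.0, 80.0, H / 2], [0.0, 0.0, 1.0]])
frame = np.random.rand(H, W, 3)                      # stand-in input frame
points, colors = unproject(frame, estimate_depth(frame), K, np.eye(4))

for i in range(1, 4):                                # camera moves sideways
    pose = np.eye(4)
    pose[0, 3] = 0.2 * i                             # camera-to-world translation
    partial, known = render_partial(points, colors, K, np.linalg.inv(pose), H, W)
    full = outpaint(partial, known)
    new_pts, new_cols = unproject(full, estimate_depth(full), K, pose)
    fresh = ~known.reshape(-1)                       # keep only newly outpainted pixels
    points = np.concatenate([points, new_pts[fresh]])
    colors = np.concatenate([colors, new_cols[fresh]])

The sketch captures only the geometric bookkeeping; in the actual method the point cloud is dynamic over time and the outpainting model is conditioned on per-pixel ray contexts drawn from it, both of which are omitted here.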

Video

BibTeX

@InProceedings{25iccv/tian_dynvoyager,
    author    = {Tian, Fengrui and Ding, Tianjiao and Luo, Jinqi and Min, Hancheng and Vidal, Ren\'e},
    title     = {Voyaging into Unbounded Dynamic Scenes from a Single View},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025}
}