DCARL: A Divide-and-Conquer Framework for Autoregressive Long-Trajectory Video Generation

Junyi Ouyang1,2, Wenbin Teng1,2, Gonglin Chen1,2, Yajie Zhao1,2, Haiwei Chen1,2

1 Institute for Creative Technologies

2 University of Southern California

junyiouy@usc.edu, wenbinte@usc.edu, gonglinc@usc.edu, zhao@ict.usc.edu, chw9308@hotmail.com

Abstract

Long-trajectory video generation is a crucial yet challenging task for world modeling, primarily due to the limited scalability of existing video diffusion models (VDMs). Autoregressive models, while supporting unbounded rollout, suffer from visual drift and poor controllability. To address these issues, we propose DCARL, a novel divide-and-conquer autoregressive framework that combines the structural stability of the divide-and-conquer scheme with the high-fidelity generation of VDMs. Our approach first employs a dedicated Keyframe Generator, trained without temporal compression, to establish long-range, globally consistent structural anchors. An Interpolation Generator then synthesizes the dense in-between frames autoregressively over overlapping segments, using the keyframes for global context and a single clean preceding frame for local coherence. Trained on a large-scale dataset of long-trajectory internet videos, our method outperforms state-of-the-art autoregressive and divide-and-conquer baselines in both visual quality (lower FID and FVD) and camera adherence (lower ATE and ARE), demonstrating stable, high-fidelity generation of long-trajectory videos up to 32 seconds in length.

Methodology

Figure 1: Overview of the DCARL pipeline, our two-stage autoregressive framework for long-term video synthesis.
Our framework follows a two-stage divide-and-conquer pipeline to synthesize long video sequences conditioned on multimodal inputs: an initial image, a camera trajectory, and a caption. In the first stage, the Keyframe Generator produces sparse, globally consistent structural anchors along the trajectory; in the second, the Interpolation Generator autoregressively fills in the dense frames between consecutive keyframes. This design decouples global structural planning from local detail interpolation, effectively reducing cumulative drift while maintaining precise adherence to the given trajectory.
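The rollout described above can be summarized in a short sketch. The interfaces below (keyframe_gen, interp_gen) and the keyframe spacing are hypothetical placeholders rather than the actual DCARL API, and the segment overlap is assumed here to be a single shared boundary frame; the sketch only illustrates the divide-and-conquer control flow.

```python
# Minimal sketch of the two-stage divide-and-conquer rollout.
# keyframe_gen and interp_gen are hypothetical callables standing in for
# the Keyframe Generator and Interpolation Generator described above.
import torch

def generate_long_video(
    keyframe_gen,                  # hypothetical: (image, poses, caption) -> keyframes
    interp_gen,                    # hypothetical: densifies one segment
    init_image: torch.Tensor,      # (C, H, W) initial frame
    camera_poses: list,            # one camera pose per output frame
    caption: str,
    keyframe_stride: int = 16,     # assumed spacing between keyframe anchors
) -> torch.Tensor:
    # Stage 1: plan sparse, globally consistent keyframes along the trajectory.
    anchor_poses = camera_poses[::keyframe_stride]
    keyframes = keyframe_gen(init_image, anchor_poses, caption)

    # Stage 2: autoregressively interpolate dense frames, segment by segment.
    # Each segment is conditioned on its bounding keyframes (global context)
    # and on the last clean frame of the previous segment (local coherence).
    frames = [init_image]
    for i in range(len(keyframes) - 1):
        seg_poses = camera_poses[i * keyframe_stride : (i + 1) * keyframe_stride + 1]
        segment = interp_gen(
            prev_frame=frames[-1],   # single clean preceding frame
            start_key=keyframes[i],
            end_key=keyframes[i + 1],
            poses=seg_poses,
            caption=caption,
        )
        # Consecutive segments overlap at the boundary frame (assumed here to
        # be a single frame); drop the duplicate before appending.
        frames.extend(segment[1:])
    return torch.stack(frames)  # (T, C, H, W)
```

Because every segment is re-anchored to a precomputed keyframe, errors made in one segment cannot accumulate unboundedly into the next, which is the key difference from a purely frame-by-frame autoregressive rollout.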

Quantitative Results

Table 1: Quantitative comparison of our method against state-of-the-art autoregressive and divide-and-conquer baselines.
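For reference, the camera-adherence metrics can be computed as in the sketch below. This assumes the standard definitions of ATE (root-mean-square error between predicted and ground-truth camera positions) and ARE (mean geodesic angle between rotations); the paper's exact evaluation protocol, such as any trajectory alignment applied before comparison, is not reproduced here.

```python
# Minimal sketch of the camera-adherence metrics under their standard
# definitions; any pre-alignment step used in the paper is omitted.
import numpy as np

def ate(pred_t: np.ndarray, gt_t: np.ndarray) -> float:
    """RMSE between predicted and ground-truth camera positions, shape (N, 3)."""
    return float(np.sqrt(np.mean(np.sum((pred_t - gt_t) ** 2, axis=1))))

def are(pred_R: np.ndarray, gt_R: np.ndarray) -> float:
    """Mean geodesic angle (degrees) between rotation matrices, shape (N, 3, 3)."""
    # Relative rotation R_err = R_pred^T @ R_gt per pose; its angle is
    # arccos((trace(R_err) - 1) / 2).
    rel = np.einsum("nij,nik->njk", pred_R, gt_R)
    cos = np.clip((np.trace(rel, axis1=1, axis2=2) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.mean(np.arccos(cos))))
```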

Qualitative Results (32s Generation)

Representative 32-second videos generated by our method on the OpenDV dataset.

OpenDV Example 1

OpenDV Example 2

Results on NuScenes Dataset (16s)

NuScenes Example 1

NuScenes Example 2

Results on DL3DV Dataset (32s)

DL3DV Example 1

DL3DV Example 2

Ablation Studies

Visual results demonstrating the impact of our key architectural design choices.

Keyframe Spatial-Structural Preservation

Motion-Inductive Noisy Conditioning
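Since the abstract states that the preceding frame is conditioned on in clean form, the noise in this design presumably targets other conditioning inputs. As a purely illustrative sketch, the hypothetical helper below shows the generic form of such noise augmentation, which discourages the model from copying a conditioning signal verbatim and thereby suppressing motion; it is not DCARL's actual formulation.

```python
# Illustrative noise augmentation of a conditioning signal (hypothetical;
# the exact motion-inductive noisy conditioning used by DCARL is not
# detailed on this page).
import torch

def noisy_condition(cond: torch.Tensor, noise_level: float = 0.1) -> torch.Tensor:
    """Simple convex blend of a clean conditioning tensor with Gaussian noise."""
    noise = torch.randn_like(cond)
    return (1.0 - noise_level) * cond + noise_level * noise
```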

Seamless Boundary Consistency

Necessity of Keyframe Anchoring

3D Scene Reconstruction

Screen recordings demonstrating the navigable 3D scenes reconstructed from our generated videos.

Reconstructed Scene 1

Reconstructed Scene 2