CoMoGaussian: Continuous Motion-Aware
Gaussian Splatting from Motion Blur Images

Jungho Lee1   DongHyeong Kim1   Dogyoon Lee1   Suhwan Cho1   Minhyeok Lee1  
Wonjoon Lee1   Taeoh Kim2   Dongyoon Wee2   Sangyoun Lee1,†
1Yonsei University     2NAVER Cloud    
† Corresponding author

Abstract


3D Gaussian Splatting (3DGS) has gained significant attention for its high-quality novel view rendering, motivating research to address real-world challenges. A critical issue is camera motion blur caused by movement during exposure, which hinders accurate 3D scene reconstruction. In this study, we propose CoMoGaussian, a Continuous Motion-Aware Gaussian Splatting that reconstructs precise 3D scenes from motion-blurred images while maintaining real-time rendering speed. Considering the complex motion patterns inherent in real-world camera movements, we predict continuous camera trajectories using neural ordinary differential equations (ODEs). To ensure accurate modeling, we employ rigid body transformations, which preserve the shape and size of the object but rely on discrete integration over sampled frames. To better approximate the continuous nature of motion blur, we introduce a continuous motion refinement (CMR) transformation that refines the rigid transformations with additional learnable parameters. By revisiting fundamental camera theory and leveraging advanced neural ODE techniques, we achieve precise modeling of continuous camera trajectories, leading to improved reconstruction accuracy. Extensive experiments demonstrate state-of-the-art performance both quantitatively and qualitatively on benchmark datasets covering a wide range of motion blur scenarios, from moderate to extreme blur.

Method


Architecture

First, we apply neural ordinary differential equations (ODEs) to model continuous camera movement during the exposure time. By modeling the camera trajectory continuously in 3D space, rather than through discrete pose sampling, our approach is fundamentally different from existing methods. Second, we continuously model rigid body transformations over time to accurately capture the shape and size of the static subject throughout the camera movement. Leveraging a continuous representation of rigid motion allows our approach to better account for subtle variations in the trajectory, leading to more precise reconstruction of static structures. Third, we introduce a continuous motion refinement (CMR) transformation, which enhances rigid motion modeling with learnable transformations, enabling a more accurate approximation of motion blur trajectories.
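The continuous-trajectory idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the `velocity_field` function below is a hand-written placeholder standing in for the learned neural ODE dynamics, and a simple Euler solver replaces whatever ODE solver the method actually uses. It shows the core mechanics: integrating a twist (an se(3) velocity) over the exposure interval and composing the resulting rigid transforms, so every intermediate pose remains a shape- and size-preserving rigid motion.

```python
import numpy as np

def hat(w):
    # so(3) "hat" operator: 3-vector -> skew-symmetric matrix
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi, dt):
    # Exponential map of a scaled twist xi = (omega, v) in se(3)
    # to a 4x4 rigid body transform (Rodrigues' formula).
    w, v = xi[:3] * dt, xi[3:] * dt
    theta = np.linalg.norm(w)
    W = hat(w)
    if theta < 1e-8:
        R, V = np.eye(3) + W, np.eye(3)
    else:
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1 - np.cos(theta)) / theta**2 * W @ W)
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * W
             + (theta - np.sin(theta)) / theta**3 * W @ W)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

def velocity_field(t, pose):
    # Placeholder for the neural ODE dynamics f_theta(t, pose);
    # a smooth hypothetical twist stands in for the learned network.
    return np.array([0.0, 0.0, 0.3 * np.cos(t), 0.1, 0.0, 0.05])

def integrate_trajectory(T0, n_steps=32, t0=0.0, t1=1.0):
    # Euler-integrate the pose ODE over the exposure [t0, t1],
    # right-multiplying each incremental rigid motion.
    poses, T, dt = [T0.copy()], T0.copy(), (t1 - t0) / n_steps
    for k in range(n_steps):
        T = T @ se3_exp(velocity_field(t0 + k * dt, T), dt)
        poses.append(T.copy())
    return poses

poses = integrate_trajectory(np.eye(4))
# Rigid-body property: every rotation block stays orthonormal,
# so the subject's shape and size are preserved along the trajectory.
for T in poses:
    R = T[:3, :3]
    assert np.allclose(R.T @ R, np.eye(3), atol=1e-6)
```

In this view, the motion-blurred observation is approximated by averaging the images rendered from the sampled poses along the integrated trajectory; the CMR step described above would then refine these rigid transforms with additional learnable parameters to better match the true blur.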

Estimated Camera Motion Trajectory


Trajectory Visualization

The camera trajectory for a single motion-blurred image is represented by colored cones, with the cone's color gradually transitioning from red to light purple as time progresses from $t_{0}$ to $t_{N}$. The visualized trajectories confirm that the camera paths generated by CoMoGaussian are smoothly continuous.

Comparison with State-of-the-Arts


Please click the videos for a better view.