Jungho Lee
I am a Ph.D. candidate at Yonsei University in Seoul, where I work on computer vision and machine learning.
My primary research areas are 3D computer vision, including 3D Gaussian Splatting (3D-GS) and Neural Radiance Fields (NeRF), and video understanding, with a particular focus on action recognition.
I'm always open to collaborations. Please feel free to contact me if you have any questions or suggestions. :)
Email / CV / LinkedIn / Google Scholar / GitHub
Hierarchically Decomposed Graph Convolutional Networks for Skeleton-Based Action Recognition
Jungho Lee, Minhyeok Lee, Dogyoon Lee, Sangyoun Lee
IEEE/CVF International Conference on Computer Vision (ICCV), 2023
[Paper / Code / bib]
We propose a hierarchically decomposed graph convolution with a novel hierarchically decomposed graph, which considers the semantic correlations between the joints and edges of the human skeleton.
Leveraging Spatio-Temporal Dependency for Skeleton-Based Action Recognition
Jungho Lee, Minhyeok Lee, Suhwan Cho, Sungmin Woo, Sungjun Jang, Sangyoun Lee
IEEE/CVF International Conference on Computer Vision (ICCV), 2023
[Paper / Code / bib]
We propose a novel Spatio-Temporal Curve Network (STC-Net) for skeleton-based action recognition, which consists of spatial modules with a spatio-temporal curve (STC) module and graph convolution with dilated kernels (DK-GC).
Guided Slot Attention for Unsupervised Video Object Segmentation
Minhyeok Lee, Suhwan Cho, Dogyoon Lee, Chaewon Park, Jungho Lee, Sangyoun Lee
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
[Paper / Code / bib]
We propose a guided slot attention network to reinforce spatial structural information and obtain better foreground–background separation.
Multi-scale Structural Graph Convolutional Network for Skeleton-based Action Recognition
Sungjun Jang, Heansung Lee, Woo jin Kim, Jungho Lee, Sungmin Woo, Sangyoun Lee
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2024
[Paper / bib]
We propose the multi-scale structural graph convolutional network (MSS-GCN) for skeleton-based action recognition, in which the common intersection graph convolution (CI-GC) leverages the overlapping neighbor information between neighboring vertices for a given pair of root vertices.
CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images
Jungho Lee, Minhyeok Lee, Donghyeong Kim, Dogyoon Lee, Suhwan Cho, Sangyoun Lee
Submitted to NeurIPS 2024
[Project Page / Paper / Code / bib]
We propose a continuous motion-aware blur kernel for 3D Gaussian splatting that uses 3D rigid transformations and neural ordinary differential equations to reconstruct accurate 3D scenes from blurry images at real-time rendering speed.
SMURF: Continuous Dynamics for Motion-Deblurring Radiance Fields
Jungho Lee, Dogyoon Lee, Minhyeok Lee, Donghyeong Kim, Sangyoun Lee
Submitted to ECCV 2024
[Project Page / Paper / Code / bib]
We propose a novel blur kernel for motion estimation based on neural ordinary differential equations to construct deblurred neural radiance fields.
Synchronizing Vision and Language: Bidirectional Token-Masking AutoEncoder for Referring Image Segmentation
Minhyeok Lee, Dogyoon Lee, Jungho Lee, Suhwan Cho, Heeseung Choi, Ig-Jae Kim, Sangyoun Lee
Submitted to ECCV 2024
[Paper / Code / bib]
We propose a novel bidirectional token-masking autoencoder (BTMAE) for referring image segmentation (RIS) that effectively exploits contextual information between language and visual features.
Sparse-DeRF: Deblurred Neural Radiance Fields from Sparse View
Dogyoon Lee, Donghyeong Kim, Jungho Lee, Minhyeok Lee, Seunghoon Lee, Sangyoun Lee
Submitted to TPAMI, 2024
[Paper / Code / bib]
We propose enhanced deblurred neural radiance fields trained under sparse-view settings for more practical, real-world applications.
Treating Motion as Option with Output Selection for Unsupervised Video Object Segmentation
Suhwan Cho, Minhyeok Lee, Jungho Lee, MyeongAh Cho, Sangyoun Lee
Submitted to Pattern Recognition (PR), 2024
[Paper / Code / bib]
We propose a novel motion-as-option network that treats motion cues as optional, together with an adaptive output selection algorithm that adopts the optimal prediction at test time.