Extreme Low-Light Raw Video Denoising
극단적 저조도 RAW 비디오 디노이징
- Keywords: deep-learning, image processing, video, dataset
- Institution: Graduate School, Sogang University
- Advisor: 강석주
- Year of publication: 2026
- Degree conferred: February 2026
- Degree: Master's
- Department and major: Department of Electronic Engineering, Graduate School
- URI: http://www.dcollection.net/handler/sogang/000000082357
- UCI: I804:11029-000000082357
- Language of text: English
- Copyright: This thesis is protected by copyright.
초록 (Abstract)
Videos captured in low-light environments suffer from severe noise due to the extremely limited amount of light, which degrades performance across a wide range of computer vision applications. In particular, existing research on raw video restoration under extreme low-light conditions is very limited, and no public dataset exists for this setting. A central challenge in video restoration is balancing effective noise removal against the preservation of fine structure: overly aggressive denoising erases fine textures, while insufficient restoration leaves residual noise that degrades downstream recognition performance.
In this work, we construct a new raw video dataset captured in extreme low-light environments, covering diverse scenes and long temporal ranges, and provide training data composed of noisy-clean pairs. We also propose a lightweight raw video restoration framework optimized for this setting. The proposed model combines shallow initial denoising, deformable-convolution-based temporal alignment, and a spatiotemporal attention module to suppress noise effectively while strengthening texture preservation and temporal consistency. In addition, to prevent the loss of texture, we introduce a texture-aware loss that mitigates over-smoothing.
Experimental results show that our method outperforms existing state-of-the-art approaches in PSNR and SSIM, and stably maintains structural sharpness along with strong noise suppression on both real and synthetic extreme low-light videos.
Abstract
Denoising plays a vital role in many computer vision applications, particularly in scenarios involving extreme low-light conditions. Yet, research specifically targeting raw video denoising in such environments remains limited, and publicly available datasets tailored to this setting are largely absent. A key difficulty in denoising is to properly balance noise suppression with the preservation of fine details. Overly aggressive denoising tends to erase subtle structures, while insufficient denoising leaves distracting noise, ultimately harming the performance of downstream tasks such as detection and recognition [2]. To bridge these gaps, we introduce a new raw video dataset that provides noisy-clean paired sequences captured under extremely low illumination, covering a wide variety of scenes and extended temporal ranges. Alongside this dataset, we develop a lightweight yet effective denoising framework specifically designed for this challenging environment. The proposed model integrates shallow denoising, deformable convolution-based temporal alignment, and spatiotemporal attention to suppress noise while maintaining texture fidelity and temporal coherence. Furthermore, we include a texture-aware loss to avoid excessive smoothing and better preserve local detail. Experimental results demonstrate that our approach surpasses existing state-of-the-art methods in PSNR and SSIM on both synthetic and real extreme low-light videos, delivering superior noise reduction with minimal artifacts and well-preserved sharpness.
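The abstract does not give the exact form of the texture-aware loss or the evaluation metrics; as a minimal NumPy sketch, one plausible formulation is an L1 reconstruction term plus a penalty on the difference of gradient magnitudes (a simple proxy for local texture), with a hypothetical weight `alpha`, together with the standard PSNR metric used for evaluation:

```python
import numpy as np

def psnr(clean, denoised, max_val=1.0):
    """Peak signal-to-noise ratio between a reference and a denoised frame."""
    mse = np.mean((clean - denoised) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def gradient_magnitude(img):
    """Finite-difference gradient magnitude, a crude proxy for local texture."""
    gy = np.diff(img, axis=0, append=img[-1:, :])  # vertical differences
    gx = np.diff(img, axis=1, append=img[:, -1:])  # horizontal differences
    return np.sqrt(gx ** 2 + gy ** 2)

def texture_aware_loss(pred, target, alpha=0.1):
    """L1 reconstruction term plus an L1 penalty on the gradient-magnitude
    difference; the gradient term discourages over-smoothing of fine texture.
    `alpha` is an illustrative weight, not a value from the thesis."""
    recon = np.mean(np.abs(pred - target))
    texture = np.mean(np.abs(gradient_magnitude(pred) - gradient_magnitude(target)))
    return recon + alpha * texture
```

This is only an illustration of the idea, not the thesis's actual loss; in practice such a term would be implemented over tensors in the training framework, and the gradient operator and weighting may differ.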
Table of Contents
List of Figures
List of Tables
초록
Abstract
I Introduction
II Related Work
2.1 Video Denoising
2.2 Image and Video Processing with Raw Data
2.3 Noisy Image and Video Datasets in Extreme Low-Light Conditions
III Proposed Method
3.1 Extreme Low-Light Raw Video Dataset
3.1.1 Noisy-Clean Paired Video
3.1.2 Calibration Dataset
3.1.3 Noisy Surveillance Video
3.2 Low-Light Video Denoising Pipeline
3.2.1 Raw-to-Raw Denoising
3.2.2 Noise Modeling
3.2.3 Model Architecture
3.2.4 Texture Loss
IV Experimental Results
4.1 Training Details
4.2 Comparison with State-of-the-art Methods
4.3 Qualitative Results
4.4 Ablation Study
4.5 Analysis on Surveillance Videos from Another Sensor
4.6 Analysis on Public Datasets with Different Conditions
V Discussion
5.1 Real Noise vs. Synthetic Noise
5.2 Limitations and Future Work
VI Conclusion
Bibliography

