LINR: Revisiting Implicit Neural Representations
in Low-Level Vision
School of Computer Science, University of Birmingham
Abstract
Implicit Neural Representation (INR) has been emerging in computer vision in recent years. It has been shown to be effective in parameterising continuous signals, such as dense 3D models, from discrete image data, e.g. the neural radiance field (NeRF). However, INR remains under-explored in 2D image processing tasks. Considering the basic definition and structure of INR, we are interested in its effectiveness in low-level vision problems such as image restoration.
In this work, we revisit INR and investigate its application in low-level image restoration tasks, including image denoising, super-resolution, inpainting, and deblurring. Extensive experimental evaluations suggest the superior performance of INR in several low-level vision tasks with limited resources, outperforming its counterparts by over 2 dB.
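At its core, an INR fits a small coordinate network to a single signal: the network maps a pixel coordinate to its value, and restoration amounts to fitting the network to the degraded observations and then evaluating it densely. The sketch below illustrates this idea on a 1D toy signal with a sine-activated MLP (in the spirit of SIREN); all sizes, initialisations, and the learning rate are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Minimal INR sketch: an MLP maps coordinates -> signal values and is
# trained to fit observed samples. A 1D toy signal stands in for an
# image; hyperparameters are illustrative assumptions.
rng = np.random.default_rng(0)

# "Ground-truth" signal sampled on a coordinate grid in [-1, 1].
coords = np.linspace(-1.0, 1.0, 64)[:, None]   # (64, 1)
target = np.sin(3.0 * np.pi * coords)          # (64, 1)

# One-hidden-layer MLP: coords -> sin(W1 x + b1) -> W2 h + b2.
# The wide first-layer initialisation lets hidden units cover the
# signal's frequency range (a SIREN-style choice).
H = 32
W1 = rng.uniform(-10.0, 10.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1));     b2 = np.zeros(1)

init_mse = float(np.mean((np.sin(coords @ W1 + b1) @ W2 + b2 - target) ** 2))

lr = 1e-2
for step in range(2000):
    pre = coords @ W1 + b1        # (64, H)
    h = np.sin(pre)               # sine activation
    pred = h @ W2 + b2            # (64, 1)
    err = pred - target
    # Backprop through the MSE loss, averaged over samples.
    g_pred = 2.0 * err / len(coords)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_pre = g_h * np.cos(pre)
    gW1 = coords.T @ g_pre; gb1 = g_pre.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

final_mse = float(np.mean((np.sin(coords @ W1 + b1) @ W2 + b2 - target) ** 2))
print(f"MSE before fitting: {init_mse:.4f}, after: {final_mse:.4f}")
```

For restoration tasks, the same recipe applies per image: the loss is computed only on the observed (e.g. unmasked or noisy) pixels, and the fitted network is queried at all coordinates, or at a denser grid for super-resolution.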
Results
Performance on the super-resolution task (4×, 500 iterations)
Performance on the denoising task (Gaussian noise, σ=25, 500 iterations)
Performance on joint training (SR + inpainting, 500 iterations)
Performance on random masked images (sparsity = 0.9, 500 iterations)
Paper
W. Xu, J. Jiao
Revisiting Implicit Neural Representations in Low-Level Vision.
In ICLR 2023 Neural Fields Workshop.
Bibtex
@inproceedings{linr,
  title={Revisiting Implicit Neural Representations in Low-Level Vision},
  author={Wentian Xu and Jianbo Jiao},
  booktitle={International Conference on Learning Representations Workshop},
  year={2023},
}
Acknowledgements
The template for this page is based on this cool project page.