Deep robust image deblurring via blur distilling and information comparison in latent space

dc.contributor.author: Niu, Wenjia
dc.contributor.author: Zhang, Kaihao
dc.contributor.author: Luo, Wenhan
dc.contributor.author: Zhong, Yiran
dc.contributor.author: Li, Hongdong
dc.date.accessioned: 2023-12-08T04:29:42Z
dc.date.issued: 2021-09-29
dc.date.updated: 2022-09-04T08:16:58Z
dc.description.abstract: Current deep deblurring methods focus mainly on learning a transfer network that maps synthetically blurred images to clean ones. Although they achieve strong performance on their training datasets, they generalize poorly to datasets with different synthetic blurs, resulting in significantly inferior performance at test time. To alleviate this problem, we propose a latent contrastive model, Blur Distilling and Information Reconstruction Networks (BDIRNet), to learn an image prior and improve the robustness of deep deblurring. BDIRNet consists of a blur removing network (DistillNet) and a reconstruction network (RecNet). Two kinds of images with almost the same content but different qualities are fed into DistillNet, which contrasts their latent information to extract the shared structural information and purify it of perturbations from unimportant information such as blur. RecNet then reconstructs sharp images from the extracted information. In addition, statistical anti-interference distilling (SAID) and statistical anti-interference reconstruction (SAIR) modules are proposed inside DistillNet and RecNet, respectively, to further enhance the robustness of our method. Extensive experiments on different datasets show that the proposed method achieves improved and robust results compared to recent state-of-the-art methods.
dc.description.sponsorship: This work is funded in part by the ARC Centre of Excellence for Robotics Vision (CE140100016), ARC-Discovery (DP190102261) and ARC-LIEF (190100080) grants, as well as a research grant from Baidu on autonomous driving. The authors gratefully acknowledge the GPUs donated by NVIDIA Corporation.
dc.format.mimetype: application/pdf
dc.identifier.issn: 0925-2312
dc.identifier.uri: http://hdl.handle.net/1885/308888
dc.language.iso: en_AU
dc.publisher: Elsevier
dc.relation: http://purl.org/au-research/grants/arc/CE140100016
dc.relation: http://purl.org/au-research/grants/arc/DP190102261
dc.relation: http://purl.org/au-research/grants/arc/LE190100080
dc.rights: © 2021 Elsevier B.V.
dc.source: Neurocomputing
dc.subject: Image deblur
dc.subject: Deep network
dc.subject: Blur distilling
dc.subject: Information comparison
dc.title: Deep robust image deblurring via blur distilling and information comparison in latent space
dc.type: Journal article
dcterms.dateAccepted: 2021-09-09
local.bibliographicCitation.lastpage: 79
local.bibliographicCitation.startpage: 69
local.contributor.affiliation: Niu, Wenjia, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Zhang, Kaihao, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Luo, Wenhan, Tencent AI Lab
local.contributor.affiliation: Zhong, Yiran, College of Engineering and Computer Science, ANU
local.contributor.affiliation: Li, Hongdong, College of Engineering and Computer Science, ANU
local.contributor.authoremail: u1047051@anu.edu.au
local.contributor.authoruid: Niu, Wenjia, u1047051
local.contributor.authoruid: Zhang, Kaihao, u6087377
local.contributor.authoruid: Zhong, Yiran, u5160496
local.contributor.authoruid: Li, Hongdong, u4056952
local.description.embargo: 2099-12-31
local.description.notes: Imported from ARIES
local.identifier.absfor: 460304 - Computer vision
local.identifier.ariespublication: a383154xPUB22562
local.identifier.citationvolume: 466
local.identifier.doi: 10.1016/j.neucom.2021.09.019
local.identifier.scopusID: 2-s2.0-85115945450
local.identifier.thomsonID: WOS:000702761700007
local.identifier.uidSubmittedBy: a383154
local.publisher.url: https://www.sciencedirect.com/
local.type.status: Published Version
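
The abstract above describes a two-network architecture: DistillNet compares the latent representations of a degraded image and its clean counterpart to distil blur-invariant structure, and RecNet reconstructs a sharp image from that latent. The following is a minimal PyTorch sketch of that idea only; the layer layouts, loss weights and names (DistillNet, RecNet, training_step, lambda_latent) are illustrative assumptions, not the published BDIRNet design, and the SAID/SAIR modules are omitted.

# Minimal sketch (not the authors' code) of latent blur distilling plus reconstruction.
import torch
import torch.nn as nn

class DistillNet(nn.Module):           # toy encoder standing in for the blur removing network
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.encoder(x)         # latent feature map

class RecNet(nn.Module):               # toy decoder reconstructing a sharp image from the latent
    def __init__(self, channels=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, z):
        return self.decoder(z)

distill, rec = DistillNet(), RecNet()
l1 = nn.L1Loss()

def training_step(blurred, sharp, lambda_latent=0.1):
    z_blur = distill(blurred)          # latent of the degraded input
    z_sharp = distill(sharp)           # latent of the clean counterpart
    restored = rec(z_blur)             # sharp estimate decoded from the blurred latent
    # Latent comparison: pull the two latents together so blur-specific perturbations
    # are distilled away; the image-space loss supervises the reconstruction.
    loss = l1(restored, sharp) + lambda_latent * l1(z_blur, z_sharp.detach())
    return loss, restored

# Usage with dummy tensors.
blurred = torch.rand(2, 3, 64, 64)
sharp = torch.rand(2, 3, 64, 64)
loss, restored = training_step(blurred, sharp)
loss.backward()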

Downloads

Original bundle (1 file)
Name: 1-s2.0-S0925231221013771-main.pdf
Size: 5.24 MB
Format: Adobe Portable Document Format