
Wide Activation for Efficient and Accurate Image Super-Resolution

Jiahui Yu; Yuchen Fan; Jianchao Yang; Ning Xu; Zhaowen Wang; Xinchao Wang; Thomas Huang
Abstract

In this report we demonstrate that, with the same parameters and computational budgets, models with wider features before ReLU activation have significantly better performance for single image super-resolution (SISR). The resulting SR residual network has a slim identity mapping pathway with wider (2× to 4×) channels before activation in each residual block. To further widen activation (6× to 9×) without computational overhead, we introduce linear low-rank convolution into SR networks and achieve even better accuracy-efficiency tradeoffs. In addition, compared with batch normalization or no normalization, we find that training with weight normalization leads to better accuracy for deep super-resolution networks. Our proposed SR network WDSR achieves better results on the large-scale DIV2K image super-resolution benchmark in terms of PSNR at the same or lower computational complexity. Based on WDSR, our method also won 1st place in the NTIRE 2018 Challenge on Single Image Super-Resolution in all three realistic tracks. Experiments and ablation studies support the importance of wide activation for image super-resolution. Code is released at: https://github.com/JiahuiYu/wdsr_ntire2018
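The two block designs described above can be sketched concretely. Below is a minimal PyTorch sketch, not the authors' released code: a wide-activation residual block (features expanded before the ReLU, slim identity skip) and a low-rank variant that widens activation further by factorizing one dense 3×3 convolution into 1×1 + 3×3 convolutions. The class names, channel counts, expansion factors, and the rank choice are illustrative assumptions; the paper-recommended weight normalization is applied to every convolution.

```python
import torch
import torch.nn as nn

class WideActivationBlock(nn.Module):
    """Sketch of a wide-activation residual block: features are widened
    (e.g. 2x to 4x) before ReLU, while the identity skip stays slim."""

    def __init__(self, n_feats=32, expansion=4):
        super().__init__()
        wide = n_feats * expansion  # widen channels before activation
        wn = nn.utils.weight_norm   # weight norm, per the paper's finding
        self.body = nn.Sequential(
            wn(nn.Conv2d(n_feats, wide, kernel_size=3, padding=1)),
            nn.ReLU(inplace=True),
            wn(nn.Conv2d(wide, n_feats, kernel_size=3, padding=1)),
        )

    def forward(self, x):
        # Slim identity mapping pathway: the skip stays at n_feats channels.
        return x + self.body(x)

class LowRankWideBlock(nn.Module):
    """Sketch of the wider (e.g. 6x to 9x) variant using linear low-rank
    convolution: a 1x1 expand, ReLU, then a 1x1 + 3x3 factorization in
    place of one dense 3x3 conv, keeping compute roughly constant."""

    def __init__(self, n_feats=32, expansion=6, low_rank=None):
        super().__init__()
        wide = n_feats * expansion
        low_rank = low_rank or n_feats // 2  # illustrative rank choice
        wn = nn.utils.weight_norm
        self.body = nn.Sequential(
            wn(nn.Conv2d(n_feats, wide, kernel_size=1)),
            nn.ReLU(inplace=True),
            wn(nn.Conv2d(wide, low_rank, kernel_size=1)),
            wn(nn.Conv2d(low_rank, n_feats, kernel_size=3, padding=1)),
        )

    def forward(self, x):
        return x + self.body(x)

if __name__ == "__main__":
    x = torch.randn(1, 32, 48, 48)
    print(WideActivationBlock(32, 4)(x).shape)  # torch.Size([1, 32, 48, 48])
    print(LowRankWideBlock(32, 6)(x).shape)     # torch.Size([1, 32, 48, 48])
```

Note the design tradeoff both sketches share: widening happens only inside the residual body, so the number of parameters carried along the identity pathway (and hence the block's input/output width) stays fixed while the post-expansion ReLU sees many more channels.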
