[Paper Translation] ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks


Paper: https://arxiv.org/pdf/1809.00219v2.pdf

Code: https://github.com/xinntao/ESRGAN

Translator's note: a super-resolution algorithm that is frequently used in the restoration of old films.

Chinese-English side-by-side version: https://aiqianji.com/blog/article/1
Collection of paper translations: https://aiqianji.com/blog/articles

ABSTRACT

The Super-Resolution Generative Adversarial Network (SRGAN) [1] is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN – network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN [2] to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge (region 3) with the best perceptual index [3]. The code is available at https://github.com/xinntao/ESRGAN.



1 INTRODUCTION

Single image super-resolution (SISR), as a fundamental low-level vision problem, has attracted increasing attention in the research community and AI companies. SISR aims at recovering a high-resolution (HR) image from a single low-resolution (LR) one. Since the pioneer work of SRCNN proposed by Dong et al. [4], deep convolution neural network (CNN) approaches have brought prosperous development. Various network architecture designs and training strategies have continuously improved the SR performance, especially the Peak Signal-to-Noise Ratio (PSNR) value [5, 6, 7, 1, 8, 9, 10, 11, 12]. However, these PSNR-oriented approaches tend to output over-smoothed results without sufficient high-frequency details, since the PSNR metric fundamentally disagrees with the subjective evaluation of human observers [1].

Several perceptual-driven methods have been proposed to improve the visual quality of SR results. For instance, perceptual loss [13, 14] is proposed to optimize super-resolution model in a feature space instead of pixel space. Generative adversarial network [15] is introduced to SR by [1, 16] to encourage the network to favor solutions that look more like natural images. The semantic image prior is further incorporated to improve recovered texture details [17]. One of the milestones in the way pursuing visually pleasing results is SRGAN [1]. The basic model is built with residual blocks [18] and optimized using perceptual loss in a GAN framework. With all these techniques, SRGAN significantly improves the overall visual quality of reconstruction over PSNR-oriented methods.


However, there still exists a clear gap between SRGAN results and the ground-truth (GT) images, as shown in Fig. 1. In this study, we revisit the key components of SRGAN and improve the model in three aspects. First, we improve the network structure by introducing the Residual-in-Residual Dense Block (RRDB), which is of higher capacity and easier to train. We also remove Batch Normalization (BN) [19] layers as in [20] and use residual scaling [21, 20] and smaller initialization to facilitate training a very deep network. Second, we improve the discriminator using Relativistic average GAN (RaGAN) [2], which learns to judge “whether one image is more realistic than the other” rather than “whether one image is real or fake”. Our experiments show that this improvement helps the generator recover more realistic texture details. Third, we propose an improved perceptual loss by using the VGG features before activation instead of after activation as in SRGAN. We empirically find that the adjusted perceptual loss provides sharper edges and more visually pleasing results, as will be shown in Sec. 4.4. Extensive experiments show that the enhanced SRGAN, termed ESRGAN, consistently outperforms state-of-the-art methods in both sharpness and details (see Fig. 1 and Fig. 7).

Figure 1: ×4 super-resolution results of SRGAN, the proposed ESRGAN, and the ground truth.

Figure 2: Perception-distortion plane on the PIRM self-validation dataset. We show the baselines of EDSR [20], RCAN [12] and EnhanceNet [16], together with the submitted ESRGAN model. The blue dots are produced by image interpolation.
We take a variant of ESRGAN to participate in the PIRM-SR Challenge [3]. This challenge is the first SR competition that evaluates the performance in a perceptual-quality aware manner based on [22], where the authors claim that distortion and perceptual quality are at odds with each other. The perceptual quality is judged by the non-reference measures of Ma’s score [23] and NIQE [24], i.e., perceptual index = 1/2 ((10 − Ma) + NIQE). A lower perceptual index represents a better perceptual quality.
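As a quick reference, the challenge's ranking score can be computed as below; this small helper merely restates the formula above, and the function name is ours.

```python
def perceptual_index(ma_score: float, niqe: float) -> float:
    """Perceptual index used in PIRM-SR: 0.5 * ((10 - Ma) + NIQE); lower is better."""
    return 0.5 * ((10.0 - ma_score) + niqe)
```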

As shown in Fig. 2, the perception-distortion plane is divided into three regions defined by thresholds on the Root-Mean-Square Error (RMSE), and the algorithm that achieves the lowest perceptual index in each region becomes the regional champion. We mainly focus on region 3 as we aim to bring the perceptual quality to a new high. Thanks to the aforementioned improvements and some other adjustments as discussed in Sec. 4.6, our proposed ESRGAN won the first place in the PIRM-SR Challenge (region 3) with the best perceptual index.

In order to balance the visual quality and RMSE/PSNR, we further propose the network interpolation strategy, which could continuously adjust the reconstruction style and smoothness. Another alternative is image interpolation, which directly interpolates images pixel by pixel. We employ this strategy to participate in region 1 and region 2. The network interpolation and image interpolation strategies and their differences are discussed in Sec. 3.4.

2 RELATED WORK

We focus on deep neural network approaches to solve the SR problem. As a pioneer work, Dong et al. [4, 25] propose SRCNN to learn the mapping from LR to HR images in an end-to-end manner, achieving superior performance against previous works. Later on, the field has witnessed a variety of network architectures, such as a deeper network with residual learning [5], Laplacian pyramid structure [6], residual blocks [1], recursive learning [7, 8], densely connected network [9], deep back projection [10] and residual dense network [11]. Specifically, Lim et al. [20] propose EDSR model by removing unnecessary BN layers in the residual block and expanding the model size, which achieves significant improvement. Zhang et al. [11] propose to use effective residual dense block in SR, and they further explore a deeper network with channel attention [12], achieving the state-of-the-art PSNR performance. Besides supervised learning, other methods like reinforcement learning [26] and unsupervised learning [27] are also introduced to solve general image restoration problems.


Several methods have been proposed to stabilize training a very deep model. For instance, residual path is developed to stabilize the training and improve the performance [18, 5, 12]. Residual scaling is first employed by Szegedy et al. [21] and also used in EDSR. For general deep networks, He et al. [28] propose a robust initialization method for VGG-style networks without BN. To facilitate training a deeper network, we develop a compact and effective residual-in-residual dense block, which also helps to improve the perceptual quality by a large margin.


Perceptual-driven approaches have also been proposed to improve the visual quality of SR results. Based on the idea of being closer to perceptual similarity [29, 14], perceptual loss [13] is proposed to enhance the visual quality by minimizing the error in a feature space instead of pixel space. Contextual loss [30] is developed to generate images with natural image statistics by using an objective that focuses on the feature distribution rather than merely comparing the appearance. Ledig et al. [1] propose SRGAN model that uses perceptual loss and adversarial loss to favor outputs residing on the manifold of natural images. Sajjadi et al. [16] develop a similar approach and further explored the local texture matching loss. Based on these works, Wang et al. [17] propose spatial feature transform to effectively incorporate semantic prior in an image and improve the recovered textures.


Throughout the literature, photo-realism is usually attained by adversarial training with GAN [15]. Recently there are a bunch of works that focus on developing more effective GAN frameworks. WGAN [31] proposes to minimize a reasonable and efficient approximation of Wasserstein distance and regularizes discriminator by weight clipping. Other improved regularization for discriminator includes gradient clipping [32] and spectral normalization [33]. Relativistic discriminator [2] is developed not only to increase the probability that generated data are real, but also to simultaneously decrease the probability that real data are real. In this work, we enhance SRGAN by employing a more effective relativistic average GAN.


SR algorithms are typically evaluated by several widely used distortion measures, e.g., PSNR and SSIM. However, these metrics fundamentally disagree with the subjective evaluation of human observers [1]. Non-reference measures are used for perceptual quality evaluation, including Ma’s score [23] and NIQE [24], both of which are used to calculate the perceptual index in the PIRM-SR Challenge [3]. In a recent study, Blau et al. [22] find that the distortion and perceptual quality are at odds with each other.


3 PROPOSED METHODS

Our main aim is to improve the overall perceptual quality for SR. In this section, we first describe our proposed network architecture and then discuss the improvements from the discriminator and perceptual loss. At last, we describe the network interpolation strategy for balancing perceptual quality and PSNR.


Figure 3: We employ the basic architecture of SRResNet [1], where most computation is done in the LR feature space. We can select or design "basic blocks" (e.g., residual block [18], dense block [34], RRDB) for better performance.

3.1 Network Architecture

In order to further improve the recovered image quality of SRGAN, we mainly make two modifications to the structure of generator G: 1) remove all BN layers; 2) replace the original basic block with the proposed Residual-in-Residual Dense Block (RRDB), which combines multi-level residual network and dense connections as depicted in Fig. 4.


Figure 4: Left: we remove the BN layers in the residual block of SRGAN. Right: the RRDB block used in our deeper model, where β is the residual scaling parameter.

Removing BN layers has proven to increase performance and reduce computational complexity in different PSNR-oriented tasks including SR [20] and deblurring [35]. BN layers normalize the features using mean and variance in a batch during training and use estimated mean and variance of the whole training dataset during testing. When the statistics of training and testing datasets differ a lot, BN layers tend to introduce unpleasant artifacts and limit the generalization ability. We empirically observe that BN layers are more likely to bring artifacts when the network is deeper and trained under a GAN framework. These artifacts occasionally appear among iterations and different settings, violating the needs for a stable performance over training. We therefore remove BN layers for stable training and consistent performance. Furthermore, removing BN layers helps to improve generalization ability and to reduce computational complexity and memory usage.


We keep the high-level architecture design of SRGAN (see Fig. 3), and use a novel basic block namely RRDB as depicted in Fig. 4. Based on the observation that more layers and connections could always boost performance [20, 11, 12], the proposed RRDB employs a deeper and more complex structure than the original residual block in SRGAN. Specifically, as shown in Fig. 4, the proposed RRDB has a residual-in-residual structure, where residual learning is used in different levels. A similar network structure is proposed in [36] that also applies a multi-level residual network. However, our RRDB differs from [36] in that we use dense block [34] in the main path as [11], where the network capacity becomes higher benefiting from the dense connections.

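To make the block design concrete, below is a minimal PyTorch-style sketch of the residual-in-residual dense block pattern described above. It is an illustrative re-implementation rather than the authors' exact code: the channel widths (64 base, 32 growth), the five convolutions per dense block and the 0.2 residual-scaling factor follow common public ESRGAN implementations and should be read as assumptions.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block without BN: each conv sees the concatenation of all previous features."""
    def __init__(self, nf=64, gc=32, res_scale=0.2):
        super().__init__()
        self.res_scale = res_scale
        self.convs = nn.ModuleList(
            nn.Conv2d(nf + i * gc, gc if i < 4 else nf, 3, padding=1) for i in range(5)
        )
        self.lrelu = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [x]
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))
            if i < 4:                      # intermediate layers keep growing the feature stack
                out = self.lrelu(out)
                feats.append(out)
        return x + self.res_scale * out    # local residual with scaling

class RRDB(nn.Module):
    """Residual-in-Residual Dense Block: three dense blocks plus an outer scaled residual."""
    def __init__(self, nf=64, gc=32, res_scale=0.2):
        super().__init__()
        self.blocks = nn.Sequential(DenseBlock(nf, gc), DenseBlock(nf, gc), DenseBlock(nf, gc))
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.blocks(x)
```

In the deeper generator setting described in Sec. 4.1, 23 such blocks are stacked in the trunk of the architecture shown in Fig. 3.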

In addition to the improved architecture, we also exploit several techniques to facilitate training a very deep network: 1) residual scaling [21, 20], i.e., scaling down the residuals by multiplying a constant between 0 and 1 before adding them to the main path to prevent instability; 2) smaller initialization, as we empirically find residual architecture is easier to train when the initial parameter variance becomes smaller. More discussion can be found in the supplementary material.

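The residual scaling above corresponds to the `res_scale` factor in the RRDB sketch; the smaller initialization can be sketched as below, assuming Kaiming-initialized convolution weights that are further shrunk by a constant factor (0.1 is the value used in public implementations and is an assumption here).

```python
import torch.nn as nn

def scaled_kaiming_init(module: nn.Module, scale: float = 0.1) -> None:
    """Kaiming-initialize all conv layers, then shrink the weights by `scale`
    so the residual branches start close to identity mappings."""
    for m in module.modules():
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, a=0, mode='fan_in')
            m.weight.data.mul_(scale)
            if m.bias is not None:
                nn.init.zeros_(m.bias)
```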


The training details and the effectiveness of the proposed network will be presented in Sec. 4.

3.2 Relativistic Discriminator

Besides the improved structure of generator, we also enhance the discriminator based on the Relativistic GAN [2]. Different from the standard discriminator D in SRGAN, which estimates the probability that one input image x is real and natural, a relativistic discriminator tries to predict the probability that a real image xr is relatively more realistic than a fake one xf, as shown in Fig. 5.


Figure 5: Difference between the standard discriminator and the relativistic discriminator.

Specifically, we replace the standard discriminator with the Relativistic average Discriminator (RaD) [2], denoted as D_{Ra}. The standard discriminator in SRGAN can be expressed as D(x) = σ(C(x)), where σ is the sigmoid function and C(x) is the non-transformed discriminator output. The RaD is then formulated as D_{Ra}(x_r, x_f) = σ(C(x_r) − E_{x_f}[C(x_f)]), where E_{x_f}[·] denotes the operation of taking the average over all fake data in the mini-batch.

The discriminator loss is then defined as:

L_D^{Ra} = −E_{x_r}[log(D_{Ra}(x_r, x_f))] − E_{x_f}[log(1 − D_{Ra}(x_f, x_r))]    (1)

The adversarial loss for the generator takes a symmetrical form:

L_G^{Ra} = −E_{x_r}[log(1 − D_{Ra}(x_r, x_f))] − E_{x_f}[log(D_{Ra}(x_f, x_r))]    (2)

where x_f = G(x_i) and x_i stands for the input LR image. Since the adversarial loss for the generator contains both x_r and x_f, our generator benefits from the gradients of both the generated data and the real data during adversarial training, whereas in SRGAN only the generated part takes effect. In Sec. 4.4 we show that this modification of the discriminator helps to learn sharper edges and more detailed textures.
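Eq. (1) and (2) can be written compactly with binary cross-entropy applied to the shifted logits. The sketch below is illustrative and assumes that `netD` returns the raw, non-transformed output C(x); it is not the authors' exact training code.

```python
import torch
import torch.nn.functional as F

def discriminator_loss_ra(netD, real, fake):
    c_real, c_fake = netD(real), netD(fake.detach())
    # -E_{x_r}[log D_Ra(x_r, x_f)], with D_Ra(x_r, x_f) = sigmoid(C(x_r) - E[C(x_f)])
    loss_real = F.binary_cross_entropy_with_logits(c_real - c_fake.mean(), torch.ones_like(c_real))
    # -E_{x_f}[log(1 - D_Ra(x_f, x_r))]
    loss_fake = F.binary_cross_entropy_with_logits(c_fake - c_real.mean(), torch.zeros_like(c_fake))
    return loss_real + loss_fake

def generator_loss_ra(netD, real, fake):
    c_real, c_fake = netD(real), netD(fake)
    # -E_{x_r}[log(1 - D_Ra(x_r, x_f))]
    loss_real = F.binary_cross_entropy_with_logits(c_real - c_fake.mean(), torch.zeros_like(c_real))
    # -E_{x_f}[log D_Ra(x_f, x_r)]
    loss_fake = F.binary_cross_entropy_with_logits(c_fake - c_real.mean(), torch.ones_like(c_fake))
    return loss_real + loss_fake
```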

3.3 Perceptual Loss

We also develop a more effective perceptual loss Lpercep by constraining on features before activation rather than after activation as practiced in SRGAN.

Based on the idea of being closer to perceptual similarity [29, 14], Johnson et al. [13] propose perceptual loss and it is extended in SRGAN [1]. Perceptual loss is previously defined on the activation layers of a pre-trained deep network, where the distance between two activated features is minimized. Contrary to the convention, we propose to use features before the activation layers, which will overcome two drawbacks of the original design. First, the activated features are very sparse, especially after a very deep network, as depicted in Fig. 6. For example, the average percentage of activated neurons for image ‘baboon’ after the VGG19-54 layer (we use the pre-trained 19-layer VGG network [37], where 54 indicates features obtained by the 4th convolution before the 5th max-pooling layer, representing high-level features; similarly, 22 represents low-level features) is merely 11.17%. The sparse activation provides weak supervision and thus leads to inferior performance. Second, using features after activation also causes inconsistent reconstructed brightness compared with the ground-truth image, which we will show in Sec. 4.4.

Figure 6: Representative feature maps before and after activation for the image 'baboon'. As the network goes deeper, most of the features after activation become inactive, while the features before activation contain more information.
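A minimal sketch of such a before-activation perceptual loss using torchvision's pre-trained VGG19 is shown below. Slicing the feature extractor at index 35 stops right before the ReLU of conv5_4 (the "VGG19-54" features discussed above); the slicing index, the L1 distance and the omission of input normalization are assumptions based on common implementations.

```python
import torch.nn as nn
from torchvision.models import vgg19

class BeforeActivationPerceptualLoss(nn.Module):
    """L1 distance between VGG19 conv5_4 features taken before the ReLU."""
    def __init__(self):
        super().__init__()
        features = vgg19(pretrained=True).features      # argument name may differ across torchvision versions
        self.extractor = nn.Sequential(*list(features.children())[:35]).eval()
        for p in self.extractor.parameters():           # the loss network stays frozen
            p.requires_grad = False
        self.criterion = nn.L1Loss()

    def forward(self, sr, hr):
        # ImageNet mean/std normalization of the inputs is omitted for brevity.
        return self.criterion(self.extractor(sr), self.extractor(hr))
```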

Therefore, the total loss for the generator is:

L_G = L_percep + λ L_G^{Ra} + η L_1    (3)

where L_1 = E_{x_i} ||G(x_i) − y||_1 is the content loss that evaluates the 1-norm distance between the recovered image G(x_i) and the ground truth y, and λ, η are the coefficients that balance the different loss terms.

We also explore a variant of the perceptual loss in the PIRM-SR Challenge. In contrast to the commonly used perceptual loss that adopts a VGG network trained for image classification, we develop a perceptual loss that is more suitable for SR, the MINC loss. It is based on a VGG network fine-tuned for material recognition, and therefore focuses on textures rather than objects. Although the gain brought by the MINC loss is marginal, we still believe that exploring texture-focused perceptual losses is critical for SR.

3.4 Network Interpolation

To remove unpleasant noise from GAN-based methods while maintaining good perceptual quality, we propose a flexible and effective strategy, network interpolation. Specifically, we first train a PSNR-oriented network G_{PSNR} and then obtain a GAN-based network G_{GAN} by fine-tuning.

We interpolate all the corresponding parameters of these two networks to derive an interpolated model G_{INTERP}, whose parameters are:

θ_G^{INTERP} = (1 − α) θ_G^{PSNR} + α θ_G^{GAN}    (4)

where θ_G^{INTERP}, θ_G^{PSNR} and θ_G^{GAN} are the parameters of G_{INTERP}, G_{PSNR} and G_{GAN}, respectively, and α ∈ [0, 1] is the interpolation parameter.
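The official repository ships a small script for exactly this operation; the sketch below re-implements the same idea on two saved generator state dicts (the checkpoint paths and the default α are illustrative).

```python
import torch

def interpolate_networks(psnr_ckpt: str, gan_ckpt: str, alpha: float = 0.8) -> dict:
    """Return a state dict with theta = (1 - alpha) * theta_PSNR + alpha * theta_GAN."""
    net_psnr = torch.load(psnr_ckpt, map_location='cpu')
    net_gan = torch.load(gan_ckpt, map_location='cpu')
    return {k: (1 - alpha) * v + alpha * net_gan[k] for k, v in net_psnr.items()}
```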

Network interpolation has two merits. First, the interpolated model is able to produce meaningful results for any feasible α without introducing artifacts. Second, we can continuously balance perceptual quality and fidelity without re-training the model.

We also explore alternative ways to balance the effects of PSNR-oriented and GAN-based methods. For instance, one can directly interpolate their output images (pixel by pixel) rather than the network parameters. However, such an approach does not achieve a good trade-off between noise and blur, i.e., the interpolated image is either too blurry or too noisy (see Sec. 4.5). Another method is to tune the weights of the content loss and the adversarial loss, i.e., the parameters λ and η in Eq. (3). However, this requires tuning the loss weights and fine-tuning the network, which is too costly for achieving continuous control of the image style.

4 EXPERIMENTS

4.1 Training Details

Following SRGAN [1], all experiments are performed with a scaling factor of ×4 between the LR and HR images. We obtain the LR images by downsampling the HR images with the MATLAB bicubic kernel. The mini-batch size is set to 16, and the spatial size of the cropped HR patch is 128×128. We observe that training a deeper network benefits from a larger patch size, since an enlarged receptive field helps to capture more semantic information. However, it costs more training time and consumes more computing resources. This phenomenon is also observed in PSNR-oriented methods (see the supplementary material).

The training process is divided into two stages. First, we train a PSNR-oriented model with the L1 loss. The learning rate is initialized as 2×10^{-4} and decayed by a factor of 2 every 2×10^5 mini-batch updates. The trained PSNR-oriented model then serves as the initialization for the generator.
The generator is trained using the loss function in Eq. (3) with λ = 5×10^{-3} and η = 1×10^{-2}. The learning rate is set to 1×10^{-4} and halved at [50k, 100k, 200k, 300k] iterations.

Pre-training with a pixel-wise loss helps GAN-based methods obtain visually more pleasing results, for two reasons:

  1. it avoids undesired local optima for the generator;
  2. after pre-training, the discriminator receives relatively good super-resolved images from the very beginning instead of extremely fake ones (black or noisy images), which helps it focus more on texture discrimination.

For optimization, we use Adam [39] with β1 = 0.9 and β2 = 0.999. We alternately update the generator and discriminator networks until the model converges.
We use two settings for our generator: one contains 16 residual blocks, with a capacity similar to that of SRGAN, and the other is a deeper model with 23 RRDB blocks. We implement our models with the PyTorch framework and train them on NVIDIA Titan Xp GPUs.
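Putting the pieces together, a hedged sketch of one generator update in the GAN stage is given below, with λ = 5×10^{-3} and η = 1×10^{-2} as stated above. Here `netG`, `netD`, `percep_loss` and `adversarial_loss` stand for the generator, the discriminator and loss helpers such as those sketched in Sec. 3; the data pipeline and the discriminator update are omitted.

```python
import torch
import torch.nn.functional as F

def generator_step(netG, netD, percep_loss, adversarial_loss, opt_G,
                   lr_img, hr_img, lam=5e-3, eta=1e-2):
    """One generator update of Eq. (3): perceptual + relativistic adversarial + L1 terms."""
    sr = netG(lr_img)
    loss_G = (percep_loss(sr, hr_img)
              + lam * adversarial_loss(netD, hr_img, sr)
              + eta * F.l1_loss(sr, hr_img))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_G.item()

# opt_G = torch.optim.Adam(netG.parameters(), lr=1e-4, betas=(0.9, 0.999))
```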

4.2 Data

For training, we mainly use the DIV2K dataset [40], which is a high-quality (2K resolution) dataset for image restoration tasks. Beyond the training set of DIV2K that contains 800 images, we also seek for other datasets with rich and diverse textures for our training. To this end, we further use the Flickr2K dataset [41] consisting of 2650 2K high-resolution images collected on the Flickr website, and the OutdoorSceneTraining (OST) [17] dataset to enrich our training set. We empirically find that using this large dataset with richer textures helps the generator to produce more natural results, as shown in Fig. 9.

We train our models in RGB channels and augment the training dataset with random horizontal flips and 90 degree rotations. We evaluate our models on widely used benchmark datasets – Set5 [42], Set14 [43], BSD100 [44], Urban100 [45], and the PIRM self-validation dataset that is provided in the PIRM-SR Challenge.
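A minimal sketch of this augmentation, applying the same random horizontal flip and 90-degree rotation to an LR/HR tensor pair; the helper name and the C×H×W tensor layout are assumptions.

```python
import random
import torch

def augment_pair(lr: torch.Tensor, hr: torch.Tensor):
    """Random horizontal flip and 0/90/180/270-degree rotation, applied identically
    to the LR and HR tensors (both assumed to be C x H x W)."""
    if random.random() < 0.5:
        lr, hr = torch.flip(lr, dims=[-1]), torch.flip(hr, dims=[-1])
    k = random.randint(0, 3)
    return torch.rot90(lr, k, dims=[-2, -1]), torch.rot90(hr, k, dims=[-2, -1])
```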


4.3 Qualitative Results

We compare our final models on several public benchmark datasets with state-of-the-art PSNR-oriented methods, including SRCNN [4], EDSR [20] and RCAN [12], as well as perceptual-driven approaches, including SRGAN [1] and EnhanceNet [16].

Since there is no effective and standard metric for perceptual quality, we present some representative qualitative results in Fig. 7. The PSNR (evaluated on the luminance channel in the YCbCr color space) and the perceptual index used in the PIRM-SR Challenge are also provided for reference.

For instance, ESRGAN produces sharper and more natural baboon whiskers and grass textures (see image 43074) than the previous approaches, whereas earlier PSNR-oriented methods tend to generate blurry results and earlier GAN-based methods produce unnatural textures containing unpleasant noise.

ESRGAN is also able to generate more detailed structures in buildings (see image 102061), while other methods either fail to produce enough details (SRGAN) or add undesired textures (EnhanceNet). Moreover, previous GAN-based methods sometimes introduce unpleasant artifacts, e.g., SRGAN adds wrinkles to the face. Our ESRGAN gets rid of these artifacts and produces natural results.

Figure 7: Qualitative results of ESRGAN. ESRGAN produces more natural textures, e.g., animal fur, building structures and grass textures, as well as fewer unpleasant artifacts, e.g., the artifacts in the face produced by SRGAN.

As can be seen from Fig. 7, our proposed ESRGAN outperforms previous approaches in both sharpness and details.

4.4 Ablation Study

In order to study the effect of each component in the proposed ESRGAN, we gradually modify the baseline SRGAN model and compare the differences. The overall visual comparison is illustrated in Fig. 8. Each column represents a model whose configuration is shown at the top, and the red signs indicate the main improvement compared with the previous model.

Figure 8: Overall visual comparison showing the effect of each component in ESRGAN. Each column represents a model with its configuration at the top; the red signs indicate the main improvement compared with the previous model.

A detailed discussion is provided as follows.

Removal of BN layers.
We first remove all BN layers to obtain stable and consistent performance without artifacts. Doing so does not decrease the performance but saves computational resources and memory. In some cases a slight improvement can be observed from the 2nd and 3rd columns in Fig. 8 (e.g., image 39). Furthermore, we observe that when a network is deeper and more complicated, a model with BN layers is more likely to introduce unpleasant artifacts. Examples can be found in the supplementary material.

Before activation in perceptual loss.
We first demonstrate that using features before activation leads to more accurate brightness in the reconstructed images. To eliminate the influence of texture and color, we filter the images with a Gaussian kernel and plot the histograms of their grey-scale counterparts. Fig. 9 shows the distribution of each brightness value.

Figure 9: Comparison between features before and after activation.

Using the activated features skews the distribution to the left and yields a dimmer output, while using the features before activation leads to a more accurate brightness distribution that is closer to that of the ground truth.

We can further observe that using features before activation helps to produce sharper edges and richer textures, as shown in Fig. 9b (see the bird feathers) and Fig. 8 (see the 3rd and 4th columns), because the dense features before activation offer stronger supervision than the sparse features after activation.

RaGAN.
RaGAN uses an improved relativistic discriminator, which helps to learn sharper edges and more detailed textures. For example, in the 5th column of Fig. 8 the generated images are sharper and have richer textures than those on their left (see the baboon, image 39 and image 43074).

Deeper network with the proposed RRDB.
The deeper model with the proposed RRDB further improves the recovered textures, especially for regular structures such as the roof of image 6 in Fig. 8, since a deep model has a strong representation capacity for capturing semantic information. Moreover, we find that the deeper model can reduce unpleasant noise, as in image 20 of Fig. 8.

In contrast to SRGAN, which claimed that deeper models are increasingly difficult to train, our deeper model is easy to train and shows superior performance, thanks to the improvements mentioned above, especially the proposed RRDB without BN layers.

4.5 Network Interpolation

We compare the effects of the network interpolation and image interpolation strategies in balancing the results of a PSNR-oriented model and a GAN-based method. We apply simple linear interpolation to both schemes; the interpolation parameter α is chosen from 0 to 1 with an interval of 0.2.

As shown in Fig. 10, the pure GAN-based method produces sharp edges and richer textures but with some unpleasant artifacts, while the pure PSNR-oriented method outputs cartoon-style blurry images.

Figure 10: Comparison between network interpolation and image interpolation.

By employing network interpolation, unpleasant artifacts are reduced while the textures are maintained. By contrast, image interpolation fails to remove these artifacts effectively. Interestingly, the network interpolation strategy provides smooth control for balancing perceptual quality and fidelity in Fig. 10.

4.6 The PIRM-SR Challenge

We take a variant of ESRGAN to participate in the PIRM-SR Challenge [3]. Specifically, we use the proposed ESRGAN with 16 residual blocks and also make some modifications empirically to cater to the perceptual index:

  1. the MINC loss is used as a variant of the perceptual loss, as discussed in Sec. 3.3; despite the marginal gain on the perceptual index, we still believe that exploring perceptual losses that focus on texture is crucial for SR;
  2. the pristine dataset [24], which is used for learning the perceptual index, is also employed in our training;
  3. a high weight for the L1 loss, up to η = 10, is used because of the PSNR constraints;
  4. we also employ back projection [46] as post-processing, which can improve PSNR and sometimes lower the perceptual index.

For regions 1 and 2, which require a higher PSNR, we use image interpolation between the results of ESRGAN and those of a PSNR-oriented method, RCAN [12]. The image interpolation scheme achieves a lower perceptual index (lower is better), although we observe more visually pleasing results with the network interpolation scheme.

Our proposed ESRGAN model won the first place in the PIRM-SR Challenge (region 3) with the best perceptual index.

5 CONCLUSION

We have presented an ESRGAN model that achieves consistently better perceptual quality than previous SR methods.
The method won the first place in the PIRM-SR Challenge in terms of the perceptual index.
We have formulated a novel architecture containing several RRDB blocks without BN layers.
In addition, useful techniques including residual scaling and smaller initialization are employed to facilitate the training of the proposed deep model.
We have also introduced the use of RaGAN as the discriminator, which learns to judge whether one image is more realistic than another, guiding the generator to recover more detailed textures.
Moreover, we have enhanced the perceptual loss by using the features before activation, which offers stronger supervision and thus restores more accurate brightness and realistic textures.

Reference: machine translation
Reference: https://blog.csdn.net/qq_36556893/article/details/86418149

Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu,
Chao Dong, Chen Change Loy, Yu Qiao, Xiaoou Tang
ECCV 2018 Workshops
https://arxiv.org/pdf/1809.00219v2.pdf

Proofreading: 丫丫是只小狐狸