CARN: Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network
> Translator's note: a lightweight and practical super-resolution algorithm
> Chinese-English parallel text: https://aiqianji.com/blog/article/7
> Collection of bilingual paper translations: https://aiqianji.com/blog/articles
Abstract
In recent years, deep learning methods have been successfully applied to single-image super-resolution tasks. Despite their strong performance, deep learning methods cannot be easily applied to real-world applications due to their heavy computational requirements. In this paper, we address this issue by proposing an accurate and lightweight deep learning model for image super-resolution. In detail, we design an architecture that implements a cascading mechanism upon a residual network. We also present a variant of the proposed cascading residual network to further improve efficiency. Our extensive experiments show that, even with far fewer parameters and operations, our models achieve performance comparable to that of state-of-the-art methods.
Keywords: Super-Resolution, Deep Convolutional Neural Network
I. Introduction
Super-resolution (SR) is a computer vision task that reconstructs a high-resolution (HR) image from a low-resolution (LR) image. Specifically, we are concerned with single image super-resolution (SISR), which performs SR using a single LR image. SISR is generally difficult because many different HR images can be downsampled to the same LR image, so recovering the HR image is an ill-posed, one-to-many problem. Despite this difficulty, SISR is a very active area because it offers the promise of overcoming resolution limitations and can be used in a variety of applications such as video streaming or surveillance systems.
Recently, convolutional neural network (CNN)-based methods have provided outstanding performance in SISR tasks[srcnn2014, vdsr2016, lapsrn2017]. From SRCNN[srcnn2014], which has three convolutional layers, to MDSR[mdsr2017], which has more than 160 layers, the depth of the networks and the overall performance have grown dramatically over time. However, even though deep learning methods increase the quality of SR images, they are not well suited to real-world scenarios. From this point of view, it is important to design lightweight deep learning models that are practical for real-world applications. One way to build a lean model is to reduce the number of parameters. There are many ways to achieve this[han2015deep, squeezenet], but the simplest and most effective approach is to use a recursive network. For example, DRCN[drcn2016] uses a recursive network to reduce redundant parameters, and DRRN[drnn2017] improves DRCN by adding a residual architecture to it. These models decrease the number of model parameters effectively compared to a standard CNN and show good performance. However, there are two downsides to these models: 1) they first upsample the input image before feeding it to the CNN model, and 2) they increase the depth or the width of the network to compensate for the loss caused by using a recursive network. These choices allow the model to preserve image details in the reconstruction, but at the expense of an increased number of operations and longer inference time.
Most works that aim to build a lean model have focused primarily on reducing the number of parameters. However, as mentioned above, the number of operations is also an important factor to consider in real-world scenarios. Consider a situation where an SR system operates on a mobile device; the execution speed of the system is then of crucial importance from a user-experience perspective. In particular, battery consumption, which depends heavily on the amount of computation performed, becomes a major concern. In this respect, reducing the number of operations in deep learning architectures is a challenging and necessary step that has largely been ignored until now. Another scenario relates to applying SR methods to video streaming services. The demand for streaming media has skyrocketed and hence requires large storage to hold massive amounts of multimedia data. It is therefore imperative to compress data using lossy compression techniques before storing it. An SR technique can then be applied to restore the data to its original resolution. However, because latency is the most critical factor in streaming services, the decompression process (i.e., super-resolution) has to be performed in real time. To do so, it is essential to make the SR methods lightweight in terms of the number of operations.
To handle these requirements and improve upon recent models, we propose a cascading residual network (CARN) and its variant CARN-Mobile (CARN-M). We first build our CARN model to increase performance, and extend it to CARN-M to optimize for speed and the number of operations. Following FSRCNN[fsrcnn2016], CARN and CARN-M take LR images as input and compute the HR counterparts as the output of the network. The middle parts of our models are designed based on ResNet[resnet]. The ResNet architecture has been widely used in deep learning-based SR methods[drnn2017, mdsr2017] because of its ease of training and superior performance. In addition to the ResNet architecture, CARN uses a cascading mechanism at both the local and the global level to incorporate features from multiple layers. This has the effect of reflecting various levels of input representations in order to receive more information. In addition to the CARN model, we also provide the CARN-M model, which allows the designer to tune the trade-off between performance and the computational cost of the model. It does so by means of an efficient residual block (residual-E) and a recursive network architecture, which we describe in more detail in Section 3.
In summary, our main contributions are as follows:
1) We propose CARN, a neural network for SR based on cascading modules, which achieves high performance. The cascading modules effectively boost performance via multi-level representation and multiple shortcut connections.
2) We also propose CARN-M for efficient SR by combining the efficient residual block and the recursive network scheme.
3) We show through extensive experiments that our model uses only a modest number of operations and parameters to achieve competitive results. CARN-M, our more lightweight SR model, shows results comparable to other methods with far fewer operations (Fig. 1).
II. Related Work
Since the success of AlexNet[alexnet] in the image recognition task[imagenet2009], various deep learning approaches have been applied to many computer vision tasks[ssd, fasterrcnn, deconvnet, color]. The SISR task is one such task, and we present an overview of deep learning-based SISR in Section 2.1.
Another area we deal with in this paper is model compression. Recent deep learning models focus on reducing model parameters and operations for application in low-power computing devices, which has many practical benefits in real-world applications. We briefly review this area in Section 2.2.
2.1 Deep Learning Based Image Super-Resolution
Recently, deep learning-based models have shown dramatic improvements in the SISR task. Dong et al.[srcnn2014] first proposed a deep learning-based SR method, called SRCNN, which outperformed traditional algorithms. However, SRCNN has a large number of operations relative to its depth, because input images are upsampled before being fed into the network. Taking a different approach from SRCNN, FSRCNN[fsrcnn2016] and ESPCN[espcn2016] upsample images at the end of the network. This latter approach reduces the number of operations compared to the former. However, the overall performance can degrade if there are not enough layers after the upsampling layer. Moreover, such models cannot manage multi-scale training, because the input image size differs for each upsampling scale.
Despite the fact that the power of deep learning comes from deep layers, the aforementioned methods settled for shallow networks because of the difficulty of training. To better harness the depth of deep learning models, Kim et al.[vdsr2016] proposed VDSR, which uses residual learning to map LR images x to their residual images r. VDSR then produces the SR image y by adding the residual back to the input, i.e., y = x + r.
[Fig. 2 caption: Both models take an LR image and upsample it to HR at the end of the network. In the CARN model, each residual block is replaced by a cascading block; blue arrows indicate global cascading connections.]
On the other hand, LapSRN[lapsrn2017] uses a Laplacian pyramid architecture to increase the image size gradually. By doing so, LapSRN effectively performs SR on extremely low-resolution inputs with a smaller number of operations than VDSR. The main difference is that VDSR upsamples the image at the beginning, whereas LapSRN does so progressively.
Another issue of deep learning-based SR is how to reduce the parameters and operations. For example, DRCN[drcn2016] uses a recursive network to reduce parameters by reusing a small set of parameters repeatedly. DRRN[drnn2017] improves upon DRCN by combining the recursive and residual network schemes to achieve better performance with fewer parameters. However, DRCN and DRRN use very deep networks to compensate for the loss of performance, and hence these models require heavy computing resources. We therefore aim to build a model that is lightweight in both size and computation. We briefly discuss previous works that address such model efficiency issues in the following section.
2.2 Efficient Neural Network
Lately, there has been growing interest in building small and efficient neural networks[squeezenet, han2015deep, mobilenets]. These approaches can be categorized into two groups: 1) compressing pretrained networks, and 2) designing small but efficient models. Han et al.[han2015deep] proposed deep compression techniques, which consist of pruning, vector quantization, and Huffman coding, to reduce the size of a pretrained network. In the latter category, SqueezeNet[squeezenet] builds an AlexNet-based architecture and achieves nearly the same performance level with 50× fewer parameters than AlexNet. MobileNet[mobilenets] builds an efficient neural network by applying the depthwise separable convolution introduced by Sifre et al.[dwconv]. With depthwise separable convolution, it is easy to build lightweight deep neural networks, which allows end-users to choose the appropriate network size based on application constraints. Because of this simplicity, we also apply this technique, with some modifications, in our residual block to achieve a lean neural network.
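For reference, the following is a minimal PyTorch-style sketch of a depthwise separable convolution (a depthwise 3×3 convolution followed by a 1×1 pointwise convolution), the building block mentioned above. The module name and channel sizes are illustrative assumptions, not taken from MobileNet's released code.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # groups=in_channels makes the 3x3 convolution depthwise (one filter per channel).
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels)
        # The 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Example: a 64-channel feature map of size 32x32.
x = torch.randn(1, 64, 32, 32)
y = DepthwiseSeparableConv(64, 64)(x)
print(y.shape)  # torch.Size([1, 64, 32, 32])
```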
III. Proposed Method
As mentioned in Section 1, we propose two main models: CARN and CARN-M. CARN is designed to be a high-performing SR model while keeping the number of operations low compared to state-of-the-art methods. Based on CARN, we design CARN-M, which is a much more efficient SR model in terms of both parameters and operations.
3.1 Cascading Residual Network
Our CARN model is based on ResNet[resnet2016]. The main difference between CARN and ResNet is the presence of local and global cascading modules. Fig. 2 (b) graphically depicts how the global cascading occurs: the outputs of intermediary layers are cascaded into the higher layers and finally converge on a single 1×1 convolution layer. Note that the intermediary layers are implemented as cascading blocks, which host local cascading connections themselves. Such local cascading operations are shown in (c) and (d). Local cascading is almost identical to global cascading, except that its unit blocks are plain residual blocks.
To express the implementation mathematically, let f be a convolution function and τ be an activation function. Then, we can define the i-th residual block R^i, which has two convolutions followed by a residual addition, as

$$R^i(H^{i-1}; W_R^i) = \tau\Big(H^{i-1} + f\big(\tau\big(f(H^{i-1}; W_R^{i,1})\big); W_R^{i,2}\big)\Big). \tag{1}$$

Here, H^i is the output of the i-th residual block, W_R^i is the parameter set of the entire residual block, and W_R^{i,j} is the parameter of the j-th convolution layer in the i-th block. With this formulation, we denote the output feature of the final residual block of ResNet as H^u, which becomes the input to the upsampling block:

$$H^u = R^u\Big(\cdots R^2\big(R^1\big(f(X; W_c); W_R^1\big); W_R^2\big)\cdots; W_R^u\Big). \tag{2}$$

Note that because our model has a single convolution layer before each residual block, the first residual block gets f(X; W_c) as input, where W_c is the parameter of that convolution layer.
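A minimal sketch of the residual block of Eq. (1), written in PyTorch for illustration; the layer names and the choice of ReLU for τ are assumptions and are not taken from the authors' released code.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions followed by a residual addition, as in Eq. (1)."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)  # stands in for tau in Eq. (1)

    def forward(self, h):
        out = self.conv2(self.relu(self.conv1(h)))  # f(tau(f(h)))
        return self.relu(h + out)                   # tau(h + ...)

x = torch.randn(1, 64, 24, 24)
print(ResidualBlock()(x).shape)  # torch.Size([1, 64, 24, 24])
```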
In contrast to ResNet, our CARN model has a local cascading block, illustrated in block (c) of Fig. 3, instead of a plain residual block. Here, we denote B^{i,j} as the output of the j-th residual block in the i-th cascading block, and W_c^i as the set of parameters of the i-th local cascading block. Then, the i-th local cascading block B_{local}^i is defined as

$$B_{local}^i\big(H^{i-1}; W_c^i\big) \equiv B^{i,U}, \tag{3}$$

where B^{i,U} is defined recursively from the unit block outputs:

$$B^{i,0} = H^{i-1}, \qquad B^{i,u} = f\Big(\big[B^{i,0}, \dots, B^{i,u-1}, R^u\big(B^{i,u-1}; W_R^u\big)\big]; W_c^{i,u}\Big) \quad \text{for } u = 1, \dots, U,$$

where [\cdot] denotes channel-wise concatenation and W_c^{i,u} is the parameter of the u-th local 1×1 convolution layer.
Finally, we define the output feature of the final cascading block H^u by combining both the local and the global cascading:

$$H^u = f\Big(\big[H^0, H^1, \dots, H^{u-1}, B_{local}^u\big(H^{u-1}; W_c^u\big)\big]; W^u\Big) \quad \text{for } u = 1, \dots, 3, \tag{4}$$

where H^0 = f(X; W_c) is the output of the first convolution layer and W^u is the parameter of the u-th global 1×1 convolution layer. In our settings, we set u = 3 in CARN to match its depth with that of the corresponding ResNet.
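To make the cascading formulation of Eqs. (3) and (4) concrete, here is a hedged PyTorch sketch of a local cascading block and of the global cascade over three such blocks. The channel count (64), the number of units and blocks (3), the ReLU activation, and the omission of the upsampling part are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

class CascadingBlock(nn.Module):
    """Local cascading (Eq. 3): each unit output is concatenated with all
    previous features and fused by a 1x1 convolution."""
    def __init__(self, channels=64, n_units=3):
        super().__init__()
        self.units = nn.ModuleList([ResidualBlock(channels) for _ in range(n_units)])
        # The u-th 1x1 convolution fuses (u + 2) * channels features back to `channels`.
        self.fuse = nn.ModuleList(
            [nn.Conv2d((u + 2) * channels, channels, 1) for u in range(n_units)])

    def forward(self, x):
        feats, h = [x], x
        for unit, fuse in zip(self.units, self.fuse):
            feats.append(unit(h))                 # R^u(B^{i,u-1})
            h = fuse(torch.cat(feats, dim=1))     # 1x1 fusion -> B^{i,u}
            feats[-1] = h                         # keep fused feature for later steps
        return h

class CARNBody(nn.Module):
    """Global cascading (Eq. 4) over three cascading blocks."""
    def __init__(self, channels=64, n_blocks=3):
        super().__init__()
        self.entry = nn.Conv2d(3, channels, 3, padding=1)   # H^0 = f(X; W_c)
        self.blocks = nn.ModuleList([CascadingBlock(channels) for _ in range(n_blocks)])
        self.fuse = nn.ModuleList(
            [nn.Conv2d((b + 2) * channels, channels, 1) for b in range(n_blocks)])

    def forward(self, x):
        h = self.entry(x)
        feats = [h]
        for block, fuse in zip(self.blocks, self.fuse):
            feats.append(block(h))
            h = fuse(torch.cat(feats, dim=1))
            feats[-1] = h
        return h  # would be passed to the upsampling block

lr = torch.randn(1, 3, 32, 32)
print(CARNBody()(lr).shape)  # torch.Size([1, 64, 32, 32])
```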
The main difference between CARN and ResNet lies in the cascading mechanism. As shown in Fig. 2, CARN has global cascading connections, represented as the blue arrows, each of which is followed by a 1×1 convolution layer. Cascading on both the local and the global level has two advantages: 1) the model incorporates features from multiple layers, which allows it to learn multi-level representations; 2) the multi-level cascading connections behave as multi-level shortcut connections that quickly propagate information from lower to higher layers (and the gradients in the reverse direction).
Multi-level representation is used in many deep learning methods[lee2017multi, long2015fully] because of its strong performance with simple modifications. Our CARN follows such a scheme, but we apply this arrangement to a variety of feature levels to boost performance, as shown in Eq. (4). By doing so, our model reconstructs the image based on multi-level features. This helps the model restore the details and the context of the image simultaneously. As a result, our models effectively improve not only primitive structures such as stripes or lines, but also complex objects such as hands or street lamps.
Another reason for adopting the cascading scheme is to leverage the effect of shortcut connections. The reason for the improved performance is twofold: first, the propagation of information follows multiple paths, whose benefit is well discussed for many deep learning models[densenet, unet]; second, by adding extra convolution layers, our model can learn to choose the right pathway for the given input information flow. However, the strength of the multiple shortcuts is degraded when we use only one of the local or global cascading connections, especially the local one. We elaborate on the details and present a case study on the effects of the cascading mechanism in Section 4.4.
3.2 Efficient Cascading Residual Network
To improve the efficiency of CARN, we first propose an efficient residual (residual-E) block. We use an approach similar to that of MobileNet[mobilenets], but our formulation is more general. Our residual-E block consists of two 3×3 group convolutions and one 1×1 convolution, as shown in Fig. 3 (b). The latter convolution is the same as pointwise convolution, which is used in depthwise separable convolution[mobilenets]. The former convolution is a group extension of the depthwise convolution. The advantage of using group convolution over depthwise convolution is that it makes the efficiency of the model tunable. More precisely, the user can choose the group size appropriately because the group size and performance are usually in a trade-off relationship. The analysis on the cost efficiency of using the residual-E block is as follows.
Let K be the kernel size and C_in, C_out be the numbers of input and output channels, respectively. Because we retain the feature resolution of the input and output by padding, we can denote F as both the input and output feature size. The cost of a plain residual block, which consists of two K×K convolutions, is then given as

$$E_{\text{Residual}} = 2 \times \big(K \cdot K \cdot C_{in} \cdot C_{out} \cdot F \cdot F\big). \tag{5}$$
Note that we only count the cost of the convolution layers and ignore the addition and activation, because both the plain and the efficient residual blocks have the same cost in terms of addition and activation.
Let G be the group size. Then, the cost of a residual-E block, which consists of two group convolutions and one 1×1 convolution, is as given in equation 6:

$$E_{\text{Residual-E}} = 2 \times \left(\frac{K \cdot K \cdot C_{in} \cdot C_{out} \cdot F \cdot F}{G}\right) + C_{in} \cdot C_{out} \cdot F \cdot F. \tag{6}$$
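A minimal sketch of the residual-E block described above (two 3×3 group convolutions followed by a 1×1 pointwise convolution, plus the residual addition). The group size, channel count, and ReLU placement are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualEBlock(nn.Module):
    """Efficient residual block: two 3x3 group convolutions + one 1x1 convolution."""
    def __init__(self, channels=64, groups=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1),  # pointwise convolution
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

x = torch.randn(1, 64, 24, 24)
print(ResidualEBlock(groups=4)(x).shape)  # torch.Size([1, 64, 24, 24])
```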
By changing the plain residual block to our efficient residual block, we can reduce the computation by the ratio of

$$\frac{E_{\text{Residual-E}}}{E_{\text{Residual}}} = \frac{2 \cdot \frac{K^2 C_{in} C_{out} F^2}{G} + C_{in} C_{out} F^2}{2 \cdot K^2 C_{in} C_{out} F^2} = \frac{1}{G} + \frac{1}{2K^2}. \tag{7}$$
Because our model uses a kernel size of 3×3 for all group convolutions and the number of channels is fixed at 64, using an efficient residual block instead of a standard residual block reduces the computation by a factor of roughly 1.8 up to 14, depending on the group size. To find the best trade-off between performance and computation, we perform an extensive case study in Section 4.4.
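For example, plugging K = 3 into Eq. (7) yields the reduction factors quoted above; the group sizes G = 2 and G = 64 (fully depthwise for 64 channels) are chosen here purely for illustration:

$$\frac{E_{\text{Residual-E}}}{E_{\text{Residual}}} = \frac{1}{G} + \frac{1}{2 \cdot 3^2} = \frac{1}{G} + \frac{1}{18},$$
$$G = 2: \quad \frac{1}{2} + \frac{1}{18} = \frac{5}{9} \approx 0.56 \;\Rightarrow\; \text{about } 1.8\times \text{ fewer operations},$$
$$G = 64: \quad \frac{1}{64} + \frac{1}{18} \approx 0.071 \;\Rightarrow\; \text{about } 14\times \text{ fewer operations}.$$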
To further reduce the parameters, we apply a technique similar to the one used by recursive networks: we share the parameters of the cascading blocks, effectively making the blocks recursive. Fig. 3 (d) shows our block after applying the recursive scheme. This approach reduces the number of parameters of these blocks to about one third of the original, as sketched below.
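The following sketch illustrates only the parameter-sharing idea: one block is applied repeatedly, so its weights are reused at every stage. The inner block here is a simple convolutional placeholder, not the actual cascading block of CARN-M.

```python
import torch
import torch.nn as nn

class RecursiveBody(nn.Module):
    """Applies ONE shared block at every stage instead of three independent blocks."""
    def __init__(self, channels=64, n_stages=3):
        super().__init__()
        # Placeholder block standing in for the cascading block (illustrative only).
        self.shared_block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.n_stages = n_stages

    def forward(self, x):
        h = x
        for _ in range(self.n_stages):
            h = self.shared_block(h)  # same weights reused at every stage
        return h

body = RecursiveBody()
shared_params = sum(p.numel() for p in body.parameters())
independent_params = 3 * sum(p.numel() for p in body.shared_block.parameters())
print(shared_params, independent_params)  # sharing uses 3x fewer parameters here
```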
Despite the above measures, the upsampling block is another obstacle, as the number of channels has to be increased quadratically with respect to the upsampling ratio[espcn2016]. Moreover, we use multi-scale learning to boost the performance, so the parameters of the upsampling block are increased by up to 48% in CARN and 75% in CARN-M. To mitigate this problem, we replace the 3×3 convolution layer with a 1×1 convolution inside the upsampling block. This trick reduces the parameters by nine times but with little degradation in performance.
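Below is a hedged sketch of a sub-pixel upsampling block with the 1×1 convolution substitution described above; the exact layer arrangement in CARN may differ, and a ×2 scale with 64 channels is assumed for illustration.

```python
import torch
import torch.nn as nn

def upsample_block(channels=64, scale=2, use_1x1=True):
    """Sub-pixel upsampling: a convolution expands channels by scale^2,
    then PixelShuffle rearranges them into spatial resolution."""
    k = 1 if use_1x1 else 3
    conv = nn.Conv2d(channels, channels * scale ** 2, k, padding=k // 2)
    return nn.Sequential(conv, nn.PixelShuffle(scale), nn.ReLU(inplace=True))

x = torch.randn(1, 64, 32, 32)
print(upsample_block(use_1x1=True)(x).shape)  # torch.Size([1, 64, 64, 64])

# Parameter comparison: the 1x1 variant uses roughly 9x fewer weights in this layer.
p1 = sum(p.numel() for p in upsample_block(use_1x1=True).parameters())
p3 = sum(p.numel() for p in upsample_block(use_1x1=False).parameters())
print(p1, p3)
```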
3.3 Comparison with State-of-the-Art Methods
Comparison with SRDenseNet. SRDenseNet[37] uses dense blocks and skip connections. The differences from our model are:
1) We use global cascading, which is more general than the skip connection. In SRDenseNet, features of all levels are combined only at the end of the final dense block, whereas our global cascading scheme connects all blocks, which acts as multi-level skip connections.
2) SRDenseNet preserves local information inside a dense block via concatenation, whereas we gather it progressively with 1×1 convolution layers. Using additional 1×1 convolution layers yields a higher representational power.
Comparison with MemNet. The motivation of MemNet[36] is similar to ours. However, there are two main differences from our mechanism.
1) Inside the memory block of MemNet, the output features of each recursive unit are concatenated at the end of the block and then fused with a 1×1 convolution. In contrast, we fuse the features at every possible point in the local block, which can boost the representational power via the additional convolution layers and nonlinearities. In general, this representational power is often left unexploited because of the difficulty of training; we overcome this problem by using the local and global cascading mechanisms. We will elaborate on this in Section