[论文翻译]深度残差学习在图像识别中的应用


原文地址:https://arxiv.org/pdf/1512.03385v1


Deep Residual Learning for Image Recognition

深度残差学习在图像识别中的应用

Kaiming He Xiangyu Zhang Shaoqing Ren Jian Sun Microsoft Research {kahe, v-xiangz, v-shren, jiansun}@microsoft.com

Kaiming He Xiangyu Zhang Shaoqing Ren Jian Sun 微软研究院 {kahe, v-xiangz, v-shren, jiansun}@microsoft.com

Abstract

摘要

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—$8\times$ deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves $3.57\%$ error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

更深的神经网络更难训练。我们提出了一种残差学习框架,以简化远深于以往网络的训练。我们明确地将各层重构为参照层输入学习残差函数,而非学习无参照的函数。我们提供了全面的实证证据,表明这些残差网络更易于优化,并能从大幅增加的深度中获得精度提升。在ImageNet数据集上,我们评估了深度达152层的残差网络——比VGG网络[41]深8倍,但复杂度仍更低。这些残差网络的集成模型在ImageNet测试集上取得了$3.57\%$的错误率,该结果获得了ILSVRC 2015分类任务第一名。我们还给出了在CIFAR-10上对100层和1000层网络的分析。

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a $28\%$ relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions¹, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

表征深度对许多视觉识别任务至关重要。仅凭借我们极深的表征,就在COCO目标检测数据集上实现了28%的相对提升。深度残差网络是我们提交ILSVRC和COCO 2015竞赛的基础方案,这些方案还赢得了ImageNet检测、ImageNet定位、COCO检测和COCO分割任务的第一名。

1. Introduction

1. 引言

Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 50, 40]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multilayer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [41, 44] reveals that network depth is of crucial importance, and the leading results [41, 44, 13, 16] on the challenging ImageNet dataset [36] all exploit “very deep” [41] models, with a depth of sixteen [41] to thirty [16]. Many other nontrivial visual recognition tasks [8, 12, 7, 32, 27] have also greatly benefited from very deep models.

深度卷积神经网络 [22, 21] 在图像分类领域 [21, 50, 40] 取得了一系列突破性进展。深度网络以端到端的多层方式自然整合了低/中/高层特征 [50] 和分类器,且通过堆叠层数(深度)可以丰富特征的"层级"。最新研究 [41, 44] 表明网络深度至关重要,在极具挑战性的 ImageNet 数据集 [36] 上取得领先成绩的模型 [41, 44, 13, 16] 都采用了"极深度" [41] 架构,深度范围从十六层 [41] 到三十层 [16]。许多其他重要的视觉识别任务 [8, 12, 7, 32, 27] 也从极深度模型中获得了显著提升。


Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer “plain” networks. The deeper network has higher training error, and thus higher test error. Similar phenomena on ImageNet are presented in Fig. 4.

图 1: 在 CIFAR-10 数据集上使用 20 层和 56 层"普通"网络的训练误差 (左) 和测试误差 (右) 。更深的网络具有更高的训练误差,因此测试误差也更高。图 4 展示了 ImageNet 上的类似现象。

Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [1, 9], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 9, 37, 13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22].

受深度重要性的驱动,一个问题随之产生:构建更优网络是否只需简单堆叠更多层?回答该问题的障碍在于臭名昭著的梯度消失/爆炸问题 [1, 9] ——它们从一开始就阻碍了模型收敛。不过,该问题已通过归一化初始化 [23, 9, 37, 13] 和中间归一化层 [16] 得到显著缓解,这些技术使得数十层的网络能够通过反向传播 [22] 的随机梯度下降 (SGD) 开始收敛。

When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.

当更深的网络能够开始收敛时,一个退化问题暴露出来:随着网络深度的增加,准确率会先达到饱和(这可能并不意外)然后迅速下降。出乎意料的是,这种退化并非由过拟合引起,正如[11, 42]所述以及我们的实验充分验证的那样,在适当深度的模型中添加更多层会导致更高的训练误差。图1展示了一个典型示例。

The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).

训练准确率的下降表明,并非所有系统都同样易于优化。假设我们有一个较浅的架构和通过添加更多层构建的更深层对应架构。对于深层模型存在一个构造解:添加的层为恒等映射(identity mapping),其余层从已学习的浅层模型中复制而来。这种构造解的存在意味着,深层模型的训练误差不应高于其浅层对应架构。但实验表明,现有求解器无法找到与构造解相当或更优的解(或在可行时间内无法实现)。


Figure 2. Residual learning: a building block.

图 2: 残差学习(Residual Learning)的基本构建模块。

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $\mathcal{H}(\mathbf{x})$ , we let the stacked nonlinear layers fit another mapping of $\mathcal{F}(\mathbf{x}):=\mathcal{H}(\mathbf{x})-\mathbf{x}$ . The original mapping is recast into $\mathcal{F}(\mathbf{x})+\mathbf{x}$ . We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

本文通过引入深度残差学习框架来解决退化问题。我们不再期望每个堆叠层直接拟合期望的底层映射,而是显式地让这些层拟合残差映射。形式上,将期望的底层映射表示为$\mathcal{H}(\mathbf{x})$,我们让堆叠的非线性层拟合另一个映射$\mathcal{F}(\mathbf{x}):=\mathcal{H}(\mathbf{x})-\mathbf{x}$。原始映射被重写为$\mathcal{F}(\mathbf{x})+\mathbf{x}$。我们假设优化残差映射比优化原始的、无参考的映射更容易。极端情况下,如果恒等映射是最优的,那么将残差推向零比通过堆叠非线性层拟合恒等映射更容易。

The formulation of $\mathcal{F}(\mathbf{x})+\mathbf{x}$ can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 34, 49] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.

$\mathcal{F}(\mathbf{x})+\mathbf{x}$ 的公式化表达可通过带有"快捷连接" (shortcut connections) 的前馈神经网络实现 (图 2)。快捷连接 [2, 34, 49] 是指跳过一层或多层的连接方式。在我们的方案中,快捷连接仅执行恒等映射 (identity mapping),其输出会与堆叠层的输出相加 (图 2)。恒等快捷连接既不会引入额外参数,也不会增加计算复杂度。整个网络仍可通过反向传播的随机梯度下降法 (SGD) 进行端到端训练,并且无需修改求解器即可使用常见库 (如 Caffe [19]) 轻松实现。

We present comprehensive experiments on ImageNet [36] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.

我们在ImageNet [36]上进行了全面实验,以展示退化问题并评估我们的方法。实验表明:1) 我们的极深度残差网络易于优化,但与之对应的"普通"网络(仅简单堆叠层)在深度增加时表现出更高的训练误差;2) 我们的深度残差网络能够轻松从大幅增加的深度中获得精度提升,产生显著优于先前网络的结果。

Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.

类似现象在CIFAR-10数据集[20]中同样存在,这表明优化困难及本方法的效果并非特定数据集独有。我们成功在该数据集上训练了超过100层的模型,并探索了超过1000层的模型。

On the ImageNet classification dataset [36], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [41]. Our ensemble has $3.57\%$ top-5 error on the

在ImageNet分类数据集[36]上,我们通过极深的残差网络取得了优异的结果。我们的152层残差网络是迄今在ImageNet上展示过的最深网络,同时复杂度仍低于VGG网络[41]。我们的集成模型在

ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.

ImageNet测试集上取得了$3.57\%$的top-5错误率,并在ILSVRC 2015分类竞赛中获得第一名。这种极深的表示在其他识别任务上也展现出优异的泛化性能,使我们进一步在ILSVRC & COCO 2015竞赛中斩获ImageNet检测、ImageNet定位、COCO检测和COCO分割四项第一名。这一有力证据表明残差学习原理具有普适性,我们预期它同样适用于其他视觉及非视觉问题。

2. Related Work

2. 相关工作

Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 48]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.

残差表示。在图像识别领域,VLAD [18] 是一种通过相对于字典的残差向量进行编码的表示方法,而Fisher Vector [30] 可视为VLAD的概率化版本 [18]。这两种方法都是图像检索和分类中强大的浅层表示 [4, 48]。对于向量量化,研究表明编码残差向量 [17] 比编码原始向量更有效。

In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [45, 46], which relies on variables that represent residual vectors between two scales. It has been shown [3, 45, 46] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.

在低层视觉与计算机图形学中,为求解偏微分方程(PDEs),广泛使用的多重网格法(Multigrid)[3]将系统重构为多尺度下的子问题,每个子问题负责处理粗尺度与细尺度之间的残差解。另一种替代方案是分层基预处理[45,46],该方法依赖于表示两个尺度间残差向量的变量。研究[3,45,46]表明,这些求解器的收敛速度远快于不了解解残差特性的标准求解器。这些方法表明,良好的重构或预处理可以简化优化过程。

Shortcut Connections. Practices and theories that lead to shortcut connections [2, 34, 49] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [34, 49]. In [44, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [39, 38, 31, 47] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an “inception” layer is composed of a shortcut branch and a few deeper branches.

捷径连接。与捷径连接 [2, 34, 49] 相关的实践与理论已被研究多年。训练多层感知机 (MLP) 的早期做法是添加一条从网络输入直连输出的线性层 [34, 49]。在 [44, 24] 中,若干中间层被直接连接到辅助分类器,以缓解梯度消失/爆炸问题。[39, 38, 31, 47] 等论文提出了通过捷径连接实现层响应、梯度和传播误差中心化的方法。在 [44] 中,一个 "inception" 层由一条捷径分支和若干更深的分支组成。

Concurrent with our work, “highway networks” [42, 43] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).

与我们工作同时进行的"高速公路网络"[42,43]提出了带有门控功能的快捷连接[15]。这些门控是数据依赖型且具有参数,而我们的恒等快捷连接是无参数的。当门控快捷连接"关闭"(趋近于零)时,高速公路网络中的层表示非残差函数。相反,我们的公式始终学习残差函数;我们的恒等快捷连接永远不会关闭,所有信息始终会通过,同时学习额外的残差函数。此外,高速公路网络在深度极大增加(如超过100层)时并未展现出准确度提升。

3. Deep Residual Learning

3. 深度残差学习

3.1. Residual Learning

3.1. 残差学习

Let us consider $\mathcal{H}(\mathbf{x})$ as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with $\mathbf{x}$ denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions², then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., $\mathcal{H}(\mathbf{x})-\mathbf{x}$ (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate $\mathcal{H}(\mathbf{x})$, we explicitly let these layers approximate a residual function $\mathcal{F}(\mathbf{x}):=\mathcal{H}(\mathbf{x})-\mathbf{x}$. The original function thus becomes $\mathcal{F}(\mathbf{x})+\mathbf{x}$. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.

让我们将 $\mathcal{H}(\mathbf{x})$ 视为由若干堆叠层(不一定是整个网络)拟合的底层映射,其中 $\mathbf{x}$ 表示这些层中第一层的输入。如果假设多个非线性层可以渐近逼近复杂函数²,那么这等价于假设它们可以渐近逼近残差函数,即 $\mathcal{H}(\mathbf{x})-\mathbf{x}$(假设输入和输出的维度相同)。因此,我们不期望堆叠层逼近 $\mathcal{H}(\mathbf{x})$,而是显式地让这些层逼近残差函数 $\mathcal{F}(\mathbf{x}):=\mathcal{H}(\mathbf{x})-\mathbf{x}$。原始函数因此变为 $\mathcal{F}(\mathbf{x})+\mathbf{x}$。尽管两种形式都应该能够渐近逼近所需函数(如假设所述),但学习的难易程度可能有所不同。

This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.

这种重构的动机源于退化问题中反直觉的现象(图 1,左)。正如我们在引言中讨论的,如果增加的层可以被构造为恒等映射 (identity mapping),那么更深模型的训练误差不应高于其较浅的对应模型。退化问题表明,求解器可能难以用多个非线性层来逼近恒等映射。借助残差学习的重构,如果恒等映射是最优的,求解器只需将多个非线性层的权重推向零,即可逼近恒等映射。

In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.

在实际情况下,恒等映射 (identity mapping) 不太可能是最优解,但我们的重构可能有助于对问题进行预处理。如果最优函数更接近恒等映射而非零映射,那么求解器参照恒等映射寻找扰动,会比从头学习新函数更容易。实验数据 (图 7) 表明,学习到的残差函数通常具有较小响应值,这说明恒等映射提供了合理的预处理条件。

3.2. Identity Mapping by Shortcuts

3.2. 通过捷径实现恒等映射

We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:

我们对每几个堆叠层采用残差学习。一个构建块如图 2 所示。在本文中,我们将构建块定义为:

$$
\mathbf{y}=\mathcal{F}(\mathbf{x},\{W_{i}\})+\mathbf{x}. \tag{1}
$$

Here $\mathbf{x}$ and $\mathbf{y}$ are the input and output vectors of the layers considered. The function $\mathcal{F}(\mathbf{x},\{W_{i}\})$ represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, $\mathcal{F}=W_{2}\sigma(W_{1}\mathbf{x})$ in which $\sigma$ denotes

这里 $\mathbf{x}$ 和 $\mathbf{y}$ 分别是所考虑层的输入和输出向量。函数 $\mathcal{F}(\mathbf{x},\{W_{i}\})$ 表示需要学习的残差映射。对于图 2 中具有两层的示例,$\mathcal{F}=W_{2}\sigma(W_{1}\mathbf{x})$,其中 $\sigma$ 表示

ReLU [29] and the biases are omitted for simplifying notations. The operation $\mathcal{F}+\mathbf{x}$ is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., $\sigma(\mathbf{y})$, see Fig. 2).

ReLU [29],且为简化符号省略了偏置项。操作 $\mathcal{F}+\mathbf{x}$ 通过快捷连接和逐元素相加实现。我们在相加之后再施加第二个非线性变换(即 $\sigma(\mathbf{y})$,见图 2)。
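To make Eqn.(1) and Fig. 2 concrete, here is a minimal sketch of this two-layer building block in PyTorch (the paper's networks can be implemented in common libraries such as Caffe; the class and variable names below are illustrative assumptions, not the authors' code):

```python
import torch
import torch.nn as nn

class BuildingBlock(nn.Module):
    """y = F(x, {W_i}) + x with F = W2 * sigma(W1 * x), as in Eqn.(1) / Fig. 2."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)  # biases omitted, matching the simplified notation
        self.w2 = nn.Linear(dim, dim, bias=False)
        self.relu = nn.ReLU()

    def forward(self, x):
        f = self.w2(self.relu(self.w1(x)))  # residual mapping F(x, {W_i})
        return self.relu(f + x)             # element-wise addition, then the second nonlinearity

block = BuildingBlock(dim=64)
y = block(torch.randn(8, 64))               # identity shortcut: input and output dims are equal
```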

The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).

式(1)中的快捷连接既不引入额外参数,也不增加计算复杂度。这在实际应用中极具吸引力,同时对我们比较普通网络和残差网络也至关重要。通过这种方式,我们可以公平地比较具有相同参数量、深度、宽度和计算成本(仅存在可忽略不计的逐元素加法差异)的普通/残差网络。

The dimensions of $\mathbf{x}$ and $\mathcal{F}$ must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection $W_ {s}$ by the shortcut connections to match the dimensions:

在式 (1) 中,$\mathbf{x}$ 和 $\mathcal{F}$ 的维度必须相等。如果维度不匹配(例如改变输入/输出通道时),我们可以通过快捷连接执行线性投影 $W_ {s}$ 来对齐维度:

$$
\mathbf{y}=\mathcal{F}(\mathbf{x},\{W_{i}\})+W_{s}\mathbf{x}. \tag{2}
$$

We can also use a square matrix $W_ {s}$ in Eqn.(1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus $W_ {s}$ is only used when matching dimensions.

我们也可以在公式(1)中使用方阵 $W_ {s}$。但实验表明,恒等映射已足以解决退化问题且更经济高效,因此 $W_ {s}$ 仅在匹配维度时使用。
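A hedged sketch of Eqn.(2) in a convolutional setting: the projection $W_{s}$ is realized as a $1\times1$ convolution and is applied only when the input/output dimensions differ (this mirrors option B described later; names are illustrative assumptions):

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """y = F(x, {W_i}) + W_s x, with W_s used only to match dimensions (Eqn.(2))."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        if stride != 1 or in_ch != out_ch:
            # linear projection W_s: a 1x1 convolution matching channels / resolution
            self.shortcut = nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)
        else:
            self.shortcut = nn.Identity()   # parameter-free identity shortcut, as in Eqn.(1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.f(x) + self.shortcut(x))

unit = ResidualUnit(64, 128, stride=2)       # dimensions change, so W_s is applied
y = unit(torch.randn(1, 64, 56, 56))         # output: (1, 128, 28, 28)
```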

The form of the residual function $\mathcal{F}$ is flexible. Experiments in this paper involve a function $\mathcal{F}$ that has two or three layers (Fig. 5), while more layers are possible. But if $\mathcal{F}$ has only a single layer, Eqn.(1) is similar to a linear layer: ${\bf y}=W_ {1}{\bf x}+{\bf x}$ , for which we have not observed advantages.

残差函数$\mathcal{F}$的形式是灵活的。本文实验涉及具有两到三层的$\mathcal{F}$函数(图5),但也可以有更多层。但如果$\mathcal{F}$只有单层,则式(1)类似于线性层:${\bf y}=W_ {1}{\bf x}+{\bf x}$,我们尚未观察到其优势。

We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function $\mathcal{F}(\mathbf{x},{W_ {i}})$ can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.

我们还注意到,虽然上述符号为了简化以全连接层为例,但它们同样适用于卷积层。函数$\mathcal{F}(\mathbf{x},{W_ {i}})$可以表示多个卷积层。逐元素相加是在两个特征图上逐通道进行的。

3.3. Network Architectures

3.3. 网络架构

We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.

我们测试了多种普通/残差网络,并观察到了一致的现象。为了提供讨论实例,我们针对ImageNet描述以下两个模型。

Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [41] (Fig. 3, left). The convolutional layers mostly have $3\times3$ filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).

普通网络 (Plain Network)。我们的普通基线网络 (图 3,中) 主要受 VGG 网络 [41] (图 3,左) 设计理念启发。卷积层大多采用 $3\times3$ 滤波器,并遵循两条简单设计原则:(i) 对于相同尺寸的输出特征图,各层滤波器数量保持一致;(ii) 当特征图尺寸减半时,滤波器数量加倍以保持单层时间复杂度。我们直接采用步长为 2 的卷积层实现下采样。网络末端包含全局平均池化层和带 softmax 的 1000 维全连接层。图 3 (中) 所示结构的加权层总数达 34 层。
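The two design rules translate into a simple width schedule; the small sketch below is a restatement of those rules with assumed variable names, and checks the per-stage filter counts and the 34-layer total of the plain baseline:

```python
# Rule (i): constant filter count per feature-map size; rule (ii): double the filters when halving.
widths = [64 * 2 ** i for i in range(4)]    # [64, 128, 256, 512] on 56/28/14/7 feature maps
conv_layers_per_stage = [6, 8, 12, 6]       # 3x3 conv layers per stage in the 34-layer net
assert 1 + sum(conv_layers_per_stage) + 1 == 34   # conv1 + stacked 3x3 layers + final fc
```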

It is worth noticing that our model has fewer filters and lower complexity than VGG nets [41] (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only $18\%$ of VGG-19 (19.6 billion FLOPs).

值得注意的是,我们的模型比VGG网络[41]具有更少的滤波器和更低的复杂度(图 3,左)。我们的34层基线模型的计算量为36亿次FLOPs(乘加运算),仅为VGG-19(196亿次FLOPs)的$18\%$。


Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model [41] (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.

图 3: ImageNet 的网络架构示例。左图:作为参考的 VGG-19 模型 [41] (196 亿 FLOPs)。中图:具有 34 个参数层的普通网络 (36 亿 FLOPs)。右图:具有 34 个参数层的残差网络 (36 亿 FLOPs)。虚线快捷连接用于增加维度。表 1 展示了更多细节和其他变体。

Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by $1\times1$ convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.

残差网络 (Residual Network)。在上述普通网络的基础上,我们插入快捷连接(图 3 右),将网络转变为对应的残差版本。当输入和输出维度相同时,可以直接使用恒等快捷连接(公式 (1))(图 3 中的实线快捷连接)。当维度增加时(图 3 中的虚线快捷连接),我们考虑两种方案:(A) 快捷连接仍执行恒等映射,通过填充零值来增加维度。该方案不引入额外参数;(B) 使用公式 (2) 中的投影快捷连接(通过 $1\times1$ 卷积实现)来匹配维度。对于这两种方案,当快捷连接跨越两种尺寸的特征图时,都以步长 2 执行。
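Option (A) keeps the shortcut parameter-free even across a dimension increase. A hedged sketch follows (the paper only states that such shortcuts are performed with a stride of 2; plain spatial subsampling is assumed here, and the function name is illustrative):

```python
import torch
import torch.nn.functional as F

def option_a_shortcut(x, out_channels):
    """Parameter-free shortcut across a size change: stride-2 subsampling plus zero-padded channels."""
    x = x[:, :, ::2, ::2]                    # performed with a stride of 2, no parameters
    extra = out_channels - x.size(1)         # extra zero entries for the increased dimensions
    return F.pad(x, (0, 0, 0, 0, 0, extra))  # pad order for NCHW tensors: (W, W, H, H, C, C)

x = torch.randn(1, 64, 56, 56)
y = option_a_shortcut(x, 128)                # (1, 128, 28, 28); option (B) would use a 1x1 conv instead
```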

3.4. Implementation

3.4. 实现

Our implementation for ImageNet follows the practice in [21, 41]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A $224\times224$ crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to $60\times10^{4}$ iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16].

我们在ImageNet上的实现遵循[21, 41]的做法。图像短边会随机缩放到[256, 480]区间以进行尺度增强[41]。从图像或其水平翻转中随机裁剪出$224\times224$区域,并减去像素均值[21]。采用[21]中的标准色彩增强方法。按照[16]的做法,我们在每个卷积层后、激活函数前直接应用批归一化(BN)[16]。权重初始化采用[13]的方法,所有普通/残差网络均从头开始训练。使用小批量大小为256的SGD优化器,初始学习率为0.1,当误差停滞时学习率除以10,模型训练总迭代次数达$60\times10^{4}$次。权重衰减系数为0.0001,动量为0.9。遵循[16]的方案,我们没有使用dropout[14]。
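The optimization settings above map onto standard SGD utilities; a hedged sketch in PyTorch (the paper uses a mini-batch of 256 and lowers the learning rate when the error plateaus, so the toy model, batch size, and fixed iteration milestones below are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Placeholder standing in for the 34-layer plain/residual net; BN follows each conv, before activation.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1000),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
# The paper divides the lr by 10 when the error plateaus; fixed milestones here are an assumption.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150_000, 300_000], gamma=0.1)

for step in range(2):                                  # up to 60x10^4 iterations in the paper
    images = torch.randn(4, 3, 224, 224)               # toy batch; the paper uses 256
    targets = torch.randint(0, 1000, (4,))
    loss = nn.functional.cross_entropy(model(images), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```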

In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [41, 13], and average the scores at multiple scales (images are resized such that the shorter side is in $\{224, 256, 384, 480, 640\}$).

在测试阶段,为进行对比研究,我们采用标准10-crop测试[21]。为获得最佳结果,我们采用[41, 13]中的全卷积形式,并在多尺度下对分数进行平均(图像尺寸调整至短边为$\{224, 256, 384, 480, 640\}$)。
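The standard 10-crop protocol can be expressed with torchvision's `TenCrop`; a hedged sketch that averages softmax scores over the ten views (`model` is assumed to be a trained classifier, and the resize to 256 is a common choice rather than something fixed by the paper):

```python
import torch
from torchvision import transforms

ten_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),        # 4 corners + center, plus their horizontal flips
    transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(c) for c in crops])),
])

def predict_10crop(model, pil_image):
    crops = ten_crop(pil_image)                   # tensor of shape (10, 3, 224, 224)
    with torch.no_grad():
        scores = model(crops).softmax(dim=1)      # per-crop class scores
    return scores.mean(dim=0)                     # average over the 10 views
```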

4. Experiments

4. 实验

4.1. ImageNet Classification

4.1. ImageNet 分类

We evaluate our method on the ImageNet 2012 classification dataset [36] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.

我们在包含1000个类别的ImageNet 2012分类数据集[36]上评估了我们的方法。模型在128万张训练图像上进行训练,并在5万张验证图像上进行评估。我们还通过测试服务器获得了在10万张测试图像上的最终结果。我们同时评估top-1和top-5错误率。

Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.

普通网络。我们首先评估了18层和34层的普通网络。34层普通网络如图 3 (中) 所示。18层普通网络结构类似。详细架构见表 1。

The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the

表 2 中的结果表明,34 层的普通网络比 18 层的普通网络具有更高的验证误差。为了揭示原因,我们在图 4 (左) 中比较了它们在训练过程中的训练/验证误差。我们观察到了退化问题 -

| layer name | output size | 18-layer | 34-layer | 50-layer | 101-layer | 152-layer |
| --- | --- | --- | --- | --- | --- | --- |
| conv1 | 112×112 | 7×7, 64, stride 2 (all models) | | | | |
| conv2_x | 56×56 | 3×3 max pool, stride 2 (all models) | | | | |
| | | [3×3, 64; 3×3, 64] ×2 | [3×3, 64; 3×3, 64] ×3 | [1×1, 64; 3×3, 64; 1×1, 256] ×3 | [1×1, 64; 3×3, 64; 1×1, 256] ×3 | [1×1, 64; 3×3, 64; 1×1, 256] ×3 |
| conv3_x | 28×28 | [3×3, 128; 3×3, 128] ×2 | [3×3, 128; 3×3, 128] ×4 | [1×1, 128; 3×3, 128; 1×1, 512] ×4 | [1×1, 128; 3×3, 128; 1×1, 512] ×4 | [1×1, 128; 3×3, 128; 1×1, 512] ×8 |
| conv4_x | 14×14 | [3×3, 256; 3×3, 256] ×2 | [3×3, 256; 3×3, 256] ×6 | [1×1, 256; 3×3, 256; 1×1, 1024] ×6 | [1×1, 256; 3×3, 256; 1×1, 1024] ×23 | [1×1, 256; 3×3, 256; 1×1, 1024] ×36 |
| conv5_x | 7×7 | [3×3, 512; 3×3, 512] ×2 | [3×3, 512; 3×3, 512] ×3 | [1×1, 512; 3×3, 512; 1×1, 2048] ×3 | [1×1, 512; 3×3, 512; 1×1, 2048] ×3 | [1×1, 512; 3×3, 512; 1×1, 2048] ×3 |
| | 1×1 | average pool, 1000-d fc, softmax (all models) | | | | |
| FLOPs | | 1.8×10⁹ | 3.6×10⁹ | 3.8×10⁹ | 7.6×10⁹ | 11.3×10⁹ |

| 层名 | 输出尺寸 | 18层 | 34层 | 50层 | 101层 | 152层 |
| --- | --- | --- | --- | --- | --- | --- |
| conv1 | 112×112 | 7×7, 64, 步长2(各模型相同) | | | | |
| conv2_x | 56×56 | 3×3最大池化, 步长2(各模型相同) | | | | |
| | | [3×3, 64; 3×3, 64] ×2 | [3×3, 64; 3×3, 64] ×3 | [1×1, 64; 3×3, 64; 1×1, 256] ×3 | [1×1, 64; 3×3, 64; 1×1, 256] ×3 | [1×1, 64; 3×3, 64; 1×1, 256] ×3 |
| conv3_x | 28×28 | [3×3, 128; 3×3, 128] ×2 | [3×3, 128; 3×3, 128] ×4 | [1×1, 128; 3×3, 128; 1×1, 512] ×4 | [1×1, 128; 3×3, 128; 1×1, 512] ×4 | [1×1, 128; 3×3, 128; 1×1, 512] ×8 |
| conv4_x | 14×14 | [3×3, 256; 3×3, 256] ×2 | [3×3, 256; 3×3, 256] ×6 | [1×1, 256; 3×3, 256; 1×1, 1024] ×6 | [1×1, 256; 3×3, 256; 1×1, 1024] ×23 | [1×1, 256; 3×3, 256; 1×1, 1024] ×36 |
| conv5_x | 7×7 | [3×3, 512; 3×3, 512] ×2 | [3×3, 512; 3×3, 512] ×3 | [1×1, 512; 3×3, 512; 1×1, 2048] ×3 | [1×1, 512; 3×3, 512; 1×1, 2048] ×3 | [1×1, 512; 3×3, 512; 1×1, 2048] ×3 |
| | 1×1 | 全局平均池化, 1000维全连接, softmax(各模型相同) | | | | |
| FLOPs | | 1.8×10⁹ | 3.6×10⁹ | 3.8×10⁹ | 7.6×10⁹ | 11.3×10⁹ |

Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Downsampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.

表 1: ImageNet架构。构建块显示在括号中 (另见图 5),并列出了堆叠的块数。下采样由步长为2的conv3_1、conv4_1和conv5_1执行。
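The stacked block counts of Table 1 can be checked in a few lines; the dictionary below is a restatement of the table (names are illustrative), verifying that the counts add up to the advertised depths:

```python
# (block type, number of blocks in conv2_x .. conv5_x), as listed in Table 1
RESNET_STAGES = {
    18:  ("basic",      [2, 2, 2, 2]),
    34:  ("basic",      [3, 4, 6, 3]),
    50:  ("bottleneck", [3, 4, 6, 3]),
    101: ("bottleneck", [3, 4, 23, 3]),
    152: ("bottleneck", [3, 8, 36, 3]),
}

def total_depth(block_type, repeats):
    layers_per_block = 2 if block_type == "basic" else 3
    return 1 + layers_per_block * sum(repeats) + 1    # conv1 + weighted layers in blocks + fc

assert all(total_depth(*cfg) == depth for depth, cfg in RESNET_STAGES.items())
```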


Figure 4. Training on ImageNet. Thin curves denote training error, and bold curves denote validation error of the center crops. Left: plain networks of 18 and 34 layers. Right: ResNets of 18 and 34 layers. In this plot, the residual networks have no extra parameter compared to their plain counterparts.

图 4: ImageNet 训练结果。细线表示训练误差,粗线表示中心裁剪的验证误差。左图:18 层和 34 层的普通网络。右图:18 层和 34 层的 ResNet (残差网络)。本图中,残差网络与其对应的普通网络相比没有额外参数。

Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.

| | plain | ResNet |
| --- | --- | --- |
| 18 layers | 27.94 | 27.88 |
| 34 layers | 28.54 | 25.03 |

表 2. ImageNet验证集上的Top-1错误率(单位:%,10-crop测试)。此处ResNet与其对应的普通网络相比没有额外参数。图4展示了训练过程。

| | plain | ResNet |
| --- | --- | --- |
| 18层网络 | 27.94 | 27.88 |
| 34层网络 | 28.54 | 25.03 |

34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.

34层普通网络在整个训练过程中具有更高的训练误差,尽管18层普通网络的解空间是34层网络解空间的子集。

We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error³. The reason for such optimization difficulties will be studied in the future.

我们认为这种优化困难不太可能是由梯度消失引起的。这些普通网络采用批量归一化(BN) [16]进行训练,该方法能确保前向传播信号具有非零方差。我们还验证了在BN作用下,反向传播梯度保持健康范数。因此前向和反向信号均未消失。实际上,34层普通网络仍能取得具有竞争力的准确率(表3),表明求解器在一定程度上是有效的。我们推测深度普通网络可能具有指数级低的收敛速率,这会影响训练误差的降低³。此类优化困难的原因将在未来研究中探讨。

Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of $3\times3$ filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.

残差网络 (Residual Networks)。接下来我们评估18层和34层的残差网络 (ResNets)。基线架构与上述普通网络相同,只是如图 3 (右) 所示,为每对 $3\times3$ 滤波器添加了一条快捷连接。在第一组比较中 (表 2 和图 4 右),我们对所有快捷连接使用恒等映射 (identity mapping),并通过零填充 (zero-padding) 来增加维度 (选项 A)。因此,与普通网络相比,它们没有额外的参数。

We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by $2.8\%$). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.

从表 2 和图 4 中我们得出三个主要观察结果。首先,残差学习 (residual learning) 使情况发生了逆转——34 层 ResNet 的表现优于 18 层 ResNet (提升 $2.8\%$)。更重要的是,34 层 ResNet 展现出明显更低的训练误差,并且能够很好地泛化到验证数据。这表明在该设置下退化问题得到了有效解决,我们成功通过增加深度获得了精度提升。

Second, compared to its plain counterpart, the 34-layer

其次,与其普通版本相比,34层

Table 3. Error rates (%, 10-crop testing) on ImageNet validation. VGG-16 is based on our test. ResNet-50/101/152 are of option B that only uses projections for increasing dimensions.

| model | top-1 err. | top-5 err. |
| --- | --- | --- |
| VGG-16 [41] | 28.07 | 9.33 |
| GoogLeNet [44] | - | 9.15 |
| PReLU-net [13] | 24.27 | 7.38 |
| plain-34 | 28.54 | 10.02 |
| ResNet-34 A | 25.03 | 7.76 |
| ResNet-34 B | 24.52 | 7.46 |
| ResNet-34 C | 24.19 | 7.40 |
| ResNet-50 | 22.85 | 6.71 |
| ResNet-101 | 21.75 | 6.05 |
| ResNet-152 | 21.43 | 5.71 |

表 3. ImageNet验证集上的错误率 (%,10-crop测试)。VGG-16基于我们的测试结果。ResNet-50/101/152采用仅通过投影增加维度的选项B。

| 模型 | top-1 错误率 | top-5 错误率 |
| --- | --- | --- |
| VGG-16 [41] | 28.07 | 9.33 |
| GoogLeNet [44] | - | 9.15 |
| PReLU-net [13] | 24.27 | 7.38 |
| plain-34 | 28.54 | 10.02 |
| ResNet-34 A | 25.03 | 7.76 |
| ResNet-34 B | 24.52 | 7.46 |
| ResNet-34 C | 24.19 | 7.40 |
| ResNet-50 | 22.85 | 6.71 |
| ResNet-101 | 21.75 | 6.05 |
| ResNet-152 | 21.43 | 5.71 |

Table 4. Error rates (%) of single-model results on the ImageNet validation set (except † reported on the test set).

| method | top-1 err. | top-5 err. |
| --- | --- | --- |
| VGG [41] (ILSVRC'14) | - | 8.43† |
| GoogLeNet [44] (ILSVRC'14) | - | 7.89 |
| VGG [41] (v5) | 24.4 | 7.1 |
| PReLU-net [13] | 21.59 | 5.71 |
| BN-inception [16] | 21.99 | 5.81 |
| ResNet-34 B | 21.84 | 5.71 |
| ResNet-34 C | 21.53 | 5.60 |
| ResNet-50 | 20.74 | 5.25 |
| ResNet-101 | 19.87 | 4.60 |
| ResNet-152 | 19.38 | 4.49 |

表 4. ImageNet验证集上单模型结果的错误率 (%) (除†标注为测试集结果)。

| 方法 | top-1 错误率 | top-5 错误率 |
| --- | --- | --- |
| VGG [41] (ILSVRC'14) | - | 8.43† |
| GoogLeNet [44] (ILSVRC'14) | - | 7.89 |
| VGG [41] (v5) | 24.4 | 7.1 |
| PReLU-net [13] | 21.59 | 5.71 |
| BN-inception [16] | 21.99 | 5.81 |
| ResNet-34 B | 21.84 | 5.71 |
| ResNet-34 C | 21.53 | 5.60 |
| ResNet-50 | 20.74 | 5.25 |
| ResNet-101 | 19.87 | 4.60 |
| ResNet-152 | 19.38 | 4.49 |

| method | top-5 err. (test) |
| --- | --- |
| VGG [41] (ILSVRC'14) | 7.32 |
| GoogLeNet [44] (ILSVRC'14) | 6.66 |
| VGG [41] (v5) | 6.8 |
| PReLU-net [13] | 4.94 |
| BN-inception [16] | 4.82 |
| ResNet (ILSVRC'15) | 3.57 |

Table 5. Error rates (%) of ensembles. The top-5 error is on the test set of ImageNet and reported by the test server.

| 方法 | top-5 错误率 (测试集) |
| --- | --- |
| VGG [41] (ILSVRC'14) | 7.32 |
| GoogLeNet [44] (ILSVRC'14) | 6.66 |
| VGG [41] (v5) | 6.8 |
| PReLU-net [13] | 4.94 |
| BN-inception [16] | 4.82 |
| ResNet (ILSVRC'15) | 3.57 |

表5: 集成模型的错误率 (%)。top-5错误率基于ImageNet测试集,由测试服务器报告。

ResNet reduces the top-1 error by $3.5\%$ (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.

ResNet 将 top-1 错误率降低了 $3.5\%$ (表 2),这源于训练误差的成功降低 (图 4 右 vs. 左)。这一对比验证了残差学习在极深系统中的有效性。

Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.

最后,我们还注意到18层的普通/残差网络准确率相当(表2),但18层ResNet收敛速度更快(图4右 vs. 左)。当网络"不过度深"(此处为18层)时,当前的SGD求解器仍能为普通网络找到良好解。在这种情况下,ResNet通过早期提供更快的收敛速度来简化优化过程。

Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameterfree (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.

恒等捷径与投影捷径对比。我们已证明无参数的恒等捷径有助于训练。接下来我们研究投影捷径(公式(2))。表3比较了三种方案:(A) 零填充捷径用于维度提升,且所有捷径均不含参数(与表2和图4右相同);(B) 投影捷径用于维度提升,其他捷径保持恒等;(C) 所有捷径均为投影。


Figure 5. A deeper residual function $\mathcal{F}$ for ImageNet. Left: a building block (on $56\times56$ feature maps) as in Fig. 3 for ResNet-34. Right: a “bottleneck” building block for ResNet-50/101/152.

图 5: ImageNet 使用的更深层残差函数 $\mathcal{F}$。左图: ResNet-34 采用的构建块(基于 $56\times56$ 特征图),与图 3 结构一致。右图: ResNet-50/101/152 采用的"瓶颈"结构构建块。

Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.

表 3 显示,所有三个选项都明显优于普通版本。B 略优于 A。我们认为这是因为 A 中的零填充维度确实没有残差学习。C 比 B 稍好,我们将此归因于多个 (十三个) 投影捷径引入的额外参数。但 A/B/C 之间的微小差异表明,投影捷径对于解决退化问题并非必需。因此,在本文后续部分我们不使用选项 C,以降低内存/时间复杂度和模型大小。恒等捷径对于不增加下文介绍的瓶颈架构复杂度尤为重要。

Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design⁴. For each residual function $\mathcal{F}$, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are $1\times1$, $3\times3$, and $1\times1$ convolutions, where the $1\times1$ layers are responsible for reducing and then increasing (restoring) dimensions, leaving the $3\times3$ layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.

更深的瓶颈架构。接下来我们介绍为ImageNet设计的更深网络。出于对可承受训练时间的考虑,我们将构建块修改为瓶颈设计⁴。对于每个残差函数$\mathcal{F}$,我们使用3层堆叠而非2层(图5)。这三层分别是$1\times1$、$3\times3$和$1\times1$卷积,其中$1\times1$层负责先降低再恢复维度,使$3\times3$层成为具有较小输入/输出维度的瓶颈。图5展示了一个示例,两种设计具有相近的时间复杂度。
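A hedged PyTorch sketch of the bottleneck block of Fig. 5 (right): a $1\times1$ convolution reduces the dimensions, a $3\times3$ convolution operates on the reduced dimensions, and a second $1\times1$ convolution restores them, with the parameter-free identity shortcut added before the final ReLU (BN placement follows Sec. 3.4; the class name and channel numbers are illustrative):

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 (reduce) -> 3x3 -> 1x1 (restore) residual function, e.g. 256 -> 64 -> 64 -> 256 channels."""
    def __init__(self, channels, reduced):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, reduced, 1, bias=False), nn.BatchNorm2d(reduced), nn.ReLU(),
            nn.Conv2d(reduced, reduced, 3, padding=1, bias=False), nn.BatchNorm2d(reduced), nn.ReLU(),
            nn.Conv2d(reduced, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.f(x) + x)        # parameter-free identity shortcut at both high-dim ends

block = Bottleneck(channels=256, reduced=64)
y = block(torch.randn(1, 256, 56, 56))         # same shape in and out
```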

The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.

无参数的恒等捷径连接对于瓶颈架构尤为重要。如果将图5(右)中的恒等捷径替换为投影连接,可以证明时间复杂度和模型大小都会翻倍,因为该捷径连接到了两个高维端。因此恒等捷径能为瓶颈设计带来更高效的模型。

50-layer ResNet: We replace each 2-layer block in the

50层ResNet:我们将34层网络中的每个2层块替换为

34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.

这种3层瓶颈块,得到一个50层的ResNet (表1)。我们使用选项B来增加维度。该模型的计算量为38亿FLOPs。

101-layer and 152-layer ResNets: We construct 101- layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).

101层和152层ResNet:我们通过使用更多3层块构建了101层和152层ResNet (表1)。值得注意的是,尽管深度显著增加,152层ResNet (113亿FLOPs) 的复杂度仍低于VGG-16/19网络 (153/196亿FLOPs)。

The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).

50/101/152层的ResNet比34层的准确率有显著提升 (表3和表4)。我们没有观察到退化问题,因此通过大幅增加深度获得了显著的精度提升。在所有评估指标上都观察到了深度带来的优势 (表3和表4)。

Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of $4.49\%$. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to $3.57\%$ top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.

与现有最佳方法的比较。在表4中,我们与之前最佳的单模型结果进行了对比。我们的基线34层ResNet已取得极具竞争力的准确率。152层ResNet的单模型top-5验证错误率为$4.49\%$。这一单模型结果超越了之前所有的集成方法结果(表5)。我们组合了六个不同深度的模型形成集成(提交时仅包含两个152层模型),在测试集上实现了$3.57\%$的top-5错误率(表5)。该方案获得了ILSVRC 2015竞赛第一名。

4.2. CIFAR-10 and Analysis

4.2. CIFAR-10 与分析

We conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.

我们在CIFAR-10数据集[20]上进行了更多研究,该数据集包含5万张训练图像和1万张测试图像,共10个类别。我们展示了在训练集上训练并在测试集上评估的实验。我们的重点在于极深度网络的行为,而非追求最先进的结果,因此我们有意采用如下简单架构。

The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are $32\times32$ images, with the per-pixel mean subtracted. The first layer is $3\times3$ convolutions. Then we use a stack of $6n$ layers with $3\times3$ convolutions on the feature maps of sizes $\{32, 16, 8\}$ respectively, with $2n$ layers for each feature map size. The numbers of filters are $\{16, 32, 64\}$ respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally $6n{+}2$ stacked weighted layers. The following table summarizes the architecture:

普通/残差架构遵循图3(中/右)的形式。网络输入为$32\times32$图像,已减去逐像素均值。第一层是$3\times3$卷积。随后我们使用一组$6n$层,分别在尺寸为$\{32, 16, 8\}$的特征图上进行$3\times3$卷积,每个特征图尺寸对应$2n$层。滤波器数量分别为$\{16, 32, 64\}$。下采样通过步长为2的卷积实现。网络以全局平均池化、10路全连接层和softmax作为结尾。总计有$6n{+}2$个堆叠的加权层。下表总结了该架构:

| output map size | 32×32 | 16×16 | 8×8 |
| --- | --- | --- | --- |
| # layers | 1+2n | 2n | 2n |
| # filters | 16 | 32 | 64 |

| 输出映射尺寸 | 32×32 | 16×16 | 8×8 |
| --- | --- | --- | --- |
| 层数 | 1+2n | 2n | 2n |
| 滤波器数量 | 16 | 32 | 64 |

When shortcut connections are used, they are connected to the pairs of $3\times3$ layers (totally $3n$ shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.

当使用快捷连接时,它们会连接到成对的$3\times3$层(共$3n$个快捷连接)。在该数据集上,我们在所有情况下都使用恒等快捷连接(即选项A),因此我们的残差模型与普通模型具有完全相同的深度、宽度和参数数量。
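Putting the pieces above together, here is a hedged sketch of the $6n{+}2$-layer CIFAR-10 network with option (A) parameter-free shortcuts (the stride-2 identity is again taken by plain subsampling, which is an assumption; the function and class names are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Two 3x3 conv layers; option (A) zero-padded identity shortcut when the dimensions change."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.stride, self.extra = stride, out_ch - in_ch

    def forward(self, x):
        out = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        if self.stride > 1:                           # shortcut across a size/width change
            x = F.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, 0, self.extra))
        return F.relu(out + x)

def cifar_resnet(n):
    """6n+2 layers: 3x3 conv, then 2n layers on each of the {32,16,8} maps with {16,32,64} filters, then fc."""
    layers = [nn.Conv2d(3, 16, 3, padding=1, bias=False), nn.BatchNorm2d(16), nn.ReLU()]
    in_ch = 16
    for out_ch in (16, 32, 64):
        for i in range(n):
            stride = 2 if (out_ch != 16 and i == 0) else 1   # subsample by stride-2 convolution
            layers.append(BasicBlock(in_ch, out_ch, stride))
            in_ch = out_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10)]
    return nn.Sequential(*layers)

net = cifar_resnet(n=3)                     # 20 layers; n=9 gives 56 layers, n=18 gives the 110-layer net
logits = net(torch.randn(2, 3, 32, 32))     # -> (2, 10)
```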

Table 6. Classification error on the CIFAR-10 test set. All methods are with data augmentation. For ResNet-110, we run it 5 times and show “best (mean $\pm$ std)” as in [43].

| method | # layers | # params | error (%) |
| --- | --- | --- | --- |
| Maxout [10] | - | - | 9.38 |
| NIN [25] | - | - | 8.81 |
| DSN [24] | - | - | 8.22 |
| FitNet [35] | 19 | 2.5M | 8.39 |
| Highway [42,43] | 19 | 2.3M | 7.54 (7.72±0.16) |
| Highway [42,43] | 32 | 1.25M | 8.80 |
| ResNet | 20 | 0.27M | 8.75 |
| ResNet | 32 | 0.46M | 7.51 |
| ResNet | 44 | 0.66M | 7.17 |
| ResNet | 56 | 0.85M | 6.97 |
| ResNet | 110 | 1.7M | 6.43 (6.61±0.16) |
| ResNet | 1202 | 19.4M | 7.93 |

表 6. CIFAR-10 测试集上的分类错误率。所有方法均采用数据增强。对于 ResNet-110,我们运行了 5 次并按照 [43] 的方式展示 "最佳 (均值 $\pm$ 标准差)"。

| 方法 | 层数 | 参数量 | 错误率 (%) |
| --- | --- | --- | --- |
| Maxout [10] | - | - | 9.38 |
| NIN [25] | - | - | 8.81 |
| DSN [24] | - | - | 8.22 |
| FitNet [35] | 19 | 2.5M | 8.39 |
| Highway [42,43] | 19 | 2.3M | 7.54 (7.72±0.16) |
| Highway [42,43] | 32 | 1.25M | 8.80 |
| ResNet | 20 | 0.27M | 8.75 |
| ResNet | 32 | 0.46M | 7.51 |
| ResNet | 44 | 0.66M | 7.17 |
| ResNet | 56 | 0.85M | 6.97 |
| ResNet | 110 | 1.7M | 6.43 (6.61±0.16) |
| ResNet | 1202 | 19.4M | 7.93 |

We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [13] and BN [16] but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a $32\times32$ crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original $32\times32$ image.

我们使用0.0001的权重衰减 (weight decay) 和0.9的动量,并采用[13]中的权重初始化方法和BN[16],但不使用dropout。这些模型在两块GPU上以128的小批量大小进行训练。初始学习率为0.1,在32k和48k次迭代时除以10,并在64k次迭代时终止训练(该设置基于45k/5k的训练/验证集划分确定)。训练时采用[24]中的简单数据增强方法:每边填充4像素,然后从填充后的图像或其水平翻转中随机裁剪出$32\times32$的区域。测试时仅评估原始$32\times32$图像的单一视图。
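The augmentation and schedule above correspond directly to standard utilities; a hedged sketch (the per-pixel mean subtraction is approximated with a per-channel `Normalize` using commonly quoted CIFAR-10 channel means, and `net` is a placeholder for the 6n+2 model sketched earlier):

```python
import torch
import torch.nn as nn
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),        # 4 pixels padded on each side, random 32x32 crop
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (1.0, 1.0, 1.0)),  # stand-in for mean subtraction
])
test_transform = transforms.Compose([transforms.ToTensor(),
                                     transforms.Normalize((0.4914, 0.4822, 0.4465), (1.0, 1.0, 1.0))])

net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # placeholder for cifar_resnet(n)
optimizer = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[32_000, 48_000], gamma=0.1)
# Train with mini-batches of 128, step the scheduler once per iteration, and stop at 64k iterations.
```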

We compare $n=\{3, 5, 7, 9\}$, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [42]), suggesting that such an optimization difficulty is a fundamental problem.

我们比较了 $n=\{3, 5, 7, 9\}$,分别得到20层、32层、44层和56层网络。图6 (左) 展示了普通网络的表现。深层普通网络受深度增加的影响,在更深时表现出更高的训练误差。这种现象与ImageNet (图4,左) 和MNIST (见 [42]) 上的情况类似,表明这种优化困难是一个根本性问题。

Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.

图 6 (中) 展示了 ResNet 的行为表现。与 ImageNet 的情况(图 4,右)类似,我们的 ResNet 成功克服了优化难题,并在深度增加时展现出准确率提升。

We further explore $n=18$ that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging⁵. So we use 0.01 to warm up the training until the training error is below $80\%$ (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet [35] and Highway [42] (Table 6), yet is among the state-of-the-art results ($6.43\%$, Table 6).

我们进一步探索了$n=18$的情况,得到一个110层的ResNet。在这种情况下,我们发现初始学习率0.1略大,难以开始收敛⁵。因此我们先用0.01对训练进行预热,直到训练误差降至$80\%$以下(约400次迭代),然后恢复为0.1继续训练。其余学习率调度与之前相同。这个110层网络收敛良好(图6,中)。相比FitNet[35]和Highway[42]等其他又深又窄的网络(表6),它的参数量更少,却达到了当前最优结果之一($6.43\%$,表6)。
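The warm-up used for the 110-layer net is a small piece of control logic; a minimal sketch of that rule (variable and function names are illustrative, and computing the training error is left to the surrounding loop):

```python
def warmup_lr(optimizer, train_error, warmed_up):
    """Keep lr at 0.01 until the training error falls below 80%, then return to 0.1."""
    if not warmed_up and train_error < 0.80:
        for group in optimizer.param_groups:
            group["lr"] = 0.1          # resume the normal schedule (divide by 10 at 32k/48k later)
        warmed_up = True
    return warmed_up
```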


Figure 6. Training on CIFAR-10. Dashed lines denote training error, and bold lines denote testing error. Left: plain networks. The error of plain-110 is higher than $60%$ and not displayed. Middle: ResNets. Right: ResNets with 110 and 1202 layers.

图 6: CIFAR-10 上的训练结果。虚线表示训练误差,粗线表示测试误差。左图:普通网络,其中 plain-110 的误差高于 60%,未予显示。中图:ResNet。右图:110 层与 1202 层的 ResNet。


Figure 7. Standard deviations (std) of layer responses on CIFAR10. The responses are the outputs of each $3\times3$ layer, after BN and before non linearity. Top: the layers are shown in their original order. Bottom: the resp