# SIFT Meets CNN: A Decade Survey of Instance Retrieval

###### ABSTRACT

In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (SIFT being the de facto choice) has been extensively studied for over a decade, owing to the advantage of SIFT in handling image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and have demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into methods using large, medium-sized, and small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single pass of the image through the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT-based and CNN-based methods. After analyzing and comparing the retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.

Instance retrieval, SIFT, convolutional neural network, literature survey.

## Introduction

Content-based image retrieval (CBIR) has been a long-standing research topic in the computer vision community. The study of CBIR truly started in the early 1990s. Images were indexed by visual cues, such as texture and color, and a myriad of algorithms and image retrieval systems were proposed. A straightforward strategy is to extract global descriptors, an idea that dominated the image retrieval community in the 1990s and early 2000s. Yet a well-known problem is that global signatures may fail to be invariant to image changes such as illumination, translation, occlusion and truncation. These variations compromise retrieval accuracy and limit the application scope of global descriptors. This problem has given rise to local feature based image retrieval.

The focus of this survey is instance-level image retrieval. In this task, given a query image depicting a particular object/scene/architecture, the aim is to retrieve images containing the same object/scene/architecture, possibly captured under different views or illumination, or with occlusions. Instance retrieval departs from class retrieval [1] in that the latter aims at retrieving images of the same class as the query. In the following, if not specified, we use “image retrieval” and “instance retrieval” interchangeably.

The milestones of instance retrieval in the past years are presented in Fig. 1, in which the eras of the SIFT-based and CNN-based methods are highlighted. The majority of traditional methods can be considered to end in 2000, when Smeulders et al. [2] presented a comprehensive survey of CBIR “at the end of the early years”. Three years later (2003), the Bag-of-Words (BoW) model was introduced to the image retrieval community [3], and in 2004 it was applied to image classification [4], both relying on the SIFT descriptor [5]. The retrieval community has since witnessed the prominence of the BoW model for over a decade, during which many improvements were proposed. In 2012, Krizhevsky et al. [6] achieved the state-of-the-art recognition accuracy in ILSVRC 2012 with AlexNet, exceeding the previous best results by a large margin. Since then, the research focus has begun to shift to deep learning based methods [7, 8, 9, 10], especially the convolutional neural network (CNN).

Fig. 1: Milestones of instance retrieval. After a survey of methods before the year 2000 by Smeulders et al. [2], Sivic and Zisserman [3] proposed Video Google in 2003, marking the beginning of the BoW model. Then, hierarchical k-means and approximate k-means were proposed by Stewénius and Nistér [11] and Philbin et al. [12], respectively, marking the use of large codebooks in retrieval. In 2008, Jégou et al. [13] proposed Hamming Embedding, a milestone in using medium-sized codebooks. Then, compact visual representations for retrieval were proposed by Perronnin et al. [14] and Jégou et al. [15] in 2010. Although SIFT-based methods were still moving forward, CNN-based methods began to gradually take over, following the pioneering work of Krizhevsky et al. [6]. In 2014, Razavian et al. [7] proposed a hybrid method extracting multiple CNN features from an image. Babenko et al. [8] were the first to fine-tune a CNN model for generic instance retrieval. Both [9, 10] employ the column features from pre-trained CNN models, and [10] inspired later state-of-the-art methods. These milestones are representative works of the categorization scheme in this survey.

The SIFT-based methods mostly rely on the BoW model. BoW was originally proposed for modeling documents, because text is naturally parsed into words. It builds a word histogram for a document by accumulating word responses into a global vector. In the image domain, the introduction of the scale-invariant feature transform (SIFT) made the BoW model feasible. Originally, SIFT comprised a detector and a descriptor, but the two are now used in isolation; in this survey, unless otherwise specified, SIFT refers to the 128-dim descriptor, a common practice in the community. With a pre-trained codebook (vocabulary), local features are quantized to visual words. An image can thus be represented in a form similar to a document, and classic weighting and indexing schemes can be leveraged.
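The quantize-and-count idea behind the BoW vector can be sketched in a few lines of NumPy. This is a toy sketch: the random descriptors and codebook stand in for real SIFT descriptors and a trained vocabulary.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors to their nearest visual words and
    accumulate a word-count histogram (the BoW vector)."""
    # squared distances between each descriptor and each visual word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                 # hard assignment
    K = codebook.shape[0]
    return np.bincount(words, minlength=K).astype(float)

# toy example: 100 SIFT-like 128-dim descriptors, codebook of 8 words
rng = np.random.default_rng(0)
descs = rng.normal(size=(100, 128))
codebook = rng.normal(size=(8, 128))
h = bow_histogram(descs, codebook)
```

The histogram `h` is the global vector on which TF-IDF weighting and inverted indexing (discussed later) operate.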

In recent years, the popularity of SIFT-based models seems to have been overtaken by the convolutional neural network (CNN), a hierarchical structure that has been shown to outperform hand-crafted features in many vision tasks. In retrieval, competitive performance compared to BoW models has been reported, even with short CNN vectors. CNN-based retrieval models usually compute compact representations and employ the Euclidean distance or approximate nearest neighbor (ANN) search methods for retrieval. Current literature may directly employ pre-trained CNN models or perform fine-tuning for specific retrieval tasks. A majority of these methods feed the image into the network only once to obtain the descriptor. Some are based on patches which are passed to the network multiple times, in a manner similar to SIFT; we classify them as hybrid methods in this survey.

### Organization of This Paper

At this time of rapid change, this paper provides a comprehensive literature survey of both SIFT-based and CNN-based instance retrieval methods. We first present the categorization methodology in Section 2. We then describe the two major method types in Section 3 and Section 4, respectively. Section 5 summarizes comparisons between SIFT- and CNN-based methods on several benchmark datasets. In Section 6, we point out two possible future directions. Section 7 concludes this survey.

## Categorization Methodology

TABLE I: Major differences between various types of instance retrieval models. For SIFT-based methods, hand-crafted local invariant features are extracted, and according to the codebook sizes, different encoding and indexing strategies are leveraged. For CNN-based methods, pre-trained, fine-tuned CNN models and hybrid methods are the primary types; fixed-length compact vectors are usually produced, combined with approximate nearest neighbor (ANN) methods.

According to the different visual representations, this survey categorizes the retrieval literature into two broad types: SIFT-based and CNN-based. The SIFT-based methods are further organized into three classes: using large, medium-sized or small codebooks. We note that the codebook size is closely related to the choice of encoding methods. The CNN-based methods are categorized into using pre-trained or fine-tuned CNN models, as well as hybrid methods. Their similarities and differences are summarized in Table I.

The SIFT-based methods were predominantly studied before 2012 [6] (good works have also appeared in recent years [18, 19]). This line of methods usually uses one type of detector, e.g., Hessian-Affine, and one type of descriptor, e.g., SIFT. Encoding maps a local feature into a vector. Based on the size of the codebook used during encoding, we classify SIFT-based methods into three categories, as below.

• Using small codebooks. The visual words are fewer than several thousand. Compact vectors are generated [15, 14] before dimension reduction and coding.
• Using medium-sized codebooks. Given the sparsity of BoW and the low discriminative ability of visual words, the inverted index and binary signatures are used [13]. The trade-off between accuracy and efficiency is a major influencing factor [20].
• Using large codebooks. Given the sparse BoW histograms and the high discriminative ability of visual words, the inverted index and memory-friendly signatures are used [21]. Approximate methods are used in codebook generation and encoding [11, 12].

The CNN-based methods extract features using CNN models. Compact (fixed-length) representations are usually built. There are three classes:

• Hybrid methods. Image patches are fed into the CNN multiple times for feature extraction [7]. Encoding and indexing are similar to SIFT-based methods [22].

• Using pre-trained CNN models. Features are extracted in a single pass through a CNN pre-trained on a large-scale dataset like ImageNet [23]. Compact encoding/pooling techniques are then used [9, 10].

• Using fine-tuned CNN models. The CNN model (e.g., pre-trained on ImageNet) is fine-tuned on a training set in which the images share similar distributions with the target database [8]. CNN features can be extracted in an end-to-end manner through a single pass to the CNN model. The visual representations exhibit improved discriminative ability [24, 17].

## SIFT-based Image Retrieval

### Pipeline

The pipeline of SIFT-based retrieval is introduced in Fig. 2. Suppose we have a gallery $\mathcal{G}$ consisting of $N$ images. Given a feature detector, we extract local descriptors from the regions around sparse interest points or dense patches. We denote the local descriptors of $D$ detected regions in an image as $\{f_i\}_{i=1}^D$, $f_i\in\mathbb{R}^p$. SIFT-based methods train a codebook offline. Each visual word in the codebook lies at the center of a subspace, called the Voronoi cell. A larger codebook corresponds to a finer partitioning, resulting in more discriminative visual words, and vice versa. Suppose a pool of local descriptors $\mathcal{F}\equiv\{f_i\}_{i=1}^M$ is computed from an unlabeled training set. The baseline approach, i.e., k-means, partitions the $M$ points into $K$ clusters; the $K$ visual words thus constitute a codebook of size $K$. A local descriptor $f_i \in\mathbb{R}^p$ is mapped into a feature embedding $g_i \in\mathbb{R}^l$ through the feature encoding process, $f_i \rightarrow g_i$. When k-means clustering is used, $f_i$ can be encoded according to its distances to the visual words. For large codebooks, hard and soft quantization are good choices. In the former, the resulting embedding $g_i$ has only one non-zero entry; in the latter, $f_i$ can be quantized to a small number of visual words. A global signature is produced after sum-pooling all the embeddings of local features. For medium-sized codebooks, additional binary signatures can be generated to preserve the original information. When using small codebooks, popular encoding schemes include the vector of locally aggregated descriptors (VLAD), the Fisher vector (FV), etc.
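The offline codebook training step above can be sketched with a minimal flat k-means (Lloyd's algorithm) in NumPy; real systems use far larger training pools and approximate variants, so this is illustrative only.

```python
import numpy as np

def kmeans_codebook(features, K, iters=10, seed=0):
    """Flat k-means (Lloyd's algorithm): each of the K centroids
    becomes a visual word at the center of a Voronoi cell."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), K, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest centroid
        d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # move each centroid to the mean of its members
        for k in range(K):
            members = features[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers

rng = np.random.default_rng(1)
train = rng.normal(size=(500, 16))   # M=500 training descriptors, p=16
C = kmeans_codebook(train, K=4)
```

Each row of `C` is one visual word; quantizing a descriptor then amounts to a nearest-row lookup.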

Fig. 2: A general pipeline of SIFT- and CNN-based retrieval models. Features are computed from hand-crafted detectors for SIFT, and densely applied filters or image patches for CNN. In both methods, under small codebooks, encoding/pooling is employed to produce compact vectors. In SIFT-based methods, the inverted index is necessary under large/medium-sized codebooks. The CNN features can also be computed in an end-to-end way using fine-tuned CNN models.

### Local Feature Extraction

Local invariant features aim at accurate matching of local structures between images [26]. SIFT-based methods usually share a similar feature extraction step composed of a feature detector and a descriptor.

Local detector. Interest point detectors aim to reliably localize a set of stable local regions under various imaging conditions. In the retrieval community, finding affine-covariant regions has been preferred. Such detectors are called “covariant” because the shapes of the detected regions change with the affine transformations, so that the region content (descriptors) can be invariant. This kind of detector differs from keypoint-centric detectors such as the Hessian detector [27], and from those focusing on scale-invariant regions such as the difference of Gaussians (DoG) detector [5]. Affine detectors produce elliptical regions which are adapted to the local intensity patterns. This ensures that the same local structure is covered under deformations caused by viewpoint variations, a problem often encountered in instance retrieval. In the milestone work [3], the Maximally Stable Extremal Region (MSER) detector [28] and the affine-extended Harris-Laplace detector are employed, both of which are affine-invariant region detectors. MSER is used in several later works [29, 11]. Starting from [12], the Hessian-affine detector [30] has been widely adopted in retrieval. It has been shown to be superior to the DoG detector [13, 31], due to its advantage in reliably detecting local structures under large viewpoint changes. To fix the orientation ambiguity of these affine-covariant regions, the gravity assumption is made [32]. This practice, which dismisses the orientation estimation, is employed by later works [33, 34] and demonstrates consistent improvement on architecture datasets where the objects are usually upright. Other non-affine detectors have also been tested in retrieval, such as the Laplacian of Gaussian (LoG) and Harris detectors used in [35]. For objects with smooth surfaces [36], few interest points can be detected, so the object boundaries are good candidates for local description.

On the other hand, some works employ dense region detectors. In a comparison between densely sampled image patches and detected patches, Sicre et al. [37] report the superiority of the former. To recover the rotation invariance of dense sampling, the dominant angle of patches is estimated in [38]. A comprehensive comparison of various dense sampling strategies, interest point detectors, and those in between can be found in [39].

Local descriptor. With a set of detected regions, descriptors encode the local content. SIFT [5] has been the default descriptor. The 128-dim vector has been shown to outperform competing descriptors in matching accuracy [40]. In an extension, PCA-SIFT [41] reduces the dimension from 128 to 36 to speed up matching, at the cost of more time in feature computation and some loss of distinctiveness. Another improvement is RootSIFT [33], calculated in two steps: 1) $\ell_1$-normalize the SIFT descriptor, 2) take the square root of each element. RootSIFT is now used routinely in SIFT-based retrieval. Apart from SIFT, SURF [42] is also widely used. It combines the Hessian-Laplace detector with a local descriptor of local gradient histograms, and uses the integral image for acceleration. SURF has matching accuracy comparable to SIFT and is faster to compute. See [43] for comparisons between SIFT, PCA-SIFT, and SURF. To further accelerate matching, binary descriptors [44] replace the Euclidean distance with the Hamming distance during matching.
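The two RootSIFT steps can be written directly in NumPy; the random non-negative 128-dim input below merely stands in for a real SIFT descriptor.

```python
import numpy as np

def root_sift(sift, eps=1e-12):
    """RootSIFT: l1-normalize the SIFT vector, then take the
    element-wise square root (SIFT entries are non-negative)."""
    sift = sift / (np.abs(sift).sum() + eps)
    return np.sqrt(sift)

desc = np.random.default_rng(0).uniform(0, 1, size=128)  # stand-in SIFT
r = root_sift(desc)
```

A useful side effect: since the input is l1-normalized, the RootSIFT vector has unit l2 norm, so the Euclidean distance between RootSIFT vectors corresponds to the Hellinger kernel on the original SIFT histograms.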

Apart from hand-crafted descriptors, some works also propose learning schemes to improve the discriminative ability of local descriptors. For example, Philbin et al. [45] propose a non-linear transformation so that the projected SIFT descriptor yields smaller distances for true matches. Simonyan et al. [34] improve this process by learning both the pooling region and a linear descriptor projection.

### Retrieval Using Small Codebooks

A small codebook has several thousand, several hundred, or fewer visual words, so the computational complexity of codebook generation and encoding is moderate. Representative works include BoW, VLAD and FV. We mainly discuss VLAD and FV, and refer readers to the literature for a comprehensive evaluation of BoW compact vectors.

#### Codebook Generation

Clustering complexity depends heavily on the codebook size. In works based on VLAD or FV, the codebook sizes are typically small, e.g., 64, 128, or 256. For VLAD, flat k-means is employed for codebook generation. For FV, a Gaussian mixture model (GMM), i.e., $u_{\lambda}(x) = \sum_{i=1}^K w_i u_i(x)$, where $K$ is the number of Gaussian mixtures, is trained using maximum likelihood estimation. The GMM describes the feature space with a mixture of $K$ Gaussian distributions and can be denoted as $\lambda = \{w_i, \mu_i, \Sigma_i,\ i = 1,\dots,K\}$, where $w_i$, $\mu_i$ and $\Sigma_i$ represent the mixture weight, mean vector and covariance matrix of Gaussian $u_i$, respectively.

#### Encoding

Due to the small codebook size, relatively complex and information-preserving encoding techniques can be applied. We mainly describe FV, VLAD and their improvements in this section. With a pre-trained GMM model, FV describes the averaged first- and second-order differences between local features and the GMM centers. Its dimension is $2pK$, where $p$ is the dimension of the local descriptors and $K$ is the codebook size of the GMM. FV usually undergoes power normalization to suppress the burstiness problem (to be described in the section on feature weighting). In this step, each component of the FV undergoes a non-linear transformation parameterized by $\alpha$: $x_i := \operatorname{sign}(x_i)|x_i|^\alpha$. Then $\ell_2$ normalization is employed. FV has subsequently been improved from different aspects. For example, Koniusz et al. augment each descriptor with its spatial coordinates and associated tunable weights. Larger codebooks (up to 4,096) have been generated and demonstrate superior classification accuracy to smaller codebooks, at the cost of computational efficiency. To correct the assumption that local regions are identically and independently distributed (iid), Cinbis et al. propose non-iid models that discount the burstiness effect and improve over power normalization. The VLAD encoding scheme proposed by Jégou et al. can be thought of as a simplified version of FV. It quantizes a local feature to its nearest visual word in the codebook and records the difference between them. Nearest neighbor search is affordable because of the small codebook size. The residual vectors are then aggregated by sum pooling, followed by normalizations. The dimension of VLAD is $pK$. Comparisons of some important encoding techniques are presented in the literature. Again, improvements to VLAD come from multiple aspects.
Jégou and Chum suggest the use of PCA and whitening (denoted as PCA$_w$ in the comparison tables) to de-correlate visual word co-occurrences, and the training of multiple codebooks to reduce quantization loss. Arandjelović et al. extend VLAD in three aspects: 1) normalizing the residual sum within each coarse cluster, called intra-normalization, 2) vocabulary adaptation to address the dataset transfer problem, and 3) multi-VLAD for small object discovery.
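The VLAD pipeline described above (nearest-word residuals, sum pooling, power-law and l2 normalization) can be sketched with toy NumPy data; a real system would use RootSIFT descriptors and a trained codebook.

```python
import numpy as np

def vlad(descriptors, codebook, alpha=0.5):
    """VLAD: sum the residuals to the nearest visual word per Voronoi
    cell, then apply signed power-law and l2 normalization."""
    K, p = codebook.shape
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nn = d2.argmin(axis=1)                  # nearest visual word per feature
    v = np.zeros((K, p))
    for k in range(K):
        members = descriptors[nn == k]
        if len(members):
            v[k] = (members - codebook[k]).sum(axis=0)
    v = v.ravel()                           # pK-dimensional vector
    v = np.sign(v) * np.abs(v) ** alpha     # power normalization
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(2)
x = vlad(rng.normal(size=(50, 8)), rng.normal(size=(4, 8)))  # K=4, p=8
```

Setting `alpha=0.5` reproduces the common signed-square-root variant; the same normalization step applies to FV components.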

Concurrently, Delhumeau et al. propose to normalize each residual vector instead of the residual sums; they also advocate local PCA within each Voronoi cell, which does not perform dimension reduction. A more recent work employs soft assignment and empirically learns optimal weights for each rank to improve over hard quantization. Note that some general techniques benefit various embedding methods, such as VLAD, FV, BoW, locality-constrained linear coding (LLC) and monomial embeddings. To improve the discriminative ability of embeddings, Tolias et al. propose the orientation-covariant embedding, which encodes the dominant orientation of the SIFT regions jointly with the SIFT descriptor. It achieves a covariance property similar to weak geometric consistency (WGC) by using geometric cues within regions of interest, so that matching points with similar dominant orientations are up-weighted, and vice versa. The triangulation embedding considers only the direction, instead of the magnitude, of the input vectors. Jégou et al. also present a democratic aggregation that limits the interference between the mapped vectors. Bearing a similar idea to democratic aggregation, Murray and Perronnin propose generalized max pooling (GMP), optimized by equalizing the similarity between the pooled vector and each coding representation.

The computational complexity of BoW, VLAD and FV is similar. We neglect the offline training and SIFT extraction steps. During visual word assignment, each feature computes its distance (or soft assignment coefficient) to all the visual words (or Gaussians) for VLAD (or FV), so this step has a complexity of $\mathcal{O}(pK)$. The other steps do not exceed $\mathcal{O}(pK)$. Considering the sum-pooling of the embeddings, the encoding process has an overall complexity of $\mathcal{O}(pKD)$, where $D$ is the number of features in an image. Triangulation embedding, a variant of VLAD, has a similar complexity. The complexity of multi-VLAD is also $\mathcal{O}(pKD)$, but it has a more costly matching process. Hierarchical VLAD has a complexity of $\mathcal{O}(pKK'D)$, where $K'$ is the size of the secondary codebook. In the aggregation stage, both GMP and democratic aggregation have high complexity. The complexity of GMP is $\mathcal{O}(\frac{P^2}{K})$, where $P$ is the dimension of the feature embedding, while the computational cost of democratic aggregation comes from the Sinkhorn algorithm.

#### ANN Search

Due to the high dimensionality of the VLAD/FV embeddings, efficient compression and ANN search methods have been employed [61, 62]. For example, principal component analysis (PCA) is usually adopted for dimension reduction, and it has been shown that retrieval accuracy can even increase after PCA [53]. For hashing-based ANN methods, Perronnin et al. [47] use standard binary encoding techniques such as locality sensitive hashing [63] and spectral hashing [64]. Nevertheless, when tested on SIFT and GIST feature datasets, spectral hashing is outperformed by Product Quantization (PQ) [61]. Among quantization-based ANN methods, PQ has also been demonstrated to be better than other popular methods such as FLANN [62]. A detailed discussion of VLAD and PQ can be found in [65]. PQ has since been improved in a number of works. In [66], Douze et al. propose re-ordering the cluster centroids so that adjacent centroids have small Hamming distances. This method is compatible with Hamming distance based ANN search, which offers significant speedup for PQ. We refer readers to [67] for a survey of ANN approaches.
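The core PQ idea, splitting a vector into sub-vectors and quantizing each against its own small codebook, can be sketched as below. The random sub-codebooks are stand-ins; in practice each one is trained with k-means on its subspace.

```python
import numpy as np

def pq_encode(x, sub_codebooks):
    """Product quantization: split the vector into m sub-vectors and
    quantize each against its own codebook; the code is m centroid ids."""
    m = len(sub_codebooks)
    subs = np.split(x, m)
    return [int(((cb - s) ** 2).sum(axis=1).argmin())
            for s, cb in zip(subs, sub_codebooks)]

def pq_decode(code, sub_codebooks):
    """Approximate reconstruction: concatenate the chosen centroids."""
    return np.concatenate([cb[i] for i, cb in zip(code, sub_codebooks)])

rng = np.random.default_rng(3)
# hypothetical "trained" sub-codebooks: m=4 subspaces, 16 centroids each
books = [rng.normal(size=(16, 8)) for _ in range(4)]
vec = rng.normal(size=32)
code = pq_encode(vec, books)
recon = pq_decode(code, books)
```

Here a 32-dim float vector is stored as only 4 small integers (16 bits total with 16 centroids per subspace), which is the source of PQ's memory savings.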

We also mention an emerging ANN technique, i.e., group testing [68], [69], [70]. In a nutshell, the database is decomposed into groups, each represented by a group vector. Comparisons between the query and group vectors reveal how likely a group contains a true match. Since group vectors are much fewer than the database vectors, search time is reduced. Iscen et al. [69] propose to directly find the best group vectors summarizing the database without explicitly forming the groups, which reduces the memory consumption.
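A minimal sketch of the group-testing idea follows; group vectors here are plain sums of the member vectors, whereas the cited methods [68, 69, 70] use more elaborate group constructions and decoding.

```python
import numpy as np

def group_scores(query, database, groups):
    """Group testing sketch: each group is summarized by the sum of its
    member vectors; query-to-group similarities indicate which groups
    likely contain a true match, so only those are searched exhaustively."""
    gvecs = np.stack([database[idx].sum(axis=0) for idx in groups])
    return gvecs @ query

rng = np.random.default_rng(4)
db = rng.normal(size=(8, 16))
db /= np.linalg.norm(db, axis=1, keepdims=True)
q = db[5] + 0.05 * rng.normal(size=16)    # query near database item 5
groups = [[0, 1, 2, 3], [4, 5, 6, 7]]     # 8 vectors -> 2 group vectors
scores = group_scores(q, db, groups)
```

Only 2 group comparisons replace 8 vector comparisons in the first stage, which is where the search-time reduction comes from.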

Fig. 3: Two milestone clustering methods (a) hierarchical k-means (HKM) [11] and (b) approximate k-means (AKM) [12] for large codebook generation. Bold borders and blue discs are the clustering boundaries and centers of the first layer of HKM. Slim borders and red squares are the final clustering results in both methods.

### Retrieval Using Large Codebooks

A large codebook may contain 1 million visual words or more. Some major steps undergo important changes compared with using small codebooks.

#### Codebook Generation

Approximate methods are critical for assigning data to a large number of clusters. In the retrieval community, two representative works are hierarchical k-means (HKM) and approximate k-means (AKM), as illustrated in Fig. 1 and Fig. 3. Proposed in 2006, HKM applies standard k-means to the training features hierarchically. It first partitions the points into a few clusters (e.g., $\bar{k}\ll K$) and then recursively partitions each cluster further. In every recursion, each point is assigned to one of the $\bar{k}$ clusters, with the depth of the cluster tree being $\mathcal{O}(\log K)$, where $K$ is the target cluster number. The computational cost of HKM is therefore $\mathcal{O}(\bar{k}M\log K)$, where $M$ is the number of training samples. This is much smaller than the complexity of flat k-means, $\mathcal{O}(MK)$, when $K$ is large (a large codebook). The other milestone in large codebook generation is AKM. This method indexes the $K$ cluster centers using a forest of random $k$-d trees so that the assignment step can be performed efficiently with ANN search. In AKM, the cost of assignment can be written as $\mathcal{O}(K\log K + vM\log K) = \mathcal{O}(vM\log K)$, where $v$ is the number of nearest cluster candidates to be accessed in the $k$-d trees. So the computational complexity of AKM is on par with HKM and is significantly smaller than flat k-means when $K$ is large. Experiments show that AKM is superior to HKM due to its lower quantization error (see the feature encoding section below). In most AKM-based methods, the default choice for ANN search is FLANN.
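The recursive partitioning behind HKM can be sketched as follows. This is a minimal NumPy sketch, not the vocabulary-tree implementation of [11]: it splits the data $\bar{k}$ ways per level, and the leaf means stand in for the final visual words (so each point is touched only $\mathcal{O}(\log K)$ times instead of $K$ times).

```python
import numpy as np

def hkm(features, k_bar, depth, seed=0):
    """Hierarchical k-means: recursively split the data into k_bar
    clusters, giving up to k_bar**depth leaf visual words."""
    rng = np.random.default_rng(seed)

    def kmeans(X, k, iters=5):
        C = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            a = ((X[:, None] - C[None]) ** 2).sum(2).argmin(1)
            C = np.stack([X[a == i].mean(0) if (a == i).any() else C[i]
                          for i in range(k)])
        a = ((X[:, None] - C[None]) ** 2).sum(2).argmin(1)
        return C, a

    def split(X, level):
        if len(X) == 0:
            return []
        if level == depth or len(X) <= k_bar:
            return [X.mean(0)]             # leaf: one visual word
        _, a = kmeans(X, k_bar)
        leaves = []
        for i in range(k_bar):
            leaves += split(X[a == i], level + 1)
        return leaves

    return np.stack(split(features, 0))

rng = np.random.default_rng(5)
words = hkm(rng.normal(size=(400, 8)), k_bar=3, depth=2)  # up to 9 words
```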

#### Feature Encoding (Quantization)

Feature encoding is interleaved with codebook clustering, because ANN search is critical in both components. The ANN techniques used in classic methods such as AKM and HKM can be applied in both the clustering and encoding steps. Under a large codebook, the key trade-off is between quantization error and computational complexity. Information-preserving encoding methods such as FV [14] and sparse coding [73] are mostly infeasible in the encoding step due to their computational complexity. How to reduce the quantization error while keeping quantization efficient therefore remains a challenging problem.

Among the ANN methods, the earliest solution is to quantize a local feature along the hierarchical tree structure [11]. Quantized tree nodes at different levels are assigned different weights. However, due to the highly imbalanced tree structure, this method is outperformed by the k-d tree based quantization method [12]: one visual word is assigned to each local feature, using a k-d tree built from the codebook for fast ANN search. Improving on this hard quantization scheme, Philbin et al. [25] propose soft quantization, which quantizes a feature into several nearest visual words. The weight of each assigned visual word relates negatively to its distance from the feature by $\exp(-\frac{d^2}{2\sigma^2})$, where $d$ is the distance between the descriptor and the cluster center. While soft quantization is based on the Euclidean distance, Mikulik et al. propose to find relevant visual words for each visual word through an unsupervised set of matching features. Built on a probabilistic model, these alternative words tend to contain descriptors of matching features. To reduce the memory cost of soft quantization and the number of query visual words, Cai et al. suggest that when a local feature is far away from even the nearest visual word, the feature can be discarded without a performance drop. To further accelerate quantization, scalar quantization suggests that local features be quantized without an explicitly trained codebook. A floating-point vector is binarized, and the first dimensions of the resulting binary vector are directly converted to a decimal number as a visual word. Facing large quantization error and low recall, scalar quantization uses bit-flipping to generate hundreds of visual words for a local feature.
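The soft quantization weighting can be illustrated directly; the toy distances below and the `n_words`/`sigma` values are free parameters chosen for the example.

```python
import numpy as np

def soft_assign(distances, n_words=3, sigma=1.0):
    """Soft quantization: keep the n nearest visual words, weight each
    by exp(-d^2 / (2 sigma^2)), and normalize the weights to sum to 1."""
    order = np.argsort(distances)[:n_words]
    w = np.exp(-distances[order] ** 2 / (2 * sigma ** 2))
    return order, w / w.sum()

d = np.array([0.9, 0.2, 1.5, 0.4, 2.0])   # distances to 5 visual words
words, weights = soft_assign(d)
```

The nearest word gets the largest weight and the weights decay smoothly with distance, which is what softens the hard-quantization boundary.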

#### Feature Weighting (TF-IDF)

The visual words in codebook $\mathcal{C}$ are typically assigned specific weights, called the term frequency and inverse document frequency (TF-IDF), which are integrated with the BoW encoding. TF is defined as:

$$\text{TF}(c_i^j) = o_i^j,$$
where $o_i^j$ is the number of occurrences of a visual word $c_i$ within an image $j$ . TF is thus a local weight. IDF, on the other hand, determines the contribution of a given visual word through global statistics. The classic IDF weight of visual word $c_i$ is calculated as:

$$\text{IDF}(c_i) = \log{\frac{N}{n_i}}, \quad \text{where}\ n_i = \sum_{j\in \mathcal{G}}\mathbb{1}(o_i^j > 0),$$
where $N$ is the number of gallery images, and $n_i$ encodes the number of images in which word $c_i$ appears. The TF-IDF weight for visual word $c_i$ in image $j$ is:

$$w(c_i^j) = \text{TF}(c_i^j)\,\text{IDF}(c_i).$$
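Putting the two weights together on a toy occurrence matrix (rows are images, columns are visual words) makes the behavior concrete: a word occurring in every image gets IDF 0 and contributes nothing to similarity.

```python
import numpy as np

def tf_idf(counts):
    """counts: (N images x K words) matrix of occurrences o_i^j.
    Returns the TF-IDF weighted BoW matrix w(c_i^j) = TF * IDF."""
    N = counts.shape[0]
    n = (counts > 0).sum(axis=0)            # images containing each word
    idf = np.log(N / np.maximum(n, 1))      # guard against unseen words
    return counts * idf

counts = np.array([[2, 1, 0],
                   [0, 1, 3],
                   [1, 1, 0]])              # N=3 images, K=3 words
W = tf_idf(counts)
```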

A major problem associated with visual word weighting is burstiness. It refers to the phenomenon whereby repetitive structures appear in an image; such structures tend to dominate the image similarity. Jégou et al. propose several TF variants to deal with burstiness. An effective strategy consists in exerting a square operation on TF. Instead of grouping features with the same word index, Revaud et al. propose detecting keypoint groups that frequently occur in irrelevant images and down-weighting them in the scoring function. While the above two methods detect bursty groups after quantization, Shi et al. propose detecting them at the descriptor stage. The detected bursty descriptors undergo average pooling and are fed into the BoW architectures. From the IDF side, Zheng et al. propose the $\mathcal{L}_p$-norm IDF to tackle burstiness, and Murata et al. design the exponential IDF, which is later incorporated into the BM25 formula. While most works try to suppress burstiness, Torii et al. view it as a distinguishing feature for architecture and design a new similarity measurement following burstiness detection. Another feature weighting strategy is feature augmentation on the database side. Methods of this kind construct an image graph offline, with edges indicating whether two images share the same object. In one approach, only features that pass geometric verification are preserved, which reduces the memory cost; the feature set of a base image is then augmented with all the visual words of its connecting images. This method is improved by adding only those visual words estimated to be visible in the augmented image, so that noisy visual words are excluded.

#### The Inverted Index 倒排

The inverted index is designed to enable efficient storage and retrieval and is usually used with large/medium-sized codebooks. Its structure is illustrated in Fig. inv_index. The inverted index is a one-dimensional structure in which each entry corresponds to a visual word in the codebook. An inverted list is attached to each word entry, and the items indexed in each inverted list are called indexed features or postings. The inverted index takes advantage of the sparse nature of the visual word histogram under a large codebook.
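The structure described above can be sketched in a few lines (a simplified illustration, assuming each posting stores only the image ID and term frequency, as in the BoW baseline; the class and method names are ours):

```python
from collections import defaultdict

class InvertedIndex:
    """Minimal inverted index: one entry per visual word, each holding a
    list of postings (image_id, term_frequency)."""

    def __init__(self):
        self.entries = defaultdict(list)  # word_id -> inverted list

    def add_image(self, image_id, word_histogram):
        # Only non-zero words are stored, exploiting histogram sparsity.
        for word_id, tf in word_histogram.items():
            if tf > 0:
                self.entries[word_id].append((image_id, tf))

    def candidates(self, query_words):
        # Traverse only the inverted lists of the query's visual words;
        # images sharing no word with the query are never touched.
        scores = defaultdict(float)
        for word_id in query_words:
            for image_id, tf in self.entries[word_id]:
                scores[image_id] += tf
        return scores
```

At query time, only the short inverted lists of the query's visual words are visited, which is what makes retrieval sub-linear in the database size.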

In the literature, new retrieval methods are usually required to be compatible with the inverted index. In the baseline, the image ID and term frequency (TF) are stored in a posting. When other information is integrated, it should be small in size. For example, some methods quantize metadata such as the descriptor contextual weight, descriptor density, mean relative log scale, and mean orientation difference in each posting. Similarly, quantized spatial information such as the orientation can also be stored. In co-indexing, when the inverted index is enlarged with globally consistent neighbors, semantically isolated images are deleted to reduce memory consumption. The original one-dimensional inverted index has also been expanded to two dimensions for ANN search, learning a codebook for each SIFT sub-vector; this structure was later applied to instance retrieval to fuse local color and SIFT descriptors.


### Retrieval Using Medium-sized Codebooks 使用中等规模编码本进行检索

Medium-sized codebooks refer to those having 10k–200k visual words. The visual words exhibit medium discriminative ability, and the inverted index is usually constructed.

#### Codebook Generation and Quantization 编码本的生成与量化

Considering the relatively small computational cost compared with large codebooks (Section codebook_large), flat k-means can be adopted for codebook generation. It has also been shown that using AKM for clustering yields very competitive retrieval accuracy. For quantization, nearest neighbor search can be used to find the nearest visual words in the codebook; in practice, even approximate nearest neighbor algorithms produce competitive retrieval results. Compared with the extensive study on quantization under large codebooks (Section quantization_large), relatively few works focus on the quantization problem under a medium-sized codebook.

#### Hamming Embedding and Its Improvements 汉明嵌入算法及其改进

The discriminative ability of visual words in medium-sized codebooks lies in between that of small and large codebooks, so it is important to compensate for the information loss caused by quantization. To this end, a milestone work, i.e., Hamming embedding (HE), has been widely employed. Proposed by Jégou et al., HE greatly improves the discriminative ability of visual words under medium-sized codebooks. HE first maps a SIFT descriptor $f\in\mathbb{R}^p$ from the $p$-dimensional space to a $p_b$-dimensional space:


$$x = P\cdot f = (x_1,\dots,x_{p_b}),$$

where $P\in\mathbb{R}^{p_b\times p}$ is a projection matrix and $x$ is a low-dimensional vector. Matrix $P$ is obtained by generating a matrix of random Gaussian values, applying a QR factorization to it, and taking the first $p_b$ rows of the resulting orthogonal matrix. To binarize $x$, Jégou et al. propose to compute the median vector $\overline{x_i} = (\overline{x_{1,i}},\dots,\overline{x_{p_b, i}})$ of the projected vectors of the descriptors falling in each Voronoi cell $c_i$. Given a descriptor $f$ and its projected vector $x$, HE computes its visual word $c_t$, and the HE binary vector is computed as:

$$b_j(x) = \begin{cases} 1, & \text{if } x_j > \overline{x_{j,t}}, \\ 0, & \text{otherwise}, \end{cases}$$

where $b(x) = (b_1(x),...,b_{p_b}(x))$ is the resulting HE vector of dimension $p_b$. The binary signature $b(x)$ serves as a secondary check for feature matching: a pair of local features is regarded as a true match when two criteria are satisfied: 1) identical visual words and 2) a small Hamming distance between their HE signatures. An extension of HE estimates the matching strength between features $f_1$ and $f_2$ as a decreasing exponential function of the Hamming distance:


$$w_{HE}(f_1, f_2) = \exp(-\frac{\mathcal{H}(b(x_1), b(x_2))}{2\gamma^2}),$$

where $b(x_1)$ and $b(x_2)$ are the HE binary vectors of $f_1$ and $f_2$, respectively, $\mathcal{H}(\cdot,\cdot)$ computes the Hamming distance between two binary vectors, and $\gamma$ is a weighting parameter. As shown in Fig. performance_improvement, HE and its weighted version brought considerable accuracy improvements in 2008 and 2010, respectively.


Applications of HE include video copy detection, image classification and re-ranking. For example, in image classification, patch matching similarity is efficiently estimated by HE, which is integrated into a linear kernel-based SVM. In image re-ranking, Tolias et al. use lower HE thresholds to find strict correspondences which resemble those found by RANSAC, and the resulting image subset is more likely to contain true positives for query reformulation. Improvements over HE have been proposed in a number of works, especially from the view of the match kernel. To reduce the information loss on the query side, Jain et al. propose a vector-to-binary distance comparison, which exploits the vector-to-hyperplane distance while retaining the efficiency of the inverted index. Further, Qin et al. design a higher-order match kernel within a probabilistic framework and adaptively normalize the local feature distances by the distance distribution of false matches. This method is similar in spirit to one in which the word-word distance, instead of the feature-feature distance, is normalized according to the neighborhood distribution of each visual word. While the latter regularizes the average distance between a word and its neighbors to be almost constant, the idea of democratizing the contribution of individual embeddings has later been employed elsewhere. Tolias et al. show that VLAD and HE share similar natures and propose a new match kernel which trades off between local feature aggregation and feature-to-feature matching, using a similar matching function. They also demonstrate that using more bits (e.g., 128) in HE is superior to the original 64-bit scheme at the cost of decreased efficiency. Even more bits (256) have been used, but this method may be prone to relatively low recall.

Fig. 5: False match removal by (A) HE [13], (B) local-local feature fusion, and (C) local-global feature fusion.

### Other Important Issues 其他重要问题

#### Feature Fusion 特征融合

Local-local fusion. A problem with the SIFT feature is that only the local gradient description is provided; other discriminative information encoded in an image is not leveraged. In Fig. 5 (B), a pair of false matches cannot be rejected by HE due to their similarity in the SIFT space, but the fusion of other local (or regional) features may correct this problem. A good choice for local-local fusion is to couple SIFT with color descriptors. The usage of color-SIFT descriptors can partially address the trade-off between invariance and discriminative ability. Descriptors such as HSV-SIFT [94], HueSIFT [95] and OpponentSIFT [93] have been evaluated on several recognition benchmarks [93]. Both HSV-SIFT and HueSIFT are scale-invariant and shift-invariant. OpponentSIFT describes all the channels in the opponent color space using the SIFT descriptor and is largely robust to light color changes. In [93], OpponentSIFT is recommended when no prior knowledge about the datasets is available. In more recent works, binary color signatures are stored in the inverted index [96], [31]. Despite the good retrieval accuracy on some datasets, a potential problem is that intensive variation in illumination may compromise the effectiveness of color.

Local-global fusion. Local and global features describe images from different aspects and can be complementary. In Fig. 5 (C), when local (and regional) cues are not enough to reject a false match pair, it is effective to further incorporate visual information from a larger context scale. Early and late fusion are two possible ways. In early fusion, the image neighborhood relationship mined by global features such as FC8 in AlexNet [6] is fused into the SIFT-based inverted index [72]. In late fusion, Zhang et al. [97] build an offline graph for each type of feature, and the graphs are subsequently fused during the online query. In an improvement of [97], Deng et al. [98] add weakly supervised anchors to aid graph fusion. Both works operate at the rank level. For score-level fusion, automatically learned category-specific attributes are combined with pre-trained category-level information [99]. Zheng et al. [86] propose query-adaptive late fusion, extracting a number of features (local or global, good or bad) and weighting them in a query-adaptive manner.

#### Geometric Matching 几何学上的匹配问题

A frequent concern with the BoW model is the lack of geometric constraints among local features. Geometric verification can be used as a critical pre-processing step in various scenarios, such as query expansion [100, 101], feature selection [81], database-side feature augmentation [81], [33], large-scale object mining [102], etc. The most well-known method for global spatial verification is RANSAC [12]. It repeatedly computes affine transformations from sampled correspondences and verifies each by the number of inliers that fit the transformation. RANSAC is effective in re-ranking a subset of top-ranked images but has efficiency problems. Consequently, how to efficiently and accurately incorporate spatial cues into the SIFT-based framework has been extensively studied.

A good choice is to discover the spatial context among local features. For example, visual phrases [103, 104, 105, 106] are generated among individual visual words to provide stricter matching criteria. Visual word co-occurrences in the entire image are estimated [107] and aggregated [108], while in [109, 110], [29], visual word clusters within local neighborhoods are discovered. Visual phrases can also be constructed from adjacent image patches [103], random spatial partitioning [106], and localized stable regions [29] such as MSER [28].

Another strategy uses voting to check geometric consistency. In the voting space, a bin with a larger value is more likely to represent the true transformation. An important work is weak geometric consistency (WGC) [13], which focuses on the differences in scale and orientation between matched features. The space of differences is quantized into bins, and Hough voting is used to locate the subset of correspondences with similar scale or orientation differences. Many later works can be viewed as extensions of WGC. For example, the method of Zhang et al. [21] can be viewed as WGC using x, y offsets instead of scale and orientation. This method is invariant to object translations, but may be sensitive to scale and rotation changes due to the rigid coordinate quantization. To regain scale and rotation invariance, Shen et al. [111] quantize the angle and scale of the query region after applying several transformations; a drawback of [111] is that query time and memory cost are both increased. To enable efficient voting and alleviate quantization artifacts, Hough pyramid matching (HPM) [112] distributes the matches over a hierarchical partition of the transformation space. HPM trades off between flexibility and accuracy and is very efficient. Quantization artifacts can also be reduced by allowing a single correspondence to vote for multiple bins [113]. HPM and [113] are much faster than RANSAC and can be viewed as extending the weak geometric consistency proposed along with Hamming embedding [13] with rotation and scale invariance. In [114], a rough global estimate of orientation and scale changes is made by voting and used to verify the transformation obtained from the matched features. A recent method [115] combines the advantages of hypothesis-based methods such as RANSAC [12] and voting-based methods [112, 21, 113, 114]: possible hypotheses are identified by voting and later verified and refined. This method inherits efficiency from voting and supports query expansion, since it outputs an explicit transformation and a set of inliers.
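The WGC-style voting idea can be sketched as follows (a simplified illustration in the spirit of [13], not the authors' implementation; bin counts and the input format are our assumptions):

```python
from collections import Counter
import math

def wgc_vote(matches, scale_bins=8, angle_bins=8):
    """Weak-geometric-consistency sketch: each match votes in a quantized
    (log-scale difference, orientation difference) space; the strongest
    bin approximates the dominant transformation, and its vote count
    scores the image pair.

    matches: list of ((scale_q, angle_q), (scale_db, angle_db)) pairs
    for the matched local features (angles in radians).
    """
    votes = Counter()
    for (s1, a1), (s2, a2) in matches:
        ds = math.log2(s2 / s1)          # scale difference, in octaves
        da = (a2 - a1) % (2 * math.pi)   # orientation difference
        bin_s = int(round(ds)) % scale_bins
        bin_a = int(da / (2 * math.pi) * angle_bins) % angle_bins
        votes[(bin_s, bin_a)] += 1
    return max(votes.values()) if votes else 0
```

Geometrically consistent matches fall into the same bin and accumulate, while random false matches scatter across bins and contribute little to the final score.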

#### Query Expansion 拓展查询

As a post-processing step, query expansion (QE) significantly improves the retrieval accuracy. In a nutshell, a number of top-ranked images from the original rank list are employed to issue a new query which is in turn used to obtain a new rank list. QE allows additional discriminative features to be added to the original query, thus improving recall.

In instance retrieval, Chum et al. [100] were the first to exploit this idea. They propose average query expansion (AQE), which averages the features of the top-ranked images to issue the new query. Usually, spatial verification [12] is employed for re-ranking and for obtaining the ROIs from which the local features undergo average pooling. AQE is used by many later works [17, 10, 24] as a standard tool. The recursive AQE and the scale-band recursive QE are effective improvements but incur more computational cost [100]. Four years later, Chum et al. [101] improve QE from the perspectives of learning background confusers, expanding the query region and incremental spatial verification. In [33], a linear SVM is trained online using the top-ranked and bottom-ranked images as positive and negative training samples, respectively; the learned weight vector is used to compute the average query. Other important extensions include “hello neighbor” based on reciprocal neighbors [116], QE with rank-based weighting [111], Hamming QE [89] (see Section 3.5), etc.
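A minimal AQE sketch, assuming image-level descriptor vectors and skipping the spatial-verification step that [100] uses to select the top-ranked images (the function name and `top_k` default are ours):

```python
def average_query_expansion(query_vec, ranked_vecs, top_k=5):
    """AQE: average the query descriptor with the descriptors of its
    top-k retrieved images, then re-issue the expanded query."""
    pool = [query_vec] + ranked_vecs[:top_k]
    dim = len(query_vec)
    expanded = [sum(v[d] for v in pool) / len(pool) for d in range(dim)]
    # L2-normalize so the new query is comparable to database vectors
    norm = sum(x * x for x in expanded) ** 0.5 or 1.0
    return [x / norm for x in expanded]
```

The expanded query carries discriminative dimensions contributed by the verified neighbors, which is what improves recall on the second pass.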

#### Small Object Retrieval 小目标检索

Retrieving objects that cover a small portion of images is a challenging task due to 1) the few detected local features and 2) the large amount of background noise. The Instance Search task in the TRECVID campaign [117] and the task of logo retrieval are important venues/applications for this task.

Generally speaking, both TRECVID and logo retrieval can be tackled with similar pipelines. For keypoint-based methods, the spatial context among the local features is important to discriminate target objects from others, especially in the case of rigid objects. Examples include [118, 119, 120]. Other effective methods include burstiness handling [77] (discussed in Section 3.4.3) and considering the different inlier ratios between the query and target objects [121]. In the second type of methods, effective region proposals [122] or multi-scale image patches [123] can be used as object region candidates. In [123], a recent state-of-the-art method, a regional diffusion mechanism based on neighborhood graphs is proposed to further improve the recall of small objects.

## CNN-based Image Retrieval 基于 CNN 的图像检索系统

CNN-based retrieval methods have constantly been proposed in recent years and are gradually replacing the hand-crafted local detectors and descriptors. In this survey, CNN-based methods are classified into three categories: using pre-trained CNN models, using fine-tuned CNN models and hybrid methods. The first two categories compute the global feature with a single network pass, and the hybrid methods may require multiple network passes (see Fig. 2).

### Retrieval Using Pre-trained CNN Models 使用预训练 CNN 模型的图像检索系统

This type of method is efficient in feature computation due to the single-pass mode. Given its transfer nature, its success lies in the feature extraction and encoding steps. We will first describe commonly used datasets and networks for pre-training, and then the feature computation process.

#### Pre-trained CNN Models 预训练的 CNN 模型

Popular CNN architectures. Several CNN models serve as good choices for extracting features, including AlexNet, VGGNet, GoogleNet and ResNet, which are listed in Table CNN-models. Briefly, a CNN can be viewed as a set of non-linear functions composed of a number of layers such as convolution, pooling, non-linearities, etc. A CNN has a hierarchical structure: from bottom to top layers, the image undergoes convolution with filters, and the receptive fields of these filters increase. Filters in the same layer have the same size but different parameters. AlexNet, the earliest of these networks, has five convolutional layers and three fully connected (FC) layers; its first layer has 96 filters of size $11\times11\times3$ and its 5th layer has 256 filters of size $3\times 3\times 192$. Zeiler et al. observe that the filters are sensitive to certain visual patterns and that these patterns evolve from low-level bars in bottom layers to high-level objects in top layers. For low-level and simple visual stimuli, the CNN filters act like the detectors in local hand-crafted features, but for high-level and complex stimuli, the CNN filters have distinct characteristics that depart from SIFT-like detectors. AlexNet has been shown to be outperformed by newer networks such as VGGNet, which has the largest number of parameters among them. GoogleNet and ResNet won the ILSVRC 2014 and 2015 challenges, respectively, showing that CNNs are more effective with more layers. A full review of these networks is beyond the scope of this paper; we refer readers to the original publications for details.

Datasets for pre-training. Several large-scale recognition datasets are used for CNN pre-training. Among them, the ImageNet dataset [23] is the most commonly used. It contains 1.2 million images of 1,000 semantic classes and is usually thought of as being generic. Another data source for pre-training is the Places-205 dataset [129], which is twice as large as ImageNet but has five times fewer classes. It is a scene-centric dataset depicting various indoor and outdoor scenes. A hybrid dataset combining Places-205 and ImageNet has also been used for pre-training [129]. The resulting HybridNet is evaluated in [130, 131, 125, 126] for instance retrieval.

| models | size | # layers | training set | used in |
|---|---|---|---|---|
| OverFeat [132] | 144M | 6+3 | ImageNet | [7] |
| AlexNet [6] | 60M | 5+3 | ImageNet | [22, 133] |
| PlacesNet [129] | n.a | n.a | Places | [130, 131] |
| HybridNet [129] | n.a | n.a | ImageNet+Places | [130, 131] |
| VGGNet [124] | 138M | 13+3 | ImageNet | [10] |
| GoogleNet [125] | 11M | 22 | ImageNet | [9] |
| ResNet [126] | 44.6M | 101 | ImageNet | n.a |
TABLE II: Pre-trained CNN models that can be used.

The transfer issue. Comprehensive evaluations of various CNNs on instance retrieval have been conducted in several recent works [130, 131, 134]. The transfer effect is the major concern. It is considered in [130] that instance retrieval, as a target task, lies farthest from the source task, i.e., ImageNet classification. These studies reveal some critical insights into the transfer process. First, during model transfer, features extracted from different layers exhibit different retrieval performance. Experiments confirm that the top layers may exhibit lower generalization ability than the layers before them. For example, for AlexNet pre-trained on ImageNet, it is shown that FC6, FC7, and FC8 are in descending order regarding retrieval accuracy [130]. It is also shown in [134, 10] that the pool5 feature of AlexNet and VGGNet is even superior to FC6 when proper encoding techniques are employed. Second, the source training set affects retrieval accuracy on different datasets. For example, Azizpour et al. [130] report that HybridNet yields the best performance on Holidays after PCA. They also observe that AlexNet pre-trained on ImageNet is superior to PlacesNet and HybridNet on the Ukbench dataset [11], which contains common objects instead of architectures or scenes. So the similarity between the source and target data plays a critical role in instance retrieval when using a pre-trained CNN model.

#### Feature Extraction 特征提取

FC descriptors. The most straightforward idea is to extract the descriptor from the fully-connected (FC) layer of the network, e.g., the 4,096-dim FC6 or FC7 descriptor in AlexNet. The FC descriptor is generated after layers of convolutions with the input image, has a global receptive field, and thus can be viewed as a global feature. It yields fair retrieval accuracy under Euclidean distance and can be improved with power normalization. Many recent retrieval methods instead focus on local descriptors in the intermediate layers. In these methods, lower-level convolutional filters (kernels) are used to detect local visual patterns. Viewed as local detectors, these filters have a smaller receptive field and are densely applied over the entire image. Compared with the global FC feature, local detectors are more robust to image transformations such as truncation and occlusion, in ways that are similar to the local invariant detectors (Section local_feature_extraction).


Local descriptors are tightly coupled with these intermediate local detectors, i.e., they are the responses of the input image to the convolution operations. After the convolutions, the resulting activation maps can be viewed as a feature ensemble, which is called the column feature in this survey. For example, in AlexNet there are $n=96$ detectors (convolutional filters) in the 1st convolutional layer. These filters produce $n=96$ heat maps of size $27\times27$ (after max pooling). Each pixel in the maps has a receptive field of $19\times19$ and records the response of the image w.r.t. the corresponding filter. The column feature is therefore of size $1\times1\times96$ (Fig. pipeline) and can be viewed as a description of a certain patch in the original image. Each dimension of this descriptor denotes the level of activation of the corresponding detector and resembles the SIFT descriptor to some extent. The column feature was first used by Razavian et al., who max-pool over regularly partitioned windows on the feature maps and then concatenate the pooled values across all filter responses, yielding column-like features. Column features from multiple layers of the network can also be concatenated, forming the hypercolumn feature.
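The slicing of activation maps into column features can be illustrated in a few lines (a framework-agnostic sketch operating on nested lists rather than real network activations; the function name is ours):

```python
def column_features(feature_maps):
    """Slice a conv layer's activations into column features: one
    n-dimensional descriptor per spatial location, where n is the
    number of filters. feature_maps: n x H x W nested lists."""
    n = len(feature_maps)
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    columns = []
    for i in range(h):
        for j in range(w):
            # the 1 x 1 x n response vector at spatial position (i, j)
            columns.append([feature_maps[k][i][j] for k in range(n)])
    return columns  # H*W local descriptors, SIFT-like in role
```

For the AlexNet example above ($n=96$, $27\times27$ maps), this would yield $729$ column features of dimension 96, each describing one patch of the input image.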

#### Feature Encoding and Pooling 特征编码与池化

When column features are extracted, an image is represented by a set of descriptors. To aggregate these descriptors into a global representation, currently two strategies are adopted: encoding and direct pooling (Fig. pipeline).

Encoding. A set of column features resembles a set of SIFT features, so standard encoding schemes can be directly employed. The most commonly used methods are VLAD and FV; a brief review of both can be found in Section encoding_small. A milestone work encodes the column features into VLAD for the first time; this idea was later extended to CNN model fine-tuning. The BoW encoding can also be leveraged: the column features within each layer are aggregated into a BoW vector which is then concatenated across the layers. An exception to these fixed-length representations quantizes the column features with a codebook of size 25k and employs an inverted index for efficiency.
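The VLAD encoding mentioned above can be sketched as follows (a simplified, unoptimized illustration with hard assignment and a single L2 normalization; real implementations typically add PCA-whitening and intra-normalization):

```python
def vlad_encode(columns, centroids):
    """VLAD sketch: assign each column feature to its nearest centroid,
    accumulate residuals per centroid, then concatenate and L2-normalize."""
    k, d = len(centroids), len(centroids[0])
    residuals = [[0.0] * d for _ in range(k)]
    for x in columns:
        # hard-assign the descriptor to its nearest visual word
        nearest = min(range(k),
                      key=lambda c: sum((x[i] - centroids[c][i]) ** 2
                                        for i in range(d)))
        for i in range(d):
            residuals[nearest][i] += x[i] - centroids[nearest][i]
    flat = [v for r in residuals for v in r]  # k*d-dim image vector
    norm = sum(v * v for v in flat) ** 0.5 or 1.0
    return [v / norm for v in flat]
```

The output is a fixed-length $k\times d$ vector regardless of how many column features the image produced, which is what makes VLAD suitable as a global representation.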

Pooling. A major difference between the CNN column feature and SIFT is that the former has an explicit meaning in each dimension, i.e., the response of a particular region of the input image to a filter. Therefore, apart from the encoding schemes mentioned above, direct pooling techniques can produce discriminative features as well. A milestone work in this direction is the maximum activation of convolutions (MAC) proposed by Tolias et al.

Without distorting or cropping images, MAC computes a global descriptor with a single forward pass. Specifically, MAC calculates the maximum value of each intermediate feature map and concatenates all these values within a convolutional layer. In its multi-region version, the integral image and an approximate maximum operator are used for fast computation; the regional MAC descriptors are subsequently sum-pooled along with a series of normalization and PCA-whitening operations. Several other works employ similar ideas, applying max or average pooling on the intermediate feature maps, with Razavian et al. being the first. It has been observed that the last convolutional layer (e.g., pool5 in VGGNet), after pooling, usually yields accuracy superior to the FC descriptors and the other convolutional layers. Apart from direct feature pooling, it is also beneficial to assign specific weights to the feature maps within each layer before pooling. Babenko et al. propose injecting the prior knowledge that objects tend to be located toward image centers, imposing a 2-D Gaussian mask on the feature maps before sum pooling. Xie et al. improve the MAC representation by propagating the high-level semantics and spatial context to low-level neurons to improve the descriptive ability of these bottom-layer activations. With a more general weighting strategy, Kalantidis et al. perform both feature map-wise and channel-wise weighting, which aims to highlight the highly active spatial responses while reducing burstiness effects.
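The single-region MAC descriptor is simple enough to sketch directly (again on nested lists standing in for one layer's feature maps; normalization is included since MAC vectors are compared with cosine/Euclidean distance):

```python
def mac_descriptor(feature_maps):
    """MAC sketch: take the maximum activation of each feature map in a
    convolutional layer, concatenate the maxima, and L2-normalize."""
    vec = [max(max(row) for row in fmap) for fmap in feature_maps]
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]
```

Each dimension of the result records the strongest response of one filter anywhere in the image, so the descriptor is insensitive to where the responding pattern is located.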

### Image Retrieval with Fine-Tuned CNN Models 使用微调 CNN 模型的图像检索系统

Although pre-trained CNN models have achieved impressive retrieval performance, a hot research topic is fine-tuning the CNN model on specific training sets. When a fine-tuned CNN model is employed, the image-level descriptor is usually generated in an end-to-end manner, i.e., the network produces a final visual representation without additional explicit encoding or pooling steps.

#### Datasets for Fine-Tuning 用于微调网络的数据集

The nature of the datasets used in fine-tuning is the key to learning discriminative CNN features. ImageNet only provides images with class labels. So the pre-trained CNN model is competent in discriminating images of different object/scene classes, but may be less effective to tell the difference between images that fall in the same class (e.g., architecture) but depict different instances (e.g., Eiffel Tower and Notre-Dame). Therefore, it is important to fine-tune the CNN model on task-oriented datasets.

| name | # images | # classes | content |
|---|---|---|---|
| Landmarks [8] | 213,678 | 672 | Landmark |
| 3D Landmark [24] | 163,671 | 713 | Landmark |
| Tokyo TM [137] | 112,623 | n.a | Landmark |
| MV RGB-D [142] | 250,000 | 300 | Household object |
| Product [143] | 101,945×2 | n.a | Furniture |
TABLE III: Statistics of instance-level datasets having been used in fine-tuning.

The datasets that have been used for fine-tuning in recent years are shown in Table ft_datasets; buildings and common objects are the focus. The milestone work on fine-tuning collects the Landmarks dataset by a semi-automated approach: automated searching for popular landmarks in the Yandex search engine, followed by a manual estimation of the proportion of relevant images among the top ranks. This dataset contains 672 classes of various architectures, and the fine-tuned network produces superior features on landmark-related datasets such as Oxford5k and Holidays, but has decreased performance on Ukbench, where common objects are presented. Babenko et al. have also fine-tuned CNNs on the MV RGB-D dataset containing turntable views of 300 household objects, in order to improve performance on Ukbench. The Landmarks dataset is later used by Gordo et al. for fine-tuning, after an automatic cleaning approach based on SIFT matching. Radenović et al. employ retrieval and structure-from-motion methods to build 3D models so that images depicting the same architecture can be grouped. Using this labeled dataset, the learned linear discriminative projections (denoted as $L_w$ in Table compare_methods_datasets) outperform the previous whitening technique. The Tokyo TM dataset is collected using Google Street View Time Machine, which provides images depicting the same places over time. While most of the above datasets focus on landmarks, Bell et al. build a product dataset consisting of furniture by developing a crowd-sourced pipeline to draw connections between in-situ objects and the corresponding products. It is also feasible to fine-tune on the query sets, but this method may not be adaptable to new query types.

#### Networks in Fine-Tuning 微调的网络

The CNN architectures used in fine-tuning mainly fall into two types: classification-based networks and verification-based networks. The classification-based network is trained to classify architectures into pre-defined categories. Since there is usually no class overlap between the training set and the query images, the learned embedding, e.g., FC6 or FC7 in AlexNet, is used for Euclidean distance based retrieval. In this train/test strategy, the last FC layer is modified to have 672 nodes, corresponding to the number of classes in the Landmarks dataset. The verification network may either use a siamese network with a pairwise loss or use a triplet loss, and has been more widely employed for fine-tuning. A standard siamese network based on AlexNet and the contrastive loss has been employed, and Radenović et al. propose to replace the FC layers with a MAC layer. Moreover, with the 3D architecture models, training pairs can be mined: positive image pairs are selected based on the number of co-observed 3D points (matched SIFT features), while hard negatives are defined as those with small distances between their CNN descriptors. These image pairs are fed into the siamese network, and the contrastive loss is calculated from the $\ell_2$-normalized MAC features. In a concurrent work, Gordo et al. fine-tune a triplet-loss network and a region proposal network on the Landmarks dataset; the superiority of this approach lies in its localization ability, which excludes the background from feature learning and extraction. In both works, the fine-tuned models exhibit state-of-the-art accuracy on landmark retrieval datasets including Oxford5k, Paris6k and Holidays, as well as good generalization ability on Ukbench (Table compare_methods_datasets). In another line of work, a VLAD-like layer is plugged into the network at the last convolutional layer, which is amenable to training via back-propagation; meanwhile, a new triplet loss is designed to make use of the weakly supervised Google Street View Time Machine data.

### Hybrid CNN-based Methods 基于 CNN 模型的混合式方法

For hybrid methods, multiple network passes are performed. A number of image patches are generated from an input image and fed into the network for feature extraction, followed by an encoding/pooling stage. Since this detector + descriptor manner is similar to SIFT-based methods, we call this type of method hybrid. It is usually less efficient than the single-pass methods.

#### Feature Extraction 特征提取

In hybrid methods, the feature extraction process consists of patch detection and description steps. For the first step, the literature has seen three major types of region detectors. The first is grid image patches. For example, in [22], a two-scale sliding window strategy is employed to generate patches. In [7], the dataset images are first cropped and rotated, and then divided into patches of different scales, the union of which covers the whole image. The second type is invariant keypoint/region detectors. For instance, the difference of Gaussian feature points are used in [145]; the MSER region detector is leveraged in [146]. Third, region proposals also provide useful information on the locations of the potential objects. Mopuri et al. [147] employ selective search [148] to generate image patches, while EdgeBox [149] is used in [150]. In [144], the region proposal network (RPN) [151] is applied to locate the potential objects in an image.
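The grid-patch detector can be sketched as below. The scales and stride here are hypothetical choices for illustration, not those of any particular cited work.

```python
def grid_patches(width, height, scales=(128, 64), stride_frac=0.5):
    """Generate sliding-window patch boxes (x, y, side) at multiple scales.

    A simplified sketch of a two-scale grid strategy: square windows of each
    scale are slid over the image with a stride of half the window side.
    Scales larger than the image are skipped.
    """
    boxes = []
    for side in scales:
        if side > width or side > height:
            continue
        stride = max(1, int(side * stride_frac))
        for y in range(0, height - side + 1, stride):
            for x in range(0, width - side + 1, stride):
                boxes.append((x, y, side))
    return boxes
```

Each box would then be cropped, resized to the network's input size, and passed through the CNN to obtain one local descriptor per patch.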

The use of CNNs as region descriptors is validated in [146], showing that CNN features are superior to SIFT in image matching except on blurred images. Given the image patches, the hybrid CNN methods usually employ the FC or pooled intermediate CNN features. Examples using the FC descriptors include [22, 7, 152, 147]. In these works, the 4,096-dim FC features are extracted from multi-scale image regions [22, 7, 152] or object proposals [147]. On the other hand, Razavian et al. [133] also use the intermediate descriptors after max-pooling as region descriptors.

The above methods use pre-trained models for patch feature extraction. Based on hand-crafted detectors, patch descriptors can also be learned through CNNs in either a supervised [153] or an unsupervised [145] manner, which improves over the previous works on SIFT descriptor learning [45], [34]. Yi et al. [154] further propose an end-to-end learning method integrating the region detector, orientation estimator, and feature descriptor in a single pipeline.

#### Feature Encoding and Indexing

The encoding/indexing procedure of hybrid methods resembles SIFT-based retrieval, e.g., VLAD/FV encoding with a small codebook or the inverted index with a large codebook.

VLAD/FV encoding, as in [22, 147], follows the standard practice established for SIFT features [15, 14], so we do not detail it here. On the other hand, several works exploit the inverted index on patch-based CNN features [155, 156, 139]. Again, standard techniques from SIFT-based methods, such as HE, are employed [156]. Apart from the above strategies, we notice that several works [7, 133, 152] extract several region descriptors per image for many-to-many matching, called “spatial search” [7]. This method improves the translation and scale invariance of the retrieval system but may encounter efficiency problems. A reverse strategy to applying encoding on top of CNN activations is to build a CNN structure (mainly consisting of FC layers) on top of SIFT-based representations such as FV. By training a classification model on natural images, the intermediate FC layer can be used for retrieval [157].
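The VLAD step applied to patch descriptors can be sketched in plain Python as below (toy dimensions; real systems use codebooks of roughly 64-256 centroids learned with k-means):

```python
def vlad_encode(descriptors, centroids):
    """Aggregate local descriptors into one VLAD vector.

    Each descriptor is assigned to its nearest centroid; the residuals
    (descriptor minus centroid) are accumulated per centroid, concatenated,
    and L2-normalized into a single fixed-length vector.
    """
    k, d = len(centroids), len(centroids[0])
    residuals = [[0.0] * d for _ in range(k)]
    for x in descriptors:
        nearest = min(range(k), key=lambda j: sum(
            (xi - ci) ** 2 for xi, ci in zip(x, centroids[j])))
        for i in range(d):
            residuals[nearest][i] += x[i] - centroids[nearest][i]
    flat = [v for row in residuals for v in row]
    norm = sum(v * v for v in flat) ** 0.5
    return [v / norm for v in flat] if norm > 0 else flat
```

The output dimensionality is $k \times d$, which is then typically reduced by PCA before indexing.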

### Discussions 讨论

#### Relationship between SIFT- and CNN-based Methods 基于 SIFT 和 CNN 的方法间的关系

In this survey, we categorize the current literature into six fine-grained classes. The differences among the six categories and some representative works are summarized in Table I and Table V. Our observations are as follows.

First, the hybrid method can be viewed as a transition zone from SIFT- to CNN-based methods. It resembles the SIFT-based methods in all aspects except that it extracts CNN features as the local descriptor. Since the network is accessed multiple times during patch feature extraction, the efficiency of the feature extraction step may be compromised.

Second, the single-pass CNN methods tend to combine the individual steps in the SIFT-based and hybrid methods. In Table V, the “pre-trained single-pass” category integrates the feature detection and description steps; in the “fine-tuned single-pass” methods, the image-level descriptor is usually extracted in an end-to-end mode, so that no separate encoding process is needed. In [17], a “PCA” layer is integrated for discriminative dimension reduction, making a further step towards end-to-end feature learning.

Third, fixed-length representations are gaining popularity due to efficiency considerations. They can be obtained by aggregating local descriptors (SIFT or CNN) [15], [18, 22], [9], by direct pooling [147], [10], or by end-to-end feature computation [8, 17]. Usually, dimension reduction methods such as PCA can be employed on top of the fixed-length representations, and ANN search methods such as PQ [15] or hashing [47] can be used for fast retrieval.
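Product quantization compresses a fixed-length vector by splitting it into sub-vectors and quantizing each against its own small codebook. A toy sketch of the encoding step follows; the codebooks here are hand-made for illustration, whereas in practice they are trained with k-means per sub-space.

```python
def pq_encode(vec, codebooks):
    """Encode a vector as one codeword index per sub-space.

    codebooks[j] holds the codewords for the j-th sub-vector; with m
    sub-spaces of 256 codewords each, a vector is stored in m bytes.
    """
    m = len(codebooks)
    d_sub = len(vec) // m
    codes = []
    for j in range(m):
        sub = vec[j * d_sub:(j + 1) * d_sub]
        idx = min(range(len(codebooks[j])),
                  key=lambda i: sum((a - b) ** 2
                                    for a, b in zip(sub, codebooks[j][i])))
        codes.append(idx)
    return codes
```

At search time, distances between a query and the stored codes are approximated from per-sub-space lookup tables, which is what makes PQ-based ANN search fast.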

#### Hashing and Instance Retrieval 哈希与实例检索

Hashing is a major solution to the approximate nearest neighbor problem. It can be categorized into locality sensitive hashing (LSH) and learning to hash. LSH is data-independent and is usually outperformed by learning to hash, a data-dependent approach. For learning to hash, a recent survey categorizes the methods into quantization and pairwise similarity preserving. The quantization methods are briefly discussed in the section on small codebooks. Among the pairwise similarity preserving methods, popular hand-crafted ones include spectral hashing, LDA hashing, etc. Recently, hashing has seen a major shift from hand-crafted methods to supervised hashing with deep neural networks. These methods take the original image as input and produce a learned feature before binarization. Most of these methods, however, focus on class-level image retrieval, a task different from the instance retrieval discussed in this survey. For instance retrieval, when adequate training data can be collected, as for architectures and pedestrians, the deep hashing methods may be of critical importance.
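As a minimal sketch of the LSH idea, the random-hyperplane (SimHash-style) scheme maps a real-valued descriptor to a binary signature; the bit length and seed below are arbitrary assumptions for the example.

```python
import random

def make_hyperplanes(n_bits, dim, seed=0):
    """Draw n_bits random Gaussian hyperplanes for sign-based hashing."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_bits)]

def lsh_signature(vec, hyperplanes):
    """One bit per hyperplane: the sign of the dot product with the vector."""
    return tuple(1 if sum(v * h for v, h in zip(vec, hp)) >= 0 else 0
                 for hp in hyperplanes)
```

Vectors with a small angle between them tend to agree on most bits (each bit collides with probability $1 - \theta/\pi$), so Hamming distance on signatures approximates angular distance.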

## Experimental Comparisons 实验比较

### Image Retrieval Datasets 图像检索数据集

Five popular instance retrieval datasets are used in this survey. Statistics of these datasets are shown in Table IV.

| name | # images | # queries | content |
| - | - | - | - |
| Holidays [13] | 1,491 | 500 | scene |
| Ukbench [11] | 10,200 | 10,200 | common objects |
| Paris6k [25] | 6,412 | 55 | buildings |
| Oxford5k [12] | 5,062 | 55 | buildings |
| Flickr100k [25] | 99,782 | - | from Flickr’s popular tags |

TABLE IV: Statistics of popular instance-level datasets.

Holidays [13] is collected by Jégou et al. from personal holiday albums, so most of the images are of various scene types. The database has 1,491 images composed of 500 groups of similar images. Each image group has 1 query, totaling 500 query images. Most SIFT-based methods employ the original images, except [71, 32] which manually rotate the images into upright orientations. Many recent CNN-based methods [140], [137], [16] also use the rotated version of Holidays. In Table V, results of both versions of Holidays are shown (separated by “/”). Rotating the images usually brings 2-3% mAP improvement.

Ukbench [11] consists of 10,200 images of various content, such as objects, scenes, and CD covers. All the images are divided into 2,550 groups. Each group has four images depicting the same object/scene, under various angles, illuminations, translations, etc. Each image in this dataset is taken as the query in turn, so there are 10,200 queries.

Oxford5k [12] is collected by crawling images from Flickr using the names of 11 different landmarks in Oxford. A total of 5,062 images form the image database. The dataset defines five queries for each landmark by hand-drawn bounding boxes, so that 55 query Regions of Interest (ROI) exist in total. Each database image is assigned one of four labels, good, OK, junk, or bad. The first two labels are true matches to the query ROIs, while “bad” denotes the distractors. In junk images, less than 25% of the objects are visible, or they undergo severe occlusion or distortion, so these images have zero impact on retrieval accuracy.

Flickr100k [25] contains 99,782 high resolution images crawled from Flickr’s 145 most popular tags. In literature, this dataset is typically added to Oxford5k to test the scalability of retrieval algorithms.


Paris6k [25] consists of 6,412 images crawled from Flickr using queries for 11 specific Paris landmarks. Each landmark has five queries, so there are also 55 queries with bounding boxes. The database images are annotated with the same four types of labels as Oxford5k. Two major evaluation protocols exist for Oxford5k and Paris6k. For the SIFT-based methods, the cropped regions are usually used as queries. For the CNN-based methods, some employ the full-sized query images [8, 137]; others follow the standard cropping protocol, either by cropping the ROI and feeding it into the CNN [16], or by extracting CNN features from the full image and selecting those falling in the ROI [144]. Using the full image may lead to mAP improvement. These protocols are noted in Table V.


TABLE V: Performance summarization of some representative methods of the six categories on the benchmarks. “+100k” –> the addition of Flickr100k to Oxford5k. “pw.” –> power-law normalization [14]. “MP” –> max pooling. “SP” –> sum pooling. * and parentheses –> results obtained with post-processing steps such as spatial verification or QE. § –> numbers estimated from the curves. † –> numbers reported by our implementation. For Holidays, results using the rotated images are presented after “/”. For Oxford5k (+100k) and Paris6k, results using the full-sized queries are shown after “/”. ♮ –> the full query image is fed into the network, but only the features whose centers fall into the query region of interest are aggregated. Note that in many fixed-length representations, ANN algorithms such as PQ are not used to report the results, but ANN can be readily applied after PCA during indexing.

Fig. 6: The state of the art over the years on the (a) Holidays, (b) Ukbench, and (c) Oxford5k datasets. Six fine-grained categories are summarized (see Section 2). For each year, the best accuracy of each category is reported. For the compact representations, results of 128-bit vectors are preferentially selected. The purple star denotes the results produced by 2,048-dim vectors [17], the best performance in fine-tuned CNN methods. Methods with a pink asterisk denote using rotated images on Holidays, full-sized queries on Oxford5k, or spatial verification and QE on Oxford5k (see Table V).

### Evaluation Metrics 评价指标

**Precision-recall.** Recall denotes the ratio of returned true matches to the total number of true matches in the database, while precision refers to the fraction of true matches among the returned images. Given a subset of $n$ returned images, assuming there are $n_p$ true matches among them and a total of $N_p$ true matches in the whole database, recall $@n$ ($r@n$) and precision $@n$ ($p@n$) are calculated as $\frac{n_p}{N_p}$ and $\frac{n_p}{n}$, respectively. In image retrieval, given a query image and its rank list, a precision-recall curve can be drawn through the (recall, precision) points $(r@1, p@1), (r@2, p@2), ..., (r@N, p@N)$, where $N$ is the number of images in the database. To record the retrieval performance more concisely, the average precision (AP) is used, which amounts to the area under the precision-recall curve. Typically, a larger AP means a higher precision-recall curve and thus better retrieval performance. Since retrieval datasets typically have multiple query images, their respective APs are averaged to produce the final performance measure, i.e., the mean average precision (mAP). Conventionally, we use mAP to evaluate retrieval accuracy on the Oxford5k, Paris6k, and Holidays datasets.
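The AP computation can be sketched as follows, assuming the rank list covers the whole database so that every true match appears in it:

```python
def average_precision(relevance):
    """AP from a binary relevance list ordered by rank.

    Averages precision@k over the positions k of the true matches,
    which approximates the area under the precision-recall curve.
    """
    hits, precisions = 0, []
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(rank_lists):
    """mAP: the mean of per-query APs."""
    return sum(average_precision(r) for r in rank_lists) / len(rank_lists)
```

For example, the rank list (match, miss, match) gives AP = (1/1 + 2/3) / 2.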


**N-S score.** The N-S score is specific to the Ukbench dataset and is named after David Nistér and Henrik Stewénius. Because each query in Ukbench has exactly four true matches in the database, the N-S score is equivalent to precision or recall at rank four: it is computed as the average number of true matches among the top-four results of the rank lists.
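Since every Ukbench query has exactly four true matches, the N-S score reduces to counting hits in the top four; a minimal pure-Python sketch:

```python
def ns_score(rank_lists):
    """N-S score: average number of true matches among the top-4 results.

    Each entry of rank_lists is a binary relevance list for one query,
    ordered by rank; the score ranges from 0 to 4 (perfect retrieval).
    """
    return sum(sum(rel[:4]) for rel in rank_lists) / len(rank_lists)
```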

Fig. 7: The impact of feature dimension on retrieval accuracy. Compact (fixed-length) representations are shown, i.e., SIFT small voc., hybrid CNN methods, pre-trained CNN methods, and fine-tuned CNN methods. Curves with a pink asterisk on the end indicates using rotated images or full-sized queries on Holidays and Oxford5k (see Table V), resp.

### Comparison and Analysis 比较和分析

#### Performance Improvement Over the Years 多年来性能的改进

We present the improvement in retrieval accuracy over the past ten years in Fig. 6 and the numbers of some representative methods in Table V. The results are computed using codebooks trained on independent datasets. We can clearly observe that the field of instance retrieval has been improving constantly. The baseline approach (HKM) proposed over ten years ago only yields a retrieval accuracy of 59.7%, 2.85, 44.3%, 26.6%, and 46.5% on Holidays, Ukbench, Oxford5k, Oxford5k+Flickr100k, and Paris6k, respectively. Starting from the baseline approaches, methods using large codebooks improve steadily as more discriminative codebooks, spatial constraints, and complementary descriptors are introduced. For medium-sized codebooks, the most significant accuracy advance was witnessed in the years 2008-2010 with the introduction of Hamming Embedding and its improvements. From then on, major improvements have come from feature fusion with color and CNN features, especially on the Holidays and Ukbench datasets.

| method type | feat. ext. eff. | retr. eff. | mem. eff. | train eff. | generic acc. | specific acc. |
| - | - | - | - | - | - | - |
| SIFT large voc. | fair | high | fair | fair | high | fair |
| SIFT mid voc. | fair | low | low | fair | high | high |
| SIFT small voc. | fair | high | high | high | low | low |
| CNN hybrid | low | varies | varies | varies | fair | fair |
| CNN pre-trained | high | high | high | high | high | fair |
| CNN fine-tuned | high | high | high | low | high | high |

TABLE VI: A summary of efficiency and accuracy comparison between different categories. Note: feature extraction time is estimated using CPUs and GPUs (as is usually done) for SIFT and CNN, resp. When using GPUs for SIFT extraction, the efficiency could be high as well.

On the other hand, CNN-based retrieval models have quickly demonstrated their strengths in instance retrieval. In 2012, when AlexNet was introduced, the performance of the off-the-shelf FC features was still far from satisfactory compared with the SIFT models of the same period. For example, the FC descriptor of AlexNet pre-trained on ImageNet yields 64.2% mAP, a 3.42 N-S score, and 43.3% mAP on the Holidays, Ukbench, and Oxford5k datasets, respectively. These numbers are lower than the best contemporary SIFT-based results by 13.85% and 0.14 on Holidays and Ukbench, respectively, and by 31.9% on Oxford5k. However, with the advances in CNN architectures and fine-tuning strategies, the performance of the CNN-based methods has improved fast, becoming competitive on the Holidays and Ukbench datasets, and remaining slightly lower on Oxford5k but with a much smaller memory cost.

#### Accuracy Comparisons 准确率比较

The retrieval accuracy of different categories on different datasets can be viewed in Fig. 6, Table V and Table VI. From these results, we arrive at three observations.

First, among the SIFT-based methods, those with medium-sized codebooks [13, 31], [19] usually lead to superior (or competitive) performance, while those based on small codebook (compact representations) [15, 18, 56] exhibit inferior accuracy. On the one hand, the visual words in the medium-sized codebooks lead to relatively high matching recall due to the large Voronoi cells. The further integration of HE methods largely improves the discriminative ability, achieving a desirable trade-off between matching recall and precision. On the other hand, although the visual words in small codebooks have the highest matching recall, their discriminative ability is not significantly improved due to the aggregation procedure and the small dimensionality. So its performance can be compromised.

Second, among the CNN-based categories, the fine-tuned category [8, 17, 24] is advantageous in specific tasks (such as landmark/scene retrieval) whose data distribution is similar to that of the training set. While this observation is within expectation, we find it interesting that the fine-tuned model proposed in [17] yields very competitive performance on generic retrieval (such as Ukbench), whose data distribution is distinct from that of the training set. In fact, Babenko et al. [8] show that CNN features fine-tuned on Landmarks compromise the accuracy on Ukbench. The generalization ability of [17] could be attributed to the effective training of the region proposal network. In comparison, pre-trained models may exhibit high accuracy on Ukbench but only yield moderate performance on landmarks. Similarly, the hybrid methods have fair performance on all the tasks, though they may still encounter efficiency problems [152, 7].

Third, comparing all the six categories, the “CNN fine-tuned” and “SIFT mid voc.” categories have the best overall accuracy, while the “SIFT small voc.” category has a relatively low accuracy.

Fig. 8: Memory cost vs. retrieval accuracy on Oxford5k. In the legend, the first 8 methods are based on large codebooks, while the last 7 use medium-sized codebooks. This figure shares the same legend with Fig. 7(c) except the newly added numbers (in black).

#### Efficiency Comparisons

Feature computation time. For the SIFT-based methods, the dominating step is local feature extraction. Usually, it takes 1-2 s on a CPU to extract the Hessian-Affine region based SIFT descriptors for a 640×480 image, depending on the complexity (texture) of the image. For the CNN-based methods, a single forward pass of a 224×224 or 1024×768 image through VGG16 on a TitanX card takes 0.082 s or 0.347 s, respectively. It is reported in [17] that four images (with largest side of 724 pixels) can be processed in one second. The encoding (VLAD or FV) of the pre-trained column features is very fast. For the CNN hybrid methods, extracting CNN features from tens of regions may take seconds. Overall, the CNN pre-trained and fine-tuned models are efficient in feature computation using GPUs. Yet it should be noted that when using GPUs for SIFT extraction, high efficiency could also be achieved.

Retrieval time. The efficiency of nearest neighbor search is high for “SIFT large voc.”, “SIFT small voc.”, “CNN pre-trained” and “CNN fine-tuned”, because the inverted lists are short for a properly trained large codebook, and because the latter three have a compact representation to be accelerated by ANN search methods like PQ [61]. Efficiency for the medium-sized codebook is low because the inverted list contains more postings compared to a large codebook, and the filtering effect of HE methods can only correct this problem to some extent. The retrieval complexity for hybrid methods, as mentioned in Section 4.3, may suffer from the expensive many-to-many matching strategy [7, 133, 152].

Training time. Training a large or medium-sized codebook usually takes several hours with AKM or HKM. Using small codebooks reduces the codebook training time. For the fine-tuned models, Gordo et al. [17] report spending five days on a K40 GPU for the triplet-loss model. The siamese [24] or classification models [8] may take less time, but still much longer than SIFT codebook generation. Therefore, in terms of training, the methods using direct pooling [10, 134] or small codebooks [15], [9] are more time-efficient.

Memory cost. Table V and Fig. 8 show that the SIFT methods with large codebooks and the compact representations are both efficient in memory cost. Moreover, the compact representations can be compressed into compact codes [53] using PQ or other competing quantization/hashing methods, so their memory consumption can be further reduced. In comparison, the methods using medium-sized codebooks are the most memory-consuming, because binary signatures must be stored in the inverted index. The hybrid methods have mixed memory costs: the many-to-many strategy requires storing a number of region descriptors per image [7, 152], while some others employ efficient encoding methods [22, 147].

Spatial verification and query expansion. Spatial verification, which provides refined rank lists, is often used in conjunction with QE. The RANSAC verification proposed in [12] has a complexity of $\mathcal{O}(z^2)$, where $z$ is the number of matched features, so it is computationally expensive. The ADV approach is less expensive, with $\mathcal{O}(z\log z)$ complexity, due to its ability to avoid unrelated Hough votes. The most efficient methods have a complexity of $\mathcal{O}(z)$ and further output the transformation and inliers for QE.

Fig. 9: The impact of codebook size on SIFT-based methods using (a) large codebooks and (b) medium-sized codebooks on the Oxford5k dataset.

From the perspective of query expansion, since new queries are issued, search efficiency is compromised. For example, AQE [100] almost doubles the search time due to the new query. For the recursive AQE and the scale-band recursive QE [100], the search time is much longer because several new searches are conducted. For other QE variants [101], [33], the proposed improvements only add marginal cost compared to performing another search, so their complexity is similar to basic QE methods.

#### Important Parameters 重要的参数

We summarize the impact of codebook size on SIFT methods using large/medium-sized codebooks, and the impact of dimensionality on compact representations including SIFT small codebooks and CNN-based methods.

Codebook size. The mAP results on Oxford5k are drawn in Fig. 9, and methods using large/medium-sized codebooks are compared. Two observations can be made. First, mAP usually increases with the codebook size but may reach saturation when the codebook is large enough. This is because a larger codebook improves matching precision, but if it is too large, matching recall drops, leading to saturated or even compromised performance [12]. Second, methods using medium-sized codebooks have more stable performance as the codebook size changes. This can be attributed to HE [13], which contributes more for a smaller codebook, compensating for the lower baseline performance.

Dimensionality. The impact of dimensionality on compact vectors is presented in Fig. 7. Our first finding is that retrieval accuracy usually remains stable at higher dimensions and drops quickly when the dimensionality falls below 256 or 128. Our second finding favors the methods based on region proposals [147], [17]. These methods demonstrate very competitive performance under various feature lengths, probably due to their superior ability in object localization.

#### Discussions 讨论

We provide a brief discussion on when to use CNN over SIFT and the other way around. The above discussions provide comparisons between the two features. On the one hand, CNN-based methods with fixed-length representations have advantages on nearly all the benchmarking datasets. Specifically, CNN-based methods can be given higher priority in two cases. First, for specific object retrieval (e.g., buildings, pedestrians), when sufficient training data is provided, the ability of CNN embedding learning can be fully utilized. Second, for common object retrieval or class retrieval, the pre-trained CNN models are competitive.

On the other hand, despite the usual advantages of CNN-based methods, we envision that the SIFT feature still has merits in some cases. For example, when the query or some target images are gray-scale, CNN may be less effective than SIFT, because SIFT is computed on gray-scale images without resorting to color information. A similar situation arises when the object undergoes intense color changes. In another example, for small object retrieval, or when the queried object undergoes severe occlusions, local features like SIFT are favored. In applications like book/CD cover retrieval, we can also expect good performance from SIFT due to the rich textures.

## Future Research Directions 未来的研究方向

### Towards Generic Instance Retrieval

A critical direction is to make the search engine applicable to generic search purposes. Towards this goal, two important issues should be addressed. First, large-scale instance-level datasets should be introduced. While several instance datasets have been released (see the table of fine-tuning datasets earlier in this survey), these datasets usually contain a particular type of instance, such as landmarks or indoor objects. Although the RPN structure used by Gordo et al. [17] has proven competitive on Ukbench in addition to the building datasets, it remains unknown whether training CNNs on more generic datasets will bring further improvement. Therefore, the community is in great need of large-scale instance-level datasets, or of efficient methods for generating such datasets in either a supervised or unsupervised manner. Second, designing new CNN architectures and learning methods is important for fully exploiting the training data. Previous works employ standard classification, pairwise-loss, or triplet-loss CNN models for fine-tuning. The introduction of Faster R-CNN to instance retrieval is a promising starting point towards more accurate object localization. Moreover, transfer learning methods are also important when adopting a fine-tuned model in another retrieval task.

### Towards Specialized Instance Retrieval

At the other end, there is also increasing interest in specialized instance retrieval. Examples include place retrieval, pedestrian retrieval, vehicle retrieval, logo retrieval, etc. Images in these tasks carry specific prior knowledge that can be exploited. For example, in pedestrian retrieval, a recurrent neural network (RNN) can be employed to pool the body part or patch descriptors. In vehicle retrieval, the view information can be inferred during feature learning, and the license plate can also provide critical information when captured within a short distance. Meanwhile, the process of training data collection can be further explored. For example, training images of different places can be collected via Google Street View, and vehicle images can be collected from either surveillance videos or internet images. Exploring new learning strategies on these specialized datasets and studying the transfer effect would be interesting. Finally, compact vectors or short codes will also become important in realistic retrieval settings.

## Concluding Remarks 总结

This survey reviews instance retrieval approaches based on SIFT and CNN features. According to the codebook size, we classify the SIFT-based methods into three classes: using large, medium-sized, and small codebooks. According to the feature extraction process, the CNN-based methods are also categorized into three classes: using pre-trained models, using fine-tuned models, and hybrid methods. A comprehensive survey of previous approaches is conducted under each of the defined categories. The category evolution suggests that the hybrid methods occupy a transition position between the SIFT- and CNN-based methods, that compact representations are gaining popularity, and that instance retrieval is moving towards end-to-end feature learning and extraction. Through the experimental results collected on several benchmark datasets, comparisons are made between the six method categories. Our findings favor the CNN fine-tuning strategy, which yields competitive accuracy on various retrieval tasks and has advantages in efficiency. Future research may focus on learning more generic feature representations or on more specialized retrieval tasks.

The authors would like to thank the pioneering researchers in image retrieval and other related fields. This work was partially supported by the Google Faculty Award and the Data to Decisions Cooperative Research Centre. This work was supported in part to Dr. Qi Tian by ARO grant W911NF-15-1-0290, Faculty Research Gift Awards by NEC Laboratories of America and Blippar, and National Science Foundation of China (NSFC) 61429201.