SIFT Meets CNN: A Decade Survey of Instance Retrieval
ABSTRACT
In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single pass of the image through the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT-based and CNN-based methods. After analyzing and comparing the retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.
Keywords: Instance retrieval, SIFT, convolutional neural network, literature survey.
Introduction
Content-based image retrieval (CBIR) has been a long-standing research topic in the computer vision community. The study of CBIR truly started in the early 1990s: images were indexed by visual cues, such as texture and color, and a myriad of algorithms and image retrieval systems were proposed. A straightforward strategy is to extract global descriptors, an idea that dominated the image retrieval community in the 1990s and early 2000s. Yet, a well-known problem is that global signatures may fail to provide the expected invariance to image changes such as illumination, translation, occlusion and truncation. These variations compromise the retrieval accuracy and limit the application scope of global descriptors. This problem has given rise to local feature based image retrieval.
The focus of this survey is instance-level image retrieval. In this task, given a query image depicting a particular object/scene/architecture, the aim is to retrieve images containing the same object/scene/architecture, which may be captured under different views, illumination, or with occlusions. Instance retrieval departs from class retrieval [1] in that the latter aims at retrieving images of the same class as the query. In the following, if not specified, we use “image retrieval” and “instance retrieval” interchangeably.
The milestones of instance retrieval in the past years are presented in Fig. 1, in which the eras of the SIFT-based and CNN-based methods are highlighted. The majority of traditional methods can be considered to end in 2000, when Smeulders et al. [2] presented a comprehensive survey of CBIR “at the end of the early years”. Three years later (2003) the Bag-of-Words (BoW) model was introduced to the image retrieval community [3], and in 2004 it was applied to image classification [4], both relying on the SIFT descriptor [5]. The retrieval community has since witnessed the prominence of the BoW model for over a decade, during which many improvements were proposed. In 2012, Krizhevsky et al. [6] achieved the state-of-the-art recognition accuracy with AlexNet in ILSVRC 2012, exceeding previous best results by a large margin. Since then, research focus has begun to shift to deep learning based methods [7, 8, 9, 10], especially the convolutional neural network (CNN).
Fig. 1: Milestones of instance retrieval. After a survey of methods before the year 2000 by Smeulders et al. [2], Sivic and Zisserman [3] proposed Video Google in 2003, marking the beginning of the BoW model. Then, the hierarchical k-means and approximate k-means were proposed by Stewénius and Nistér [11] and Philbin et al. [12], respectively, marking the use of large codebooks in retrieval. In 2008, Jégou et al. [13] proposed Hamming Embedding, a milestone in using medium-sized codebooks. Then, compact visual representations for retrieval were proposed by Perronnin et al. [14] and Jégou et al. [15] in 2010. Although SIFT-based methods were still moving forward, CNN-based methods began to gradually take over, following the pioneering work of Krizhevsky et al. [6]. In 2014, Razavian et al. [7] proposed a hybrid method extracting multiple CNN features from an image. Babenko et al. [8] were the first to fine-tune a CNN model for generic instance retrieval. Both [9, 10] employ the column features from pre-trained CNN models, and [10] inspires later state-of-the-art methods. These milestones are the representative works of the categorization scheme in this survey.
The SIFT-based methods mostly rely on the BoW model. BoW was originally proposed for modeling documents, because text is naturally parsed into words. It builds a word histogram for a document by accumulating word responses into a global vector. In the image domain, the introduction of the scale-invariant feature transform (SIFT) makes the BoW model feasible. Originally, SIFT comprises a detector and a descriptor, but the two are now often used in isolation; in this survey, if not specified, SIFT usually refers to the 128-dim descriptor, a common practice in the community. With a pre-trained codebook (vocabulary), local features are quantized to visual words. An image can thus be represented in a form similar to a document, and classic weighting and indexing schemes can be leveraged.
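The quantization-and-histogram step of the BoW model is easy to sketch. Below is a minimal Python example, assuming numpy and scikit-learn are available; the random 128-dim vectors stand in for SIFT descriptors. It builds a small codebook with k-means, hard-quantizes the local descriptors of one image, and accumulates them into a visual-word histogram.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Stand-ins for SIFT descriptors (128-dim); in practice these come from a detector.
train_descs = np.random.rand(10000, 128).astype(np.float32)
image_descs = np.random.rand(500, 128).astype(np.float32)

K = 1000  # codebook size (number of visual words)
codebook = MiniBatchKMeans(n_clusters=K, batch_size=1000, n_init=3).fit(train_descs)

# Hard quantization: each local descriptor is assigned to its nearest visual word.
words = codebook.predict(image_descs)

# BoW representation: a K-dim histogram of visual-word occurrences (term frequencies).
bow = np.bincount(words, minlength=K).astype(np.float32)
bow /= np.linalg.norm(bow) + 1e-12  # l2-normalize before comparing images
```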
In recent years, the popularity of SIFT-based models seems to have been overtaken by the convolutional neural network (CNN), a hierarchical structure that has been shown to outperform hand-crafted features in many vision tasks. In retrieval, competitive performance compared to the BoW models has been reported, even with short CNN vectors. The CNN-based retrieval models usually compute compact representations and employ the Euclidean distance or approximate nearest neighbor (ANN) search methods for retrieval. Current literature may directly employ pre-trained CNN models or perform fine-tuning for specific retrieval tasks. A majority of these methods feed the image into the network only once to obtain the descriptor. Some are based on patches which are passed to the network multiple times, in a manner similar to SIFT; we classify them as hybrid methods in this survey.
Organization of This Paper
At this time of change, this paper provides a comprehensive literature survey of both the SIFT-based and CNN-based instance retrieval methods. We first present the categorization methodology in Section 2. We then describe the two major method types in Section 3 and Section 4, respectively. Section 5 summarizes the comparisons between SIFT-based and CNN-based methods on several benchmark datasets. In Section 6, we point out two possible future directions. This survey is concluded in Section 7.
Categorization Methodology
TABLE I: Major differences between various types of instance retrieval models. For SIFT-based methods, hand-crafted local invariant features are extracted, and according to the codebook sizes, different encoding and indexing strategies are leveraged. For CNN-based methods, pre-trained, fine-tuned CNN models and hybrid methods are the primary types; fixed-length compact vectors are usually produced, combined with approximate nearest neighbor (ANN) methods.
According to the different visual representations, this survey categorizes the retrieval literature into two broad types: SIFT-based and CNN-based. The SIFT-based methods are further organized into three classes: using large, medium-sized or small codebooks. We note that the codebook size is closely related to the choice of encoding methods. The CNN-based methods are categorized into using pre-trained or fine-tuned CNN models, as well as hybrid methods. Their similarities and differences are summarized in Table I.
The SIFT-based methods were predominantly studied before 2012 [6] (good works have also appeared in recent years [18, 19]). This line of methods usually uses one type of detector, e.g., Hessian-Affine, and one type of descriptor, e.g., SIFT. Encoding maps a local feature into a vector. Based on the size of the codebook used during encoding, we classify SIFT-based methods into three categories as below.
- Using small codebooks. The visual words are fewer than several thousand. Compact vectors are generated [15, 14] before dimension reduction and coding.
- Using medium-sized codebooks. Given the sparsity of BoW and the low discriminative ability of visual words, the inverted index and binary signatures are used [13]. The trade-off between accuracy and efficiency is a major influencing factor [20].
- Using large codebooks. Given the sparse BoW histograms and the high discriminative ability of visual words, the inverted index and memory-friendly signatures are used [21]. Approximate methods are used in codebook generation and encoding [11, 12].
The CNN-based methods extract features using CNN models. Compact (fixed-length) representations are usually built. There are three classes:
- Hybrid methods. Image patches are fed into the CNN multiple times for feature extraction [7]. Encoding and indexing are similar to SIFT-based methods [22].
- Using pre-trained CNN models. Features are extracted in a single pass using a CNN pre-trained on a large-scale dataset such as ImageNet [23]. Compact encoding/pooling techniques are used [9, 10] (see the sketch after this list).
- Using fine-tuned CNN models. The CNN model (e.g., pre-trained on ImageNet) is fine-tuned on a training set in which the images share similar distributions with the target database [8]. CNN features can be extracted in an end-to-end manner through a single pass to the CNN model. The visual representations exhibit improved discriminative ability [24, 17].
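To make the pre-trained case concrete, the following is a minimal sketch, assuming a recent PyTorch/torchvision installation; the ResNet-50 backbone and the global average pooling over the last convolutional feature map are illustrative choices, not the specific models or pooling schemes of the cited works.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained backbone; keep everything up to the last conv block and pool over
# the spatial locations of the feature map ("column features").
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
extractor = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def describe(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fmap = extractor(img)              # 1 x 2048 x 7 x 7 feature map
    vec = fmap.mean(dim=(2, 3)).squeeze(0) # average pooling over locations
    return vec / vec.norm()                # l2-normalized global descriptor

# Retrieval then reduces to (approximate) nearest-neighbor search over such
# vectors, e.g. ranking the gallery by Euclidean or cosine distance to the query.
```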
SIFT-based Image Retrieval
Pipeline
The pipeline of SIFT-based retrieval is introduced in Fig. 2. Suppose we have a gallery $ \mathcal{G} $ consisting of $ N $ images. Given a feature detector, we extract local descriptors from the regions around sparse interest points or dense patches. We denote the local descriptors of $ D $ detected regions in an image as $ \{f_i\}_{i=1}^D, f_i\in\mathbb{R}^p $. SIFT-based methods train a codebook offline. Each visual word in the codebook lies in the center of a subspace, called the Voronoi cell. A larger codebook corresponds to a finer partitioning, resulting in more discriminative visual words, and vice versa. Suppose that a pool of local descriptors $ \mathcal{F}\equiv\{f_i\}_{i=1}^M $ is computed from an unlabeled training set. The baseline approach, i.e., k-means, partitions the $ M $ points into $ K $ clusters; the $ K $ visual words thus constitute a codebook of size $ K $. A local descriptor $ f_i \in\mathbb{R}^p $ is mapped into a feature embedding $ g_i \in\mathbb{R}^l $ through the feature encoding process, $ f_i \rightarrow g_i $. When k-means clustering is used, $ f_i $ can be encoded according to its distances to the visual words. For large codebooks, hard and soft quantization are good choices. In the former, the resulting embedding $ g_i $ has only one non-zero entry; in the latter, $ f_i $ can be quantized to a small number of visual words. A global signature is produced after sum-pooling all the embeddings of the local features. For medium-sized codebooks, additional binary signatures can be generated to preserve the original information. When using small codebooks, popular encoding schemes include the vector of locally aggregated descriptors (VLAD), the Fisher vector (FV), etc.
Fig. 2: A general pipeline of SIFT- and CNN-based retrieval models. Features are computed from hand-crafted detectors for SIFT, and densely applied filters or image patches for CNN. In both methods, under small codebooks, encoding/pooling is employed to produce compact vectors. In SIFT-based methods, the inverted index is necessary under large/medium-sized codebooks. The CNN features can also be computed in an end-to-end way using fine-tuned CNN models.
Local Feature Extraction
Local invariant features aim at accurate matching of local structures between images [26]. SIFT-based methods usually share a similar feature extraction step composed of a feature detector and a descriptor.
Local detector. The interest point detectors aim to reliably localize a set of stable local regions under various imaging conditions. In the retrieval community, finding affine-covariant regions has been preferred. It is called “covariant” because the shapes of the detected regions change with the affine transformations, so that the region content (descriptors) can be invariant. This kind of detector is different from keypoint-centric detectors such as the Hessian detector [27], and from those focusing on scale-invariant regions such as the difference of Gaussians (DoG) [5] detector. Elliptical regions which are adapted to the local intensity patterns are produced by affine detectors. This ensures that the same local structure is covered under deformations caused by viewpoint changes, a problem often encountered in instance retrieval. In the milestone work [3], the Maximally Stable Extremal Region (MSER) detector [28] and the affine extended Harris-Laplace detector are employed, both of which are affine-invariant region detectors. MSER is used in several later works [29, 11]. Starting from [12], the Hessian-affine detector [30] has been widely adopted in retrieval. It has been shown to be superior to the difference of Gaussians (DoG) detector [13, 31], due to its advantage in reliably detecting local structures under large viewpoint changes. To fix the orientation ambiguity of these affine-covariant regions, the gravity assumption is made [32]. The practice of dismissing the orientation estimation is employed by later works [33, 34] and demonstrates consistent improvement on architecture datasets where the objects are usually upright. Other non-affine detectors have also been tested in retrieval, such as the Laplacian of Gaussian (LOG) and Harris detectors used in [35]. For objects with smooth surfaces [36], few interest points can be detected, so the object boundaries are good candidates for local description.
On the other hand, some employ the dense region detectors. In the comparison between densely sampled image patches and the detected patches, Sicre et al. [37] report the superiority of the former. To recover the rotation invariance of dense sampling, the dominant angle of patches is estimated in [38]. A comprehensive comparison of various dense sampling strategies, the interest point detectors, and those in between can be accessed in [39].
Local Descriptor. With a set of detected regions, descriptors encode the local content. SIFT [5] has been used as the default descriptor. The 128-dim vector has been shown to outperform competing descriptors in matching accuracy [40]. In an extension, PCA-SIFT [41] reduces the dimension from 128 to 36 to speed up the matching process at the cost of more time in feature computation and loss of distinctiveness. Another improvement is RootSIFT [33], calculated by two steps: 1) ℓ1 normalize the SIFT descriptor, 2) square root each element. RootSIFT is now used as a routine in SIFT-based retrieval. Apart from SIFT, SURF [42] is also widely used. It combines the Hessian-Laplace detector and a local descriptor of the local gradient histograms. The integral image is used for acceleration. SURF has a comparable matching accuracy with SIFT and is faster to compute. See [43] for comparisons between SIFT, PCA-SIFT, and SURF. To further accelerate the matching speed, binary descriptors [44] replace Euclidean distance with Hamming distance during matching.
Apart from hand-crafted descriptors, some also propose learning schemes to improve the discriminative ability of local descriptors. For example, Philbin et al. [45] propose a non-linear transformation so that the projected SIFT descriptor yields smaller distances for true matches. Simonyan et al. [34] improve this process by learning both the pooling region and a linear descriptor projection.
Retrieval Using Small Codebooks
A small codebook has several thousand, several hundred or fewer visual words, so the computational complexity of codebook generation and encoding is moderate. Representative works include BoW, VLAD and FV. We mainly discuss VLAD and FV, and refer readers to the dedicated evaluation of BoW compact vectors (“Multiple measurements and joint dimensionality reduction for large scale image search with short vectors”) for a comprehensive comparison.
Codebook Generation
Clustering complexity depends heavily on the codebook size. In works based on VLAD or FV, the codebook sizes are typically small, e.g., 64, 128, 256. For VLAD, flat k-means is employed for codebook generation. For FV, the Gaussian mixture model (GMM), i.e., $ u_{\lambda}(x) = \sum_{i=1}^K w_i u_i(x) $, where $ K $ is the number of Gaussian mixtures, is trained using maximum likelihood estimation. The GMM describes the feature space with a mixture of $ K $ Gaussian distributions, and can be denoted as $ \lambda = \{w_i, \mu_i, \Sigma_i, i = 1,...,K\} $, where $ w_i $, $ \mu_i $ and $ \Sigma_i $ represent the mixture weight, the mean vector and the covariance matrix of Gaussian $ u_i $, respectively.
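As a concrete illustration of this codebook generation, the snippet below fits a GMM with scikit-learn; the 64-dim random vectors stand in for (often PCA-reduced) SIFT descriptors, and the diagonal covariance is a common practical simplification rather than a requirement of the formulation above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-ins for PCA-reduced local descriptors from an unlabeled training set.
train_descs = np.random.rand(20000, 64).astype(np.float32)

K = 256  # number of Gaussians, i.e. a small codebook
gmm = GaussianMixture(n_components=K, covariance_type="diag", max_iter=100)
gmm.fit(train_descs)

# The FV codebook is the parameter set lambda = {w_i, mu_i, Sigma_i}.
weights, means, covariances = gmm.weights_, gmm.means_, gmm.covariances_
```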
Encoding
Due to the small codebook size, relatively complex and information-preserving encoding techniques can be applied. We mainly describe FV, VLAD and their improvements in this section. With a pre-trained GMM model, FV describes the averaged first- and second-order differences between the local features and the GMM centers. Its dimension is $ 2pK $, where $ p $ is the dimension of the local descriptors and $ K $ is the codebook size of the GMM. FV usually undergoes power normalization to suppress the burstiness problem (to be described in the section on TF-IDF feature weighting). In this step, each component of FV undergoes a non-linear transformation featured by parameter $ \alpha $, $ x_i := \mathrm{sign}(x_i)|x_i|^\alpha $. Then $ \ell_2 $ normalization is employed. Later, FV was improved from different aspects. For example, Koniusz et al. augment each descriptor with its spatial coordinates and associated tunable weights. In subsequent work, larger codebooks (up to 4,096) are generated and demonstrate superior classification accuracy to smaller codebooks, at the cost of computational efficiency. To correct the assumption that local regions are identically and independently distributed (iid), Cinbis et al. propose non-iid models that discount the burstiness effect and yield improvement over power normalization. The VLAD encoding scheme proposed by Jégou et al. can be thought of as a simplified version of FV. It quantizes a local feature to its nearest visual word in the codebook and records the difference between them. Nearest neighbor search is feasible because of the small codebook size. The residual vectors are then aggregated by sum pooling followed by normalizations. The dimension of VLAD is $ pK $. Comparisons of some important encoding techniques can be found in the literature. Again, the improvement of VLAD comes from multiple aspects. Jégou and Chum suggest the usage of PCA and whitening (denoted as PCA$_w$) to de-correlate visual word co-occurrences, and the training of multiple codebooks to reduce quantization loss. Arandjelović et al. extend VLAD in three aspects: 1) normalizing the residual sum within each coarse cluster, called intra-normalization, 2) vocabulary adaptation to address the dataset transfer problem, and 3) multi-VLAD for small object discovery.
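The following is a minimal numpy/scikit-learn sketch of VLAD under the assumptions above: a flat k-means codebook, hard assignment, sum-pooled residuals, power normalization with α fixed to 0.5, then ℓ2 normalization. The random data is a stand-in for SIFT descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans

def vlad(descs, codebook):
    """Aggregate local descriptors (D x p) into a K*p-dim VLAD vector."""
    centers = codebook.cluster_centers_
    K, p = centers.shape
    assign = codebook.predict(descs)          # nearest visual word per descriptor
    v = np.zeros((K, p), dtype=np.float32)
    for k in range(K):
        if np.any(assign == k):
            # Sum of residuals between descriptors and their assigned center.
            v[k] = (descs[assign == k] - centers[k]).sum(axis=0)
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))       # power (signed square-root) normalization
    return v / (np.linalg.norm(v) + 1e-12)    # l2 normalization

# Flat k-means with a small codebook, e.g. K = 64, as described in the text.
train_descs = np.random.rand(10000, 128).astype(np.float32)
codebook = KMeans(n_clusters=64, n_init=4).fit(train_descs)
image_vlad = vlad(np.random.rand(500, 128).astype(np.float32), codebook)
```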
Concurrently, Delhumeau et al. propose to normalize each residual vector instead of the residual sums; they also advocate local PCA within each Voronoi cell, which performs rotation rather than dimension reduction. A more recent work employs soft assignment and empirically learns optimal weights for each rank to improve over hard quantization. Note that some general techniques benefit various embedding methods, such as VLAD, FV, BoW, locality-constrained linear coding (LLC) and monomial embeddings. To improve the discriminative ability of embeddings, Tolias et al. propose the orientation covariant embedding to encode the dominant orientation of the SIFT regions jointly with the SIFT descriptor. It achieves a covariance property similar to weak geometric consistency (WGC) by using geometric cues within regions of interest, so that matching points with similar dominant orientations are up-weighted and vice versa. The triangulation embedding only considers the direction instead of the magnitude of the input vectors. Jégou et al. also present a democratic aggregation that limits the interference between the mapped vectors. Sharing a similar idea with democratic aggregation, Murray and Perronnin propose the generalized max pooling (GMP), optimized by equalizing the similarity between the pooled vector and each coding representation. The computational complexity of BoW, VLAD and FV is similar. We neglect the offline training and SIFT extraction steps. During visual word assignment, each feature should compute its distance (or soft assignment coefficient) with all the visual words (or Gaussians) for VLAD (or FV). So this step has a complexity of $ \mathcal{O}(pK) $. In the other steps, complexity does not exceed $ \mathcal{O}(pK) $. Considering the sum-pooling of the embeddings, the encoding process has an overall complexity of $ \mathcal{O}(pKD) $, where $ D $ is the number of features in an image. Triangulation embedding, a variant of VLAD, has a similar complexity. The complexity of multi-VLAD is $ \mathcal{O}(pKD) $, too, but it has a more costly matching process. Hierarchical VLAD has a complexity of $ \mathcal{O}(pKK'D) $, where $ K' $ is the size of the secondary codebook. In the aggregation stage, both GMP and democratic aggregation have high complexity. The complexity of GMP is $ \mathcal{O}(\frac{P^2}{K}) $, where $ P $ is the dimension of the feature embedding, while the computational cost of democratic aggregation comes from the Sinkhorn algorithm.
ANN Search
Due to the high dimensionality of the VLAD/FV embeddings, efficient compression and ANN search methods have been employed [61, 62]. For example, principal component analysis (PCA) is usually adopted for dimension reduction, and it has been shown that retrieval accuracy may even increase after PCA [53]. For hashing-based ANN methods, Perronnin et al. [47] use standard binary encoding techniques such as locality sensitive hashing [63] and spectral hashing [64]. Nevertheless, when tested on the SIFT and GIST feature datasets, spectral hashing is shown to be outperformed by Product Quantization (PQ) [61]. Among these quantization-based ANN methods, PQ is demonstrated to be better than other popular ANN methods such as FLANN [62] as well. A detailed discussion of VLAD and PQ can be found in [65]. PQ has since been improved in a number of works. In [66], Douze et al. propose to re-order the cluster centroids so that adjacent centroids have small Hamming distances. This method is compatible with Hamming distance based ANN search, which offers significant speedup for PQ. We refer readers to [67] for a survey of ANN approaches.
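The following is a schematic numpy/scikit-learn illustration of product quantization applied to such embeddings: each vector is split into m segments and each segment is quantized with its own small sub-codebook, so a vector is stored as m bytes. At search time, distances are approximated asymmetrically via per-segment lookup tables (omitted here); all sizes below are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_pq(vectors, m=8, ksub=256):
    """Train a product quantizer: one ksub-word sub-codebook per segment."""
    d = vectors.shape[1]
    dsub = d // m
    return [KMeans(n_clusters=ksub, n_init=2).fit(vectors[:, i*dsub:(i+1)*dsub])
            for i in range(m)]

def pq_encode(vectors, quantizers):
    """Encode each vector as m small integers (one codeword index per segment)."""
    dsub = vectors.shape[1] // len(quantizers)
    return np.stack([q.predict(vectors[:, i*dsub:(i+1)*dsub])
                     for i, q in enumerate(quantizers)], axis=1).astype(np.uint8)

# e.g. 128-dim VLAD/FV embeddings after PCA, compressed to m = 8 bytes each.
base = np.random.rand(5000, 128).astype(np.float32)
pq = train_pq(base)
codes = pq_encode(base, pq)   # 5000 x 8 uint8 codes
```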
We also mention an emerging ANN technique, i.e., group testing [68], [69], [70]. In a nutshell, the database is decomposed into groups, each represented by a group vector. Comparisons between the query and group vectors reveal how likely a group contains a true match. Since group vectors are much fewer than the database vectors, search time is reduced. Iscen et al. [69] propose to directly find the best group vectors summarizing the database without explicitly forming the groups, which reduces the memory consumption.
Fig. 3: Two milestone clustering methods (a) hierarchical k-means (HKM) [11] and (b) approximate k-means (AKM) [12] for large codebook generation. Bold borders and blue discs are the clustering boundaries and centers of the first layer of HKM. Slim borders and red squares are the final clustering results in both methods.
Retrieval Using Large Codebooks
A large codebook may contain 1 million visual words or more. Some major steps undergo important changes compared with using small codebooks.
Codebook Generation
Approximate methods are critical for assigning data into a large number of clusters. In the retrieval community, two representative works are hierarchical k-means (HKM) and approximate k-means (AKM), as illustrated in Fig. 1 and Fig. 3. Proposed in 2006, HKM applies standard k-means on the training features hierarchically. It first partitions the points into a few clusters (e.g., $ \bar{k}\ll K $) and then recursively partitions each cluster into further clusters. In every recursion, each point is assigned to one of the $ \bar{k} $ clusters, with the depth of the cluster tree being $ \mathcal{O}(\log K) $, where $ K $ is the target cluster number. The computational cost of HKM is therefore $ \mathcal{O}(\bar{k}M\log K) $, where $ M $ is the number of training samples. It is much smaller than the complexity of flat k-means, $ \mathcal{O}(MK) $, when $ K $ is large (a large codebook). The other milestone in large codebook generation is AKM. This method indexes the $ K $ cluster centers using a forest of random $ k $-d trees so that the assignment step can be performed efficiently with ANN search. In AKM, the cost of assignment can be written as $ \mathcal{O}(K\log K + vM\log K) = \mathcal{O}(vM\log K) $, where $ v $ is the number of nearest cluster candidates to be accessed in the $ k $-d trees. So the computational complexity of AKM is on par with HKM and is significantly smaller than flat k-means when $ K $ is large. Experiments show that AKM is superior to HKM due to its lower quantization error (see the section on feature encoding under large codebooks). In most AKM-based methods, the default choice for ANN search is FLANN.
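A toy illustration of the AKM idea, assuming scipy is available: the assignment step of Lloyd's algorithm is replaced with an approximate nearest-neighbor query over the current centers. The original work uses a forest of randomized k-d trees (FLANN); a single scipy k-d tree with a non-zero eps is used here purely for the sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def approximate_kmeans(points, K, iters=10):
    """Lloyd iterations whose assignment step uses a k-d tree over the current
    centers instead of exhaustive distance computation (AKM-style sketch)."""
    centers = points[np.random.choice(len(points), K, replace=False)].copy()
    for _ in range(iters):
        tree = cKDTree(centers)
        _, assign = tree.query(points, k=1, eps=1.0)   # approximate NN assignment
        for k in range(K):
            members = points[assign == k]
            if len(members) > 0:
                centers[k] = members.mean(axis=0)
    return centers

descs = np.random.rand(50000, 128).astype(np.float32)
codebook = approximate_kmeans(descs, K=2000)
```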
Feature Encoding (Quantization)
Feature encoding is interleaved with codebook clustering, because ANN search is critical in both components. The ANN techniques employed in some classic methods like AKM and HKM can be used in both the clustering and encoding steps. Under a large codebook, the key trade-off is between quantization error and computational complexity. In the encoding step, information-preserving encoding methods such as FV [14] and sparse coding [73] are mostly infeasible due to their computational complexity. It therefore remains a challenging problem how to reduce the quantization error while keeping the quantization process efficient.
For the ANN methods, the earliest solution is to quantize a local feature along the hierarchical tree structure [11]. Quantized tree nodes in different levels are assigned different weights. However, due to the highly imbalanced tree structure, this method is outperformed by the k-d tree based quantization method [12]: one visual word is assigned to each local feature, using a k-d tree built from the codebook for fast ANN search. In an improvement to this hard quantization scheme, Philbin et al. [25] propose soft quantization, which quantizes a feature into several nearest visual words. The weight of each assigned visual word relates negatively to its distance from the feature by $ \exp(-\frac{d^2}{2\sigma^2}) $, where $ d $ is the distance between the descriptor and the cluster center. While soft quantization is based on the Euclidean distance, Mikulik et al. propose to find relevant visual words for each visual word through an unsupervised set of matching features. Built on a probabilistic model, these alternative words tend to contain descriptors of matching features. To reduce the memory cost of soft quantization and the number of query visual words, Cai et al. suggest that when a local feature is far away from even the nearest visual word, this feature can be discarded without a performance drop. To further accelerate quantization, scalar quantization suggests that local features be quantized without an explicitly trained codebook. A floating-point vector is binarized, and the first dimensions of the resulting binary vector are directly converted to a decimal number as a visual word. In the case of large quantization error and low recall, scalar quantization uses bit-flipping to generate hundreds of visual words for a local feature.
Feature Weighting
TF-IDF. The visual words in codebook $ \mathcal{C} $ are typically assigned specific weights, called the term frequency and inverse document frequency (TF-IDF), which are integrated with the BoW encoding. TF is defined as:
$$ {TF}(c_i^j) = o_i^j, $$
where $ o_i^j $ is the number of occurrences of a visual word $ c_i $ within an image $ j $ . TF is thus a local weight. IDF, on the other hand, determines the contribution of a given visual word through global statistics. The classic IDF weight of visual word $ c_i $ is calculated as:
$$ {IDF}(c_i) = \log{\frac{N}{n_i}}, \quad \text{where } n_i = \sum_{j\in \mathcal{G}}\mathbb{1}(o_i^j > 0), $$
where $ N $ is the number of gallery images, and $ n_i $ encodes the number of images in which word $ c_i $ appears. The TF-IDF weight for visual word $ c_i $ in image $ j $ is:
$$ w(c_i^j) = {TF}(c_i^j){IDF}(c_i). $$
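In code, the weighting above amounts to a few lines; the numpy sketch below operates on dense BoW histograms, whereas in practice the weights are applied through the inverted index rather than an N x K matrix.

```python
import numpy as np

def tf_idf_weights(bow_histograms):
    """bow_histograms: N x K matrix with entry (j, i) = o_i^j, the count of
    visual word c_i in image j. Returns the TF-IDF weighted matrix."""
    N, K = bow_histograms.shape
    tf = bow_histograms.astype(np.float32)          # TF(c_i^j) = o_i^j
    n = (bow_histograms > 0).sum(axis=0)            # n_i: images containing word c_i
    idf = np.log(N / np.maximum(n, 1))              # IDF(c_i) = log(N / n_i)
    return tf * idf[None, :]                        # w(c_i^j) = TF(c_i^j) * IDF(c_i)
```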
A major problem associated with visual word weighting is burstiness. It refers to the phenomenon whereby repetitive structures appear in an image. This problem tends to dominate image similarity. Jégou et al. propose several TF variants to deal with burstiness. An effective strategy consists in exerting a square operation on TF. Instead of grouping features with the same word index, Revaud et al. propose detecting keypoint groups frequently happening in irrelevant images, which are down-weighted in the scoring function. While the above two methods detect bursty groups after quantization, Shi et al. propose detecting them in the descriptor stage. The detected bursty descriptors undergo average pooling and are fed into the BoW architectures. From the aspect of IDF, Zheng et al. propose the $ \mathcal{L}_p $-norm IDF to tackle burstiness, and Murata et al. design the exponential IDF, which is later incorporated into the BM25 formula. While most works try to suppress burstiness, Torii et al. view it as a distinguishing feature for architectures and design a new similarity measurement following burstiness detection. Another feature weighting strategy is feature augmentation on the database side. Such methods construct an image graph offline, with edges indicating whether two images share the same object. In one approach, only features that pass the geometric verification are preserved, which reduces the memory cost; the feature of the base image is then augmented with all the visual words of its connecting images. This method is improved by only adding those visual words which are estimated to be visible in the augmented image, so that noisy visual words can be excluded.
The Inverted Index
The inverted index is designed to enable efficient storage and retrieval and is usually used under large/medium-sized codebooks. Its structure is illustrated in Fig. 4. The inverted index is a one-dimensional structure where each entry corresponds to a visual word in the codebook. An inverted list is attached to each word entry, and the items indexed in each inverted list are called indexed features or postings. The inverted index takes advantage of the sparse nature of the visual word histogram under a large codebook.
In the literature, new retrieval methods are usually required to be compatible with the inverted index. In the baseline, the image ID and term frequency (TF) are stored in a posting. When other information is integrated, it should be small in size. For example, quantized metadata such as the descriptor contextual weight, descriptor density, mean relative log scale and mean orientation difference can be stored in each posting. Similarly, quantized spatial information such as the orientation can also be stored. In co-indexing, when the inverted index is enlarged with globally consistent neighbors, semantically isolated images are deleted to reduce memory consumption. The original one-dimensional inverted index has also been expanded to a two-dimensional structure for ANN search, which learns a codebook for each SIFT sub-vector; this was later applied to instance retrieval to fuse local color and SIFT descriptors.
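A minimal Python sketch of such a structure: one posting list per visual word, each posting holding the image ID plus optional per-feature metadata (e.g., a binary signature). The function and field names are illustrative.

```python
from collections import defaultdict

# One inverted list per visual word. Each posting stores the image ID plus any
# small per-feature metadata (e.g. a Hamming Embedding signature); the term
# frequency of a word in an image is the number of its postings for that image.
inverted_index = defaultdict(list)

def index_image(image_id, quantized_features):
    """quantized_features: iterable of (visual_word, binary_signature) pairs."""
    for word, signature in quantized_features:
        inverted_index[word].append((image_id, signature))

def candidate_images(query_words):
    """Only gallery images that share at least one visual word with the query
    are touched, which is why sparse BoW vectors make retrieval efficient."""
    return {img for w in query_words for (img, _) in inverted_index[w]}
```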
Retrieval Using Medium-sized Codebooks
Medium-sized codebooks refer to those having 10k-200k visual words. The visual words exhibit medium discriminative ability, and the inverted index is usually constructed.
Codebook Generation and Quantization
Considering the relatively small computational cost compared with large codebooks, flat k-means can be adopted for codebook generation. It has also been shown that using AKM for clustering yields very competitive retrieval accuracy. For quantization, nearest neighbor search can be used to find the nearest visual words in the codebook. In practice, using some strict ANN algorithms produces competitive retrieval results. So, compared with the extensive study on quantization under large codebooks, relatively few works focus on the quantization problem under a medium-sized codebook.
Hamming Embedding and its improvements
The discriminative ability of visual words in medium-sized codebooks lies between that of small and large codebooks. So it is important to compensate for the information loss during quantization. To this end, a milestone work, i.e., Hamming Embedding (HE), has been dominantly employed. Proposed by Jégou et al., HE greatly improves the discriminative ability of visual words under medium-sized codebooks. HE first maps a SIFT descriptor $ f\in\mathbb{R}^p $ from the $ p $-dimensional space to a $ p_b $-dimensional space:
$$ x = P\cdot f = (x_1,\ldots,x_{p_b}), $$
where $ P\in\mathbb{R}^{p_b\times p} $ is a projection matrix, and $ x $ is a low-dimensional vector. By creating a matrix of random Gaussian values and applying a QR factorization to it, matrix $ P $ is taken as the first $ p_b $ rows of the resulting orthogonal matrix. To binarize $ x $, Jégou et al. propose to compute the median vector $ \overline{x_i} = (\overline{x_{1,i}},\ldots,\overline{x_{p_b, i}}) $ of the low-dimensional vectors of the descriptors falling in each Voronoi cell $ c_i $. Given a descriptor $ f $ and its projected vector $ x $, HE computes its visual word $ c_t $, and the HE binary vector is computed as:
$$ b_j(x) = \begin{cases} 1, & \text{if } x_j > \overline{x_{j, t}}, \\ 0, & \text{otherwise,} \end{cases} $$
where $ b(x) = (b_1(x),\ldots,b_{p_b}(x)) $ is the resulting HE vector of dimension $ p_b $. The binary feature $ b(x) $ serves as a secondary check for feature matching. A pair of local features is a true match when two criteria are satisfied: 1) identical visual words and 2) a small Hamming distance between their HE signatures. An extension of HE estimates the matching strength between features $ f_1 $ and $ f_2 $ with an exponential function that decreases with the Hamming distance:
$$ w_{HE}(f_1, f_2) = \exp(-\frac{\mathcal{H}(b(x_1), b(x_2))}{2\gamma^2}), $$
where $ b(x_1) $ and $ b(x_2) $ are the HE binary vectors of $ f_1 $ and $ f_2 $, respectively, $ \mathcal{H}(\cdot,\cdot) $ computes the Hamming distance between two binary vectors, and $ \gamma $ is a weighting parameter. As shown in Fig. 6, HE and its weighted version improved accuracy considerably in 2008 and 2010.
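The steps above can be sketched as follows in numpy; the signature length, γ and the Hamming threshold are illustrative values, and every Voronoi cell is assumed to receive at least one training descriptor.

```python
import numpy as np

rng = np.random.default_rng(0)
p, p_b = 128, 64                              # SIFT dimension and HE signature length
Q, _ = np.linalg.qr(rng.normal(size=(p, p)))  # QR of a random Gaussian matrix
P = Q[:p_b]                                   # projection matrix: first p_b rows

def learn_medians(train_descs, assignments, K):
    """Offline: per-cell median of each projected dimension."""
    X = train_descs @ P.T
    return np.stack([np.median(X[assignments == k], axis=0) for k in range(K)])

def he_signature(f, word, medians):
    """Binarize the projected descriptor against the median vector of its cell."""
    x = P @ f
    return (x > medians[word]).astype(np.uint8)

def he_match_weight(b1, b2, gamma=16.0, max_hamming=24):
    """Accept a match only below a Hamming threshold and weight it with the
    exponential function of the text (threshold and gamma are illustrative)."""
    h = int(np.count_nonzero(b1 != b2))
    return np.exp(-h / (2 * gamma ** 2)) if h <= max_hamming else 0.0
```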
Applications of HE include video copy detection, image classification and re-ranking. For example, in image classification, patch matching similarity is efficiently estimated by HE, which is integrated into a linear kernel-based SVM. In image re-ranking, Tolias et al. use lower HE thresholds to find strict correspondences which resemble those found by RANSAC, and the resulting image subset is more likely to contain true positives for query reformulation. Improvements over HE have been proposed in a number of works, especially from the view of the match kernel. To reduce the information loss on the query side, Jain et al. propose a vector-to-binary distance comparison. It exploits the vector-to-hyperplane distance while retaining the efficiency of the inverted index. Further, Qin et al. design a higher-order match kernel within a probabilistic framework and adaptively normalize the local feature distances by the distance distribution of false matches. This method is similar in spirit to the contextual dissimilarity measure, in which the word-word distance, instead of the feature-feature distance, is normalized according to the neighborhood distribution of each visual word. While the latter regularizes the average distance between a word and its neighbors to be almost constant, the idea of democratizing the contribution of individual embeddings has later been employed in the triangulation embedding. Tolias et al. show that VLAD and HE share similar natures and propose a new match kernel which trades off between local feature aggregation and feature-to-feature matching, using a similar matching function. They also demonstrate that using more bits (e.g., 128) in HE is superior to the original 64-bit scheme at the cost of decreased efficiency. Even more bits (256) have been used, but this may be prone to relatively low recall.
Fig. 5: False match removal by (A) HE [13], (B) local-local feature fusion, and (C) local-global feature fusion.
Other Important Issues
Feature Fusion
Local-local fusion. A problem with the SIFT feature is that only the local gradient description is provided. Other discriminative information encoded in an image is still not leveraged. In Fig. 5 (B), a pair of false matches cannot be rejected by HE due to their similarity in the SIFT space, but the fusion of other local (or regional) features may correct this problem. A good choice for local-local fusion is to couple SIFT with color descriptors. The usage of color-SIFT descriptors can partially address the trade-off between invariance and discriminative ability. Descriptors such as HSV-SIFT [94], HueSIFT [95] and OpponentSIFT [93] have been evaluated on several recognition benchmarks [93]. Both HSV-SIFT and HueSIFT are scale-invariant and shift-invariant. OpponentSIFT describes all the channels in the opponent color space using the SIFT descriptor and is largely robust to light color changes. In [93], OpponentSIFT is recommended when no prior knowledge about the datasets is available. In more recent works, binary color signatures are stored in the inverted index [96], [31]. Despite the good retrieval accuracy on some datasets, the potential problem is that intensive variation in illumination may compromise the effectiveness of colors.
Local-global fusion. Local and global features describe images from different aspects and can be complementary. In Fig. 5 (C), when local (and regional) cues are not enough to reject a false match pair, it is effective to further incorporate visual information from a larger context scale. Early and late fusion are two possible ways. In early fusion, the image neighborhood relationship mined by global features such as FC8 in AlexNet [6] is fused in the SIFT-based inverted index [72]. In late fusion, Zhang et al. [97] build an offline graph for each type of feature, which is subsequently fused during the online query. In an improvement of [97], Deng et al. [98] add weakly supervised anchors to aid graph fusion. Both methods work on the rank level. For score-level fusion, automatically learned category-specific attributes are combined with pre-trained category-level information [99]. Zheng et al. [86] propose the query-adaptive late fusion by extracting a number of features (local or global, good or bad) and weighting them in a query-adaptive manner.
Geometric Matching
A frequent concern with the BoW model is the lack of geometric constraints among local features. Geometric verification can be used as a critical pre-processing step in various scenarios, such as query expansion [100, 101], feature selection [81], database-side feature augmentation [81], [33], large-scale object mining [102], etc. The most well-known method for global spatial verification is RANSAC [12]. It calculates affine transformations for each correspondence repeatedly, which are verified by the number of inliers that fit the transformation. RANSAC is effective in re-ranking a subset of top-ranked images but has efficiency problems. As a result, how to efficiently and accurately incorporate spatial cues in the SIFT-based framework has been extensively studied.
A good choice is to discover the spatial context among local features. For example, visual phrases [103, 104, 105, 106] are generated among individual visual words to provide more strict matching criterion. Visual word co-occurrences in the entire image are estimated [107] and aggregated [108], while in [109, 110], [29] visual word clusters within local neighborhoods are discovered. Visual phrases can also be constructed from adjacent image patches [103], random spatial partitioning [106], and localized stable regions [29] such as MSER [28].
Another strategy uses voting to check geometric consistency. In the voting space, a bin with a larger value is more likely to represent the true transformation. An important work is weak geometrical consistency (WGC) [13], which focuses on the difference in scale and orientation between matched features. The space of differences is quantized into bins. Hough voting is used to locate the subset of correspondences similar in scale or orientation differences. Many later works can be viewed as extensions of WGC. For example, the method of Zhang et al. [21] can be viewed as WGC using x, y offsets instead of scale and orientation. This method is invariant to object translations, but may be sensitive to scale and rotation changes due to the rigid coordinate quantization. To regain the scale and the rotation invariance, Shen et al. [111] quantize the angle and scale of the query region after applying several transformations. A drawback of [111] is that query time and memory cost are both increased. To enable efficient voting and alleviate quantization artifacts, Hough pyramid matching (HPM) [112] distributes the matches over a hierarchical partition of the transformation space. HPM trades off between flexibility and accuracy and is very efficient. Quantization artifacts can also be reduced by allowing a single correspondence to vote for multiple bins [113]. HPM and [113] are much faster than RANSAC and can be viewed as extensions, in rotation and scale invariance, of the weak geometric consistency proposed along with Hamming Embedding [13]. In [114], a rough global estimate of orientation and scale changes is made by voting, which is used to verify the transformation obtained by the matched features. A recent method [115] combines the advantage of hypothesis-based methods such as RANSAC [12] and voting-based methods [112, 21, 113, 114]. Possible hypotheses are identified by voting and later verified and refined. This method inherits efficiency from voting and supports query expansion since it outputs an explicit transformation and a set of inliers.
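A simplified numpy sketch of such voting: each tentative correspondence votes in a quantized space of orientation and log-scale differences, and the strongest bin approximates the number of geometrically consistent matches. WGC as originally proposed votes on the two quantities separately; the joint 2-D variant and the bin counts here are only for illustration.

```python
import numpy as np

def wgc_score(matches, n_angle_bins=8, n_scale_bins=8):
    """matches: list of (d_angle, d_log_scale) pairs for tentative correspondences,
    i.e. orientation and log-scale differences between matched local features."""
    votes = np.zeros((n_angle_bins, n_scale_bins))
    for d_angle, d_log_scale in matches:
        a = int((d_angle % (2 * np.pi)) / (2 * np.pi) * n_angle_bins) % n_angle_bins
        s = int(np.clip((d_log_scale + 4) / 8 * n_scale_bins, 0, n_scale_bins - 1))
        votes[a, s] += 1
    # The dominant bin is taken as the geometric-consistency score of the image pair.
    return votes.max()
```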
BoW模型的一个常见问题是缺乏局部特征间的几何约束。几何验证可以用作多种场景下的关键预处理步骤,例如拓展查询、特征选择、数据库端的特征增强、大规模物体挖掘等。最著名的全局空间验证方法是RANSAC。它反复从采样的对应关系中估计仿射变换,并通过符合该变换的内点数量来验证。RANSAC在对排名靠前的图像子集进行重排序时很有效,但存在效率问题。因此,如何在基于SIFT的框架下高效、准确地结合空间信息得到了广泛研究。
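As a concrete illustration of RANSAC-based spatial verification for re-ranking, the following is a minimal sketch using OpenCV (assuming opencv-python >= 4.4 for `cv2.SIFT_create`). It counts RANSAC inliers among putative SIFT matches between the query and a candidate image; note that it estimates a homography rather than the affine model discussed above, and the ratio and reprojection thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def spatial_verification_score(img_query, img_candidate, ratio=0.8):
    """Re-ranking score = number of RANSAC inliers among putative SIFT matches."""
    sift = cv2.SIFT_create()
    kq, dq = sift.detectAndCompute(img_query, None)
    kc, dc = sift.detectAndCompute(img_candidate, None)
    if dq is None or dc is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(dq, dc, k=2)
    # Lowe's ratio test to keep distinctive putative correspondences.
    good = [m[0] for m in knn if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    if len(good) < 4:
        return 0
    src = np.float32([kq[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kc[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC repeatedly hypothesizes a transformation and keeps the one with the most inliers.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    return int(mask.sum()) if mask is not None else 0
```

Candidates in the shortlist can then be re-ranked by this inlier count, which is the expensive step that motivates the faster voting-based alternatives below.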
一个好的方法是挖掘局部特征间的空间上下文。例如,由单个视觉词汇组合生成视觉短语,以提供更严格的匹配标准。可以估计并聚合整幅图像中的视觉词汇共现,也有研究者挖掘局部邻域内的视觉词汇聚类。视觉短语还可以由相邻图像块、随机空间分割以及局部稳定区域(如MSER)构造而成。
另一种策略使用投票机制来检查几何一致性。在投票空间中,取值较大的容器(bin)更可能代表真实的变换。其中一项重要的工作是弱几何一致性(WGC),这种方法关注匹配特征在尺度和方向上的差异,并将该差异空间量化为若干容器。Hough投票被用来定位尺度或方向差异相近的对应子集。许多后来的研究工作可以看作是WGC的扩展。例如,Zhang等人的工作《Image retrieval with geometry-preserving visual phrases》可以被视为使用x、y偏移量代替尺度和方向的WGC方法。该方法具有目标平移不变性,但由于采用了刚性的坐标量化,可能对尺度和旋转变化敏感。为了重新获得尺度和旋转不变性,Shen等人在《Object retrieval and localization with spatially-constrained similarity measure and k-nn re-ranking》中提出在对查询区域施加多个变换后,量化其角度和尺度。该方法的一个缺点是查询时间和内存开销都会增加。为了实现高效投票并减轻量化损失,论文《Hough pyramid matching: Speeded-up geometry re-ranking for large scale image retrieval》提出了霍夫金字塔匹配(HPM)方法,将匹配结果分配到变换空间的分层划分中。HPM在灵活性和准确性之间取得了平衡,非常高效。还可以通过允许单个对应(匹配对)对多个容器进行投票来减少量化损失。HPM和这种方法在速度上都远快于RANSAC算法,同时也可以被看作是对和汉明嵌入(HE)一起提出的WGC在旋转和尺度不变性上的拓展。《Pairwise geometric matching for large-scale object retrieval》提出了一种基于投票的全局方向和尺度变化的粗略估计方法,用来检验由匹配特征得到的变换。《A vote-and-verify strategy for fast spatial verification in image retrieval》结合了基于假设的方法(如RANSAC)和基于投票的方法的优点:通过投票确定可能的假设,再进行验证和精细微调。该方法保有了投票方法的效率,同时由于它输出显式变换和一组内点,因此还支持查询扩展。
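The voting idea behind WGC can be sketched in a few lines. Below is a minimal, illustrative implementation (not the original) that votes on the orientation and log-scale differences of tentative correspondences in a joint 2-D histogram; the original WGC votes on the two quantities separately, and the bin counts here are assumptions.

```python
import numpy as np

def wgc_score(matches, n_angle_bins=8, n_scale_bins=8):
    """Weak-geometric-consistency-style voting on (orientation, log-scale) differences.

    matches: iterable of ((angle_q, scale_q), (angle_db, scale_db)) for tentative
    correspondences, angles in radians, scales > 0.
    Returns the size of the largest voting bin: geometrically consistent matches pile
    into one bin, while random matches spread over many bins."""
    votes = np.zeros((n_angle_bins, n_scale_bins), dtype=np.int32)
    for (aq, sq), (ad, sd) in matches:
        d_angle = (ad - aq) % (2 * np.pi)          # orientation difference in [0, 2*pi)
        d_scale = np.log2(sd / sq)                 # scale difference in octaves
        ai = int(d_angle / (2 * np.pi) * n_angle_bins) % n_angle_bins
        si = int(np.clip(d_scale + n_scale_bins / 2, 0, n_scale_bins - 1))  # bins centered at 0
        votes[ai, si] += 1
    return int(votes.max())
```

The returned bin count can directly replace (or weight) the raw number of matched features when scoring a candidate image, which is what makes voting so much cheaper than hypothesis testing with RANSAC.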
Query Expansion 拓展查询
As a post-processing step, query expansion (QE) significantly improves the retrieval accuracy. In a nutshell, a number of top-ranked images from the original rank list are employed to issue a new query which is in turn used to obtain a new rank list. QE allows additional discriminative features to be added to the original query, thus improving recall.
In instance retrieval, Chum et al. [100] are the first to exploit this idea. They propose average query expansion (AQE), which averages the features of the top-ranked images to issue the new query. Usually, spatial verification [12] is employed for re-ranking and for obtaining the ROIs from which the local features undergo average pooling. AQE is used by many later works [17, 10, 24] as a standard tool. The recursive AQE and the scale-band recursive QE are effective improvements but incur more computational cost [100]. Four years later, Chum et al. [101] improve QE from the perspectives of learning background confusers, expanding the query region and incremental spatial verification. In [33], a linear SVM is trained online using the top-ranked and bottom-ranked images as positive and negative training samples, respectively. The learned weight vector is used to compute the average query. Other important extensions include "hello neighbor" based on reciprocal neighbors [116], QE with rank-based weighting [111], Hamming QE [89] (see Section 3.5), etc.
作为一种后处理步骤,拓展查询(QE)能显著提高检索准确度。简单地说,QE利用原始排名列表中若干排名靠前的图像构造一个新的查询,再用新的查询获得新的排名列表。QE可以向原始查询中加入额外的有区分度的特征,从而提高召回率。
在实例检索任务中,Chum等人最早利用了这一思想。他们提出了平均拓展查询(AQE)方法,将排名靠前的图像的特征取平均来构造新的查询。通常先使用空间验证进行重排序并获得感兴趣区域(ROI),再对其中的局部特征进行平均池化。AQE被后来许多工作作为标准工具来使用。递归AQE和尺度带(scale-band)递归QE是对AQE的有效改进,但计算成本更高。四年后,Chum等人从学习背景混淆物、扩展查询区域和增量式空间验证的角度改进了QE。《Three things everyone should know to improve object retrieval》分别使用排名最靠前和最靠后的图片作为正、负训练样本,在线训练一个线性支持向量机,学习到的权重向量用于计算平均查询。其他重要的QE扩展包括基于互近邻思想的"hello neighbor"算法、基于排序加权的QE算法、汉明QE算法等。
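The basic AQE loop is simple enough to show in full. Below is a minimal sketch operating on generic L2-normalized global descriptors (the original AQE averages BoW vectors of spatially verified regions); the descriptor dimensionality and `top_k` value are illustrative assumptions.

```python
import numpy as np

def l2norm(x):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

def average_query_expansion(query_vec, db_vecs, top_k=10):
    """AQE sketch: issue the query, average it with its top-k neighbours, then re-query.

    query_vec: (d,) descriptor of the query image; db_vecs: (n, d) database descriptors.
    Returns the new rank list obtained with the expanded query."""
    q = l2norm(query_vec)
    db = l2norm(db_vecs)
    sims = db @ q                                            # initial ranking (cosine similarity)
    top = np.argsort(-sims)[:top_k]
    expanded = l2norm(np.vstack([q, db[top]]).mean(axis=0))  # averaged (expanded) query
    return np.argsort(-(db @ expanded))                      # new rank list

# Usage with random vectors standing in for image descriptors.
db_vecs = np.random.randn(100, 128)
query_vec = np.random.randn(128)
print(average_query_expansion(query_vec, db_vecs)[:5])
```

In practice the top-k set is first filtered by spatial verification, so that only true positives contribute to the expanded query and recall improves without drifting.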
Small Object Retrieval 小目标检索
Retrieving objects that cover a small portion of images is a challenging task due to 1) the few detected local features and 2) the large amount of background noise. The Instance Search task in the TRECVID campaign [117] and the task of logo retrieval are important venues/applications for this task.
Generally speaking, both TRECVID and logo retrieval can be tackled with similar pipelines. For keypoint-based methods, the spatial context among the local features is important for discriminating target objects from others, especially in the case of rigid objects. Examples include [118, 119, 120]. Other effective methods include burstiness handling [77] (discussed in Section 3.4.3) and considering the different inlier ratios between the query and target objects [121]. In the second type of methods, effective region proposals [122] or multi-scale image patches [123] can be used as object region candidates. In [123], a recent state-of-the-art method, a regional diffusion mechanism based on neighborhood graphs is proposed to further improve the recall of small objects.
由于1)检测到的局部特征数量少,2)背景噪声过大,检索仅占据图像一小部分的目标是一项具有挑战性的任务。TRECVID竞赛中的实例检索(Instance Search)任务和logo检索任务是小目标检索的重要竞赛/应用。
一般来说,TRECVID任务和logo检索都可以用相似的流程来处理。对于基于关键点的检索方法,局部特征之间的空间上下文对于将目标物体与其他物体区分开来至关重要,尤其是对刚性物体而言。其他有效的方法包括突发性(burstiness)处理、考虑查询对象和目标对象之间不同的内点比例等。在第二类方法中,有效的候选区域(region proposals)或多尺度图像块可用作候选目标区域。《Efficient diffusion on region manifolds: Recovering small objects with compact cnn representations》提出了一种基于邻域图的区域扩散机制,进一步提高了小目标的查全率,达到了当时最高水平。
CNN-based Image Retrieval 基于CNN的图像检索系统
CNN-based retrieval methods have constantly been proposed in recent years and are gradually replacing the hand-crafted local detectors and descriptors. In this survey, CNN-based methods are classified into three categories: using pre-trained CNN models, using fine-tuned CNN models and hybrid methods. The first two categories compute the global feature with a single network pass, and the hybrid methods may require multiple network passes (see Fig. 2).
基于CNN的图像检索方法近年来不断被提出,并且在逐渐取代基于手工设计的检测器和描述符的方法。在这篇综述中,基于CNN的方法被分为三类:使用预训练的CNN模型、使用微调的CNN模型以及混合方法。前两类方法通过单次网络前向传递来提取全局特征,混合方法则可能需要多次网络前向传递(如图2所示)。
Retrieval Using Pre-trained CNN Models 使用预训练CNN模型的图像检索系统
This type of method is efficient in feature computation due to the single-pass mode. Given the transfer nature, its success lies in the feature extraction and encoding steps. We will first describe some commonly used datasets and networks for pre-training, and then the feature computation process.
由于采用单次前向传递模式,这类方法在特征计算上非常高效。考虑到模型迁移的特性,其成功与否取决于特征提取和编码步骤。我们将首先介绍一些常用的预训练数据集和网络,然后介绍特征计算过程。
Pre-trained CNN Models 预训练的CNN模型
Popular CNN architectures. Several CNN models serve as good choices for extracting features, including AlexNet [6], VGGNet [124], GoogleNet [125] and ResNet [126], which are listed in Table II. Briefly, a CNN can be viewed as a set of non-linear functions and is composed of a number of layers such as convolution, pooling, non-linearities, etc. CNN has a hierarchical structure. From bottom to top layers, the image undergoes convolution with filters, and the receptive field of these image filters increases. Filters in the same layer have the same size but different parameters. AlexNet was proposed the earliest among these networks; it has five convolutional layers and three fully connected (FC) layers. It has 96 filters of size $11\times11\times3$ in the first layer and 256 filters of size $3\times3\times192$ in the 5th layer. Zeiler et al. observe that the filters are sensitive to certain visual patterns and that these patterns evolve from low-level bars in bottom layers to high-level objects in top layers. For low-level and simple visual stimuli, the CNN filters act like the detectors in local hand-crafted features, but for high-level and complex stimuli, the CNN filters have distinct characteristics that depart from SIFT-like detectors. AlexNet has been shown to be outperformed by newer networks such as VGGNet, which has the largest number of parameters. GoogleNet and ResNet won the ILSVRC 2014 and 2015 challenges, respectively, showing that CNNs are more effective with more layers. A full review of these networks is beyond the scope of this paper, and we refer readers to the original papers for details.
流行的CNN网络结构。 AlexNet、VGGNet、GoogleNet以及ResNet这几个CNN网络都适合用于特征提取,详见表2。简单来说,CNN网络可以视为一系列非线性函数的集合,它由卷积、池化、非线性激活等多个层组成。CNN是一个分层次的结构。从网络的底层到顶层,图像不断与滤波器进行卷积,同时这些图像滤波器的感受野逐层增大。同一层的滤波器尺寸相同但参数不同。AlexNet是这些网络中最早被提出的,它有五个卷积层和三个全连接(FC)层。它的第一层有96个大小为11×11×3的滤波器,第五层有256个大小为3×3×192的滤波器。Zeiler等人观察到滤波器对某些视觉模式十分敏感,这些模式从底层的低级条纹演变到顶层的高级物体。对于低层次和简单的视觉刺激,CNN滤波器的作用类似于手工设计的局部特征中的检测器;但对于高层次和复杂的刺激,CNN滤波器则具有不同于SIFT类检测器的特质。AlexNet已被证明不如更新的网络,例如参数量最大的VGGNet。GoogleNet和ResNet分别赢得了ILSVRC 2014和2015的挑战,表明层数更多的CNN网络更加有效。对这些网络的全面回顾超出了本文的范围,我们建议读者参阅《Imagenet classification with deep convolutional neural networks》、《Return of the devil in the details: Delving deep into convolutional nets》和《Very deep convolutional networks for large-scale image recognition》中的细节。
Datasets for pre-training. Several large-scale recognition datasets are used for CNN pre-training. Among them, the ImageNet dataset [23] is most commonly used. It contains 1.2 million images of 1,000 semantic classes and is usually thought of as being generic. Another data source for pre-training is the Places-205 dataset [129], which is twice as large as ImageNet but has five times fewer classes. It is a scene-centric dataset depicting various indoor and outdoor scenes. A hybrid dataset combining the Places-205 and the ImageNet datasets has also been used for pre-training [129]. The resulting HybridNet is evaluated in [130, 131, 125, 126] for instance retrieval.
用于预训练模型的数据集。 一些大规模的识别数据集被用于CNN网络的预训练。其中,ImageNet数据集最为常用。它包含1000个语义类别的120万张图像,通常被认为具有普适性。用于预训练的另一个数据集是Places-205,它的数据规模是ImageNet的两倍,但类别数量只有ImageNet的五分之一。它是一个以场景为中心的数据集,描绘了各种室内和室外场景。在《Learning deep features for scene recognition using places database》中,混合了ImageNet和Places-205的数据集也被用于模型的预训练。由此得到的HybridNet在多个工作(如《Factors of transferability for a generic convnet representation》和《A practical guide to cnns and fisher vectors for image instance retrieval》等)中被用于实例检索任务的评估。
models | size | # layers | training set | used in
---|---|---|---|---
OverFeat [132] | 144M | 6+3 | ImageNet | [7]
AlexNet [6] | 60M | 5+3 | ImageNet | [22, 133]
PlacesNet [129] | | | Places | [130, 131]
HybridNet [129] | | | ImageNet+Places | [130, 131]
VGGNet [124] | 138M | 13+3 | ImageNet | [10]
GoogleNet [125] | 11M | 22 | ImageNet | [9]
ResNet [126] | 44.6M | 101 | ImageNet | n.a.

TABLE II: Pre-trained CNN models that can be used.
The transfer issue. Comprehensive evaluations of various CNNs on instance retrieval have been conducted in several recent works [130, 131, 134]. The transfer effect is the primary concern. It is considered in [130] that instance retrieval, as a target task, lies farthest from the source task, i.e., ImageNet classification. Studies reveal some critical insights into the transfer process. First, during model transfer, features extracted from different layers exhibit different retrieval performance. Experiments confirm that the top layers may exhibit lower generalization ability than the layers below them. For example, for AlexNet pre-trained on ImageNet, it is shown that FC6, FC7, and FC8 are in descending order regarding retrieval accuracy [130]. It is also shown in [134, 10] that the pool5 feature of AlexNet and VGGNet is even superior to FC6 when proper encoding techniques are employed. Second, the source training set affects retrieval accuracy on different target datasets. For example, Azizpour et al. [130] report that HybridNet yields the best performance on Holidays after PCA. They also observe that AlexNet pre-trained on ImageNet is superior to PlacesNet and HybridNet on the Ukbench dataset [11], which contains common objects instead of architectures or scenes. So the similarity between the source and target data plays a critical role in instance retrieval when using a pre-trained CNN model.
迁移问题。 最近的一些工作综合评估了各种CNN网络在实例检索任务中的表现,其中模型迁移效应是大家最关心的问题。《Factors of transferability for a generic convnet representation》认为实例检索作为目标任务,是距离源任务(即ImageNet分类)最远的任务。这些研究揭示了迁移过程中的一些关键结论。首先,在模型迁移过程中,从不同层提取的特征表现出不同的检索性能。实验表明顶层的泛化能力可能低于其下面的层。例如,对于在ImageNet上预训练的AlexNet,FC6、FC7和FC8的检索精度依次递减。《Particular object retrieval with integral max-pooling of cnn activations》和《Good practice in cnn feature transfer》也指出,当使用适当的编码技术时,AlexNet和VGGNet的pool5层特征甚至优于FC6层特征。其次,源训练集会影响模型在不同目标数据集上的检索准确率。例如,Azizpour等人指出,经过PCA之后,HybridNet在Holidays数据集上取得了最佳性能。他们同样发现,在Ukbench这种包含常见物体而非建筑或场景的数据集上,在ImageNet上预训练的AlexNet要好于PlacesNet和HybridNet(译者注:AlexNet、PlacesNet和HybridNet预训练使用的训练集不同)。因此,当使用预训练CNN模型时,源数据与目标数据的相似度在实例检索中起着至关重要的作用。
Feature Extraction 特征提取
FC descriptors. The most straightforward idea is to extract the descriptor from the fully-connected (FC) layer of the network, e.g., the 4,096-dim FC6 or FC7 descriptor in AlexNet. The FC descriptor is generated after layers of convolutions with the input image, has a global receptive field, and thus can be viewed as a global feature. It yields fair retrieval accuracy under Euclidean distance and can be improved with power normalization.

Intermediate local features. Many recent retrieval methods focus on local descriptors in the intermediate layers. In these methods, lower-level convolutional filters (kernels) are used to detect local visual patterns. Viewed as local detectors, these filters have a smaller receptive field and are densely applied on the entire image. Compared with the global FC feature, local detectors are more robust to image transformations such as truncation and occlusion, in ways that are similar to the local invariant detectors (see the section on local feature extraction).
FC描述符。 最直接的想法就是从网络的全连接层(FC layer)提取描述符,例如AlexNet中4096维的FC6或FC7描述符。FC描述符是输入图像经过多层卷积之后生成的,具有全局感受野,因此可以被视为全局特征。它在欧几里德距离下可以取得尚可的检索精度,并且可以通过幂归一化(power normalization)进一步提升。
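As an illustration of the FC-descriptor pipeline, here is a minimal PyTorch/torchvision sketch (not from the survey) that extracts the 4,096-dim FC6 activation from a pre-trained AlexNet and applies power normalization followed by L2 normalization. It assumes torchvision >= 0.13 (for the `AlexNet_Weights` API); the exponent `alpha=0.5` is an illustrative choice.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained AlexNet; truncate the classifier so the output is the 4096-dim FC6 activation.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
fc6_extractor = torch.nn.Sequential(
    alexnet.features, alexnet.avgpool, torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:2],   # Dropout + Linear(9216 -> 4096) = FC6
).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fc_descriptor(path, alpha=0.5):
    """Global FC6 descriptor with power (alpha) normalization, then L2 normalization."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        f = fc6_extractor(x).squeeze(0)
    f = torch.sign(f) * torch.abs(f) ** alpha   # power normalization damps bursty activations
    return f / (f.norm() + 1e-12)               # unit norm for Euclidean / cosine search
```

Database images and queries processed this way can be compared directly with Euclidean distance, which is the simplest retrieval setup discussed above.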
中间局部特征。 许多最新的检索方法专注于研究中间层的描述符。在这类方法中,较低层的卷积核被用于检测局部视觉模式。作为局部检测器,这些滤波器具有较小的感受野,并密集地作用于整张图像。与全局的FC特征相比,局部检测器对诸如截断和遮挡的图像变换更鲁棒,其方式类似于局部不变检测器(见局部特征提取一节)。
Local descriptors are tightly coupled with these intermediate local detectors, i.e., they are the responses of the input image to these convolution operations. In other words, after the convolutions, the resulting activation maps can be viewed as a feature ensemble, which is called the column feature in this survey. For example, in AlexNet there are $n=96$ detectors (convolutional filters) in the 1st convolutional layer. These filters produce $n=96$ heat maps of size $27\times27$ (after max pooling). Each pixel in the maps has a receptive field of $19\times19$ and records the response of the image w.r.t. the corresponding filter. The column feature is therefore of size $1\times1\times96$ (see the pipeline figure) and can be viewed as a description of a certain patch in the original image. Each dimension of this descriptor denotes the level of activation of the corresponding detector and resembles the SIFT descriptor to some extent. The column feature initially appears in the work of Razavian et al., who first do max-pooling over regularly partitioned windows on the feature maps and then concatenate them across all filter responses, yielding column-like features. In later work, column features from multiple layers of the network are concatenated, forming the hypercolumn feature.
局部描述符与这些中间局部检测器紧密耦合,也就是说,它们是输入图像对这些卷积运算的响应。换句话说,卷积运算后得到的激活图可以看作是特征的集合,本综述将其称为"列特征"(column feature)。例如,AlexNet的第一个卷积层中有$n=96$个检测器(卷积滤波器),这些滤波器产生$n=96$张大小为$27\times27$的热力图(在最大池化之后)。热力图中的每个像素具有$19\times19$的感受野,并记录了图像对相应滤波器的响应。因此列特征的大小为$1\times1\times96$,它可以看作是对原始图像中某个图像块的描述。该描述符的每个维度表示相应检测器的激活程度,在某种程度上类似于SIFT描述符。列特征最早出现在《Visual instance retrieval with deep convolutional networks》中,Razavian等人首先在特征图上规则划分的窗口内进行最大池化,然后将所有滤波器响应的结果连接起来,得到类似列的特征。在《Hypercolumns for object segmentation and fine-grained localization》中,来自网络多个层的列特征被连接起来,形成"超列"(hypercolumn)特征。
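The notion of a "column" per spatial location is easy to realize in code. Below is a minimal PyTorch sketch (not from the survey) that extracts the activation map of AlexNet's last convolutional block and reshapes it into a set of column descriptors, one per spatial location; the survey's example above uses the first conv layer (96 filters, 27x27 maps), while this sketch uses the final layer (256 filters) purely for brevity, and the L2 normalization is an assumption.

```python
import torch
import torchvision.models as models

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

def column_features(image_tensor):
    """Return the set of column features from the last convolutional block.

    image_tensor: (1, 3, H, W) preprocessed image.
    Output: (H'*W', C) tensor where each row is the C-dim activation column at one
    spatial location of the final feature map (C = 256 for AlexNet)."""
    with torch.no_grad():
        fmap = alexnet.features(image_tensor)            # (1, C, H', W')
    c = fmap.shape[1]
    cols = fmap.squeeze(0).reshape(c, -1).t()            # one local descriptor per location
    return torch.nn.functional.normalize(cols, dim=1)    # L2-normalize each column

# Each row can then be aggregated (encoded or pooled) like a dense local descriptor.
x = torch.randn(1, 3, 224, 224)                          # stands in for a preprocessed image
print(column_features(x).shape)                          # e.g. torch.Size([36, 256])
```

These per-location descriptors are exactly what the encoding and pooling strategies in the next subsection aggregate into a single global vector.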
Feature Encoding and Pooling 特征编码与池化
When column features are extracted, an image is represented by a set of descriptors. To aggregate these descriptors into a global representation, currently two strategies are adopted: encoding and direct pooling (see the pipeline figure).
Encoding A set of column features resembles a set of SIFT features. So s