[论文翻译]大规模学习短语的密集表示


原文地址:https://arxiv.org/pdf/2012.12624v3


Learning Dense Representations of Phrases at Scale

大规模学习短语的密集表示

Abstract

摘要

Open-domain question answering can be reformulated as a phrase retrieval problem, without the need for processing documents on-demand during inference (Seo et al., 2019). However, current phrase retrieval models heavily depend on sparse representations and still underperform retriever-reader approaches. In this work, we show for the first time that we can learn dense representations of phrases alone that achieve much stronger performance in open-domain QA. We present an effective method to learn phrase representations from the supervision of reading comprehension tasks, coupled with novel negative sampling methods. We also propose a query-side fine-tuning strategy, which can support transfer learning and reduce the discrepancy between training and inference. On five popular open-domain QA datasets, our model DensePhrases improves over previous phrase retrieval models by 15%-25% absolute accuracy and matches the performance of state-of-the-art retriever-reader models. Our model is easy to parallelize due to pure dense representations and processes more than 10 questions per second on CPUs. Finally, we directly use our pre-indexed dense phrase representations for two slot filling tasks, showing the promise of utilizing DensePhrases as a dense knowledge base for downstream tasks.

开放域问答可以重新定义为短语检索问题,无需在推理时实时处理文档 (Seo et al., 2019) 。然而当前短语检索模型严重依赖稀疏表示,性能仍逊于检索器-阅读器架构。本研究首次证明仅通过短语的稠密表示 (dense representation) 就能在开放域问答中实现更强性能。我们提出从阅读理解任务监督中学习短语表示的有效方法,配合创新的负采样策略,并设计了查询端微调方案以支持迁移学习并减少训练与推理的差异。在五个主流开放域问答数据集上,我们的DensePhrases模型将短语检索性能绝对值提升15%-25%,达到最先进检索器-阅读器模型的水平。由于采用纯稠密表示,模型易于并行化,在CPU上每秒可处理超过10个问题。最后,我们直接将预索引的稠密短语表示应用于两个槽填充任务,证明DensePhrases作为稠密知识库支持下游应用的潜力。

1 Introduction

1 引言

Open-domain question answering (QA) aims to provide answers to natural-language questions using a large text corpus (Voorhees et al., 1999; Ferrucci et al., 2010; Chen and Yih, 2020). While a dominating approach is a two-stage retriever-reader approach (Chen et al., 2017; Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020), we focus on a recent new paradigm solely based on phrase retrieval (Seo et al., 2019; Lee et al., 2020). Phrase retrieval highlights the use of phrase representations and finds answers purely based on the similarity search in the vector space of phrases.2 Without relying on an expensive reader model for processing text passages, it has demonstrated great runtime efficiency at inference time.

开放域问答 (Open-domain QA) 旨在通过大型文本语料库回答自然语言问题 (Voorhees et al., 1999; Ferrucci et al., 2010; Chen and Yih, 2020)。虽然主流方法是两阶段的检索-阅读器架构 (Chen et al., 2017; Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020),但我们专注于近期完全基于短语检索的新范式 (Seo et al., 2019; Lee et al., 2020)。短语检索强调使用短语表征,并完全基于短语向量空间的相似性搜索来寻找答案。该方法无需依赖处理文本段的昂贵阅读器模型,在推理时展现出极高的运行时效率。

Despite great promise, it remains a formidable challenge to build vector representations for every single phrase in a large corpus. Since phrase representations are decomposed from question representations, they are inherently less expressive than cross-attention models (Devlin et al., 2019). Moreover, the approach requires retrieving answers correctly out of billions of phrases (e.g., $6\times 10^{10}$ phrases in English Wikipedia), making the scale of the learning problem difficult. Consequently, existing approaches heavily rely on sparse representations for locating relevant documents and paragraphs while still falling behind retriever-reader models (Seo et al., 2019; Lee et al., 2020).

尽管前景广阔,为海量语料库中的每个短语构建向量表示仍是一项艰巨挑战。由于短语表示是从问题表示中分解而来,其表达能力天生弱于交叉注意力模型 (Devlin et al., 2019) 。此外,该方法需要从数十亿短语中准确检索答案 (例如英文维基百科中的 $6\times 10^{10}$ 个短语),导致学习问题的规模难以驾驭。因此,现有方法严重依赖稀疏表示来定位相关文档和段落,但性能仍落后于检索-阅读器模型 (Seo et al., 2019; Lee et al., 2020)。

In this work, we investigate whether we can build fully dense phrase representations at scale for open-domain QA. First, we aim to learn strong phrase representations from the supervision of reading comprehension tasks. We propose to use data augmentation and knowledge distillation to learn better phrase representations within a single passage. We then adopt negative sampling strategies such as in-batch negatives (Henderson et al., 2017; Karpukhin et al., 2020), to better discriminate the phrases at a larger scale. Here, we present a novel method called pre-batch negatives, which leverages preceding mini-batches as negative examples to compensate for the need of large-batch training. Lastly, we present a query-side fine-tuning strategy that drastically improves phrase retrieval performance and allows for transfer learning to new domains, without re-building billions of phrase representations.

在本研究中,我们探讨了能否为开放域问答构建大规模全稠密短语表征。首先,我们致力于从阅读理解任务的监督中学习强表征短语。我们提出使用数据增强和知识蒸馏技术,在单篇文章内学习更优的短语表征。随后采用负采样策略(如批次内负例)[Henderson et al., 2017; Karpukhin et al., 2020],以更大规模区分短语。本文创新性地提出预批次负例方法,利用前序小批次作为负样本,弥补大批次训练的需求。最后,我们提出查询端微调策略,显著提升短语检索性能,并实现跨领域迁移学习而无需重建数十亿短语表征。

Table 1: Retriever-reader and phrase retrieval approaches for open-domain QA. The retriever-reader approach retrieves a small number of relevant documents or passages from which the answers are extracted. The phrase retrieval approach retrieves an answer out of billions of phrase representations pre-indexed from the entire corpus. Appendix B provides detailed benchmark specification. The accuracy is measured on the test sets in the open-domain setting. NQ: Natural Questions.

表1: 开放域问答的检索器-阅读器与短语检索方法对比。检索器-阅读器方法从少量相关文档或段落中检索并提取答案,短语检索方法则从预索引的数十亿短语表征中直接检索答案。附录B提供详细的基准测试规范,准确率指标基于开放域设置的测试集测得。NQ: Natural Questions数据集。

| 类别 | 模型 | 稀疏? | 存储 (GB) | 查询数/秒 (GPU, CPU) | NQ (准确率) | SQuAD (准确率) |
|---|---|---|---|---|---|---|
| 检索器-阅读器 | DrQA (Chen et al., 2017) | ✓ | 26 | 1.8, 0.6 | - | 29.8 |
| | BERTserini (Yang et al., 2019) | ✓ | 21 | 2.0, 0.4 | - | 38.6 |
| | ORQA (Lee et al., 2019) | ✗ | 18 | 8.6, 1.2 | 33.3 | 20.2 |
| | REALM_News (Guu et al., 2020) | ✗ | 18 | 8.4, 1.2 | 40.4 | - |
| | DPR-multi (Karpukhin et al., 2020) | ✗ | 76 | 0.9, 0.04 | 41.5 | 24.1 |
| 短语检索 | DenSPI (Seo et al., 2019) | ✓ | 1,200 | 2.9, 2.4 | 8.1 | 36.2 |
| | DenSPI + Sparc (Lee et al., 2020) | ✓ | 1,547 | 2.1, 1.7 | 14.5 | 40.7 |
| | DensePhrases (Ours) | ✗ | 320 | 20.6, 13.6 | 40.9 | 38.0 |

As a result, all these improvements lead to a much stronger phrase retrieval model, without the use of any sparse representations (Table 1). We evaluate our model, DensePhrases, on five standard open-domain QA datasets and achieve much better accuracies than previous phrase retrieval models (Seo et al., 2019; Lee et al., 2020), with 15%-25% absolute improvement on most datasets. Our model also matches the performance of state-of-the-art retriever-reader models (Guu et al., 2020; Karpukhin et al., 2020). Due to the removal of sparse representations and careful design choices, we further reduce the storage footprint for the full English Wikipedia from 1.5TB to 320GB, as well as drastically improve the throughput.

因此,这些改进共同打造了一个更强大的短语检索模型,且无需使用任何稀疏表示 (表 1)。我们在五个标准开放域问答数据集上评估了Dense Phrases模型,其准确率显著超越此前所有短语检索模型 (Seo et al., 2019; Lee et al., 2020),在多数数据集上实现了15%-25%的绝对提升。我们的模型性能也可比肩最先进的检索-阅读器模型 (Guu et al., 2020; Karpukhin et al., 2020)。通过消除稀疏表示和精心的设计选择,我们将英文维基百科的存储占用从1.5TB压缩至320GB,同时大幅提升了吞吐量。

Finally, we envision that DensePhrases acts as a neural interface for retrieving phrase-level knowledge from a large text corpus. To showcase this possibility, we demonstrate that we can directly use DensePhrases for fact extraction, without rebuilding the phrase storage. With only fine-tuning the question encoder on a small number of subject-relation-object triples, we achieve state-of-the-art performance on two slot filling tasks (Petroni et al., 2021), using less than 5% of the training data.

最后,我们设想 DensePhrases 可作为从大型文本语料库中检索短语级知识的神经接口。为展示这一可能性,我们证明可以直接使用 DensePhrases 进行事实抽取,而无需重建短语存储库。仅需在少量主语-关系-宾语三元组上微调问题编码器,我们就在两个槽填充任务 (Petroni et al., 2021) 中实现了最先进的性能,且训练数据使用量不足 5%。

2 Background

2 背景

We first formulate the task of open-domain question answering for a set of $K$ documents $\mathcal{D} = \{d_1, \ldots, d_K\}$. We follow the recent work (Chen et al., 2017; Lee et al., 2019) and treat all of English Wikipedia as $\mathcal{D}$, hence $K \approx 5 \times 10^6$. However, most approaches, including ours, are generic and could be applied to other collections of documents.

我们首先针对一组 $K$ 篇文档 $\mathcal{D} = \{d_1, \ldots, d_K\}$ 定义开放域问答任务。遵循近期研究 (Chen et al., 2017; Lee et al., 2019) 的做法,我们将英文维基百科整体视为 $\mathcal{D}$,此时 $K \approx 5 \times 10^6$。不过大多数方法(包括我们的方法)都具有通用性,可应用于其他文档集合。

The task aims to provide an answer $\hat{a}$ for the input question $q$ based on $\mathcal{D}$. In this work, we focus on the extractive QA setting, where each answer is a segment of text, or a phrase, that can be found in $\mathcal{D}$. Denote the set of phrases in $\mathcal{D}$ as $S(\mathcal{D})$, where each phrase $s_k \in S(\mathcal{D})$ consists of contiguous words $w_{\mathrm{start}(k)}, \ldots, w_{\mathrm{end}(k)}$ in its document $d_{\mathrm{doc}(k)}$. In practice, we consider all the phrases up to $L = 20$ words in $\mathcal{D}$, and $S(\mathcal{D})$ comprises a large number of $6 \times 10^{10}$ phrases. An extractive QA system returns a phrase $\hat{s} = \operatorname{argmax}_{s \in S(\mathcal{D})} f(s \mid \mathcal{D}, q)$ where $f$ is a scoring function. The system finally maps $\hat{s}$ to an answer string $\hat{a}$: $\operatorname{TEXT}(\hat{s}) = \hat{a}$, and the evaluation is typically done by comparing the predicted answer $\hat{a}$ with a gold answer $a^*$.

该任务旨在基于文档集合 $\mathcal{D}$ 为输入问题 $q$ 提供答案 $\hat{a}$。在本研究中,我们聚焦于抽取式问答场景,其中每个答案均为可在 $\mathcal{D}$ 中找到的文本片段或短语。将 $\mathcal{D}$ 中的短语集合记为 $S(\mathcal{D})$,每个短语 $s_k \in S(\mathcal{D})$ 由其所属文档 $d_{\mathrm{doc}(k)}$ 中的连续词序列 $w_{\mathrm{start}(k)}, \ldots, w_{\mathrm{end}(k)}$ 构成。实践中,我们考虑 $\mathcal{D}$ 中所有长度不超过 $L = 20$ 个单词的短语,此时 $S(\mathcal{D})$ 包含约 $6 \times 10^{10}$ 个短语。抽取式问答系统返回短语 $\hat{s} = \operatorname{argmax}_{s \in S(\mathcal{D})} f(s \mid \mathcal{D}, q)$,其中 $f$ 为评分函数。系统最终将 $\hat{s}$ 映射为答案字符串 $\hat{a}$($\operatorname{TEXT}(\hat{s}) = \hat{a}$),并通过比较预测答案 $\hat{a}$ 与标准答案 $a^*$ 进行评估。

Although we focus on the extractive QA setting, recent works propose to use a generative model as the reader (Lewis et al., 2020; Izacard and Grave, 2021), or learn a closed-book QA model (Roberts et al., 2020), which directly predicts answers without using an external knowledge source. The extractive setting provides two advantages: first, the model directly locates the source of the answer, which is more interpretable, and second, phrase-level knowledge retrieval can be uniquely adapted to other NLP tasks as we show in §7.3.

虽然我们聚焦于抽取式问答(extractive QA)场景,但近期研究提出了使用生成式模型作为阅读器(Lewis等人,2020;Izacard和Grave,2021),或训练闭卷问答模型(Roberts等人,2020)的方案,这些模型无需借助外部知识源即可直接预测答案。抽取式设置具备两大优势:首先,模型能直接定位答案来源,更具可解释性;其次,短语级知识检索可独特地适配其他NLP任务,如我们在§7.3节所示。

Retriever-reader. A dominating paradigm in open-domain QA is the retriever-reader approach (Chen et al., 2017; Lee et al., 2019; Karpukhin et al., 2020), which leverages a first-stage document retriever $f_{\mathrm{retr}}$ and only reads the top $K' \ll K$ documents with a reader model $f_{\mathrm{read}}$. The scoring function $f(s \mid \mathcal{D}, q)$ is decomposed as:

检索器-阅读器。开放域问答中的主流范式是检索器-阅读器方法 (Chen et al., 2017; Lee et al., 2019; Karpukhin et al., 2020),该方法利用第一阶段的文档检索器 $f_{\mathrm{retr}}$,并仅通过阅读器模型 $f_{\mathrm{read}}$ 阅读前 $K' \ll K$ 篇文档。评分函数 $f(s \mid \mathcal{D}, q)$ 分解为:

$$f(s \mid \mathcal{D}, q) \approx f_{\mathrm{read}}\big(s \mid d_{j_1}, \ldots, d_{j_{K'}}, q\big), \qquad d_{j_1}, \ldots, d_{j_{K'}} = f_{\mathrm{retr}}(\mathcal{D}, q) \tag{1}$$

where $j_1, \ldots, j_{K'} \in \{1, \ldots, K\}$, and if $s \notin S(d_{j_1}, \ldots, d_{j_{K'}})$, the score will be 0. It can easily adapt to passages and sentences (Yang et al., 2019; Wang et al., 2019). However, this approach suffers from error propagation when incorrect documents are retrieved and can be slow as it usually requires running an expensive reader model on every retrieved document or passage at inference time.

其中 $j_1, \ldots, j_{K'} \in \{1, \ldots, K\}$,且若 $s \notin S(d_{j_1}, \ldots, d_{j_{K'}})$,则得分为 0。该方法可轻松适配段落和句子 (Yang et al., 2019; Wang et al., 2019)。但该方案在检索到错误文档时会出现错误传播;此外,由于推理阶段通常需要在每个检索到的文档或段落上运行昂贵的阅读器模型 (reader model),其速度也可能较慢。

Phrase retrieval. Seo et al. (2019) introduce the phrase retrieval approach that encodes phrase and question representations independently and performs similarity search over the phrase representations to find an answer. Their scoring function f is computed as follows:

短语检索。Seo等人(2019)提出了一种短语检索方法,该方法独立编码短语和问题表示,并通过在短语表示上执行相似性搜索来寻找答案。其评分函数f计算如下:
$$f(s \mid \mathcal{D}, q) = E_s(s, \mathcal{D})^{\top} E_q(q) \tag{2}$$

where $E_s$ and $E_q$ denote the phrase encoder and the question encoder respectively. As $E_s(\cdot)$ and $E_q(\cdot)$ representations are decomposable, it can support maximum inner product search (MIPS) and improve the efficiency of open-domain QA models. Previous approaches (Seo et al., 2019; Lee et al., 2020) leverage both dense and sparse vectors for phrase and question representations by taking their concatenation: $E_s(s, \mathcal{D}) = [E_{\mathrm{sparse}}(s, \mathcal{D}), E_{\mathrm{dense}}(s, \mathcal{D})]$. However, since the sparse vectors are difficult to parallelize with dense vectors, their method essentially conducts sparse and dense vector search separately. The goal of this work is to only use dense representations, i.e., $E_s(s, \mathcal{D}) = E_{\mathrm{dense}}(s, \mathcal{D})$, which can model $f(s \mid \mathcal{D}, q)$ solely with MIPS, as well as close the gap in performance.

其中 $E_s$ 与 $E_q$ 分别表示短语编码器和问题编码器。由于 $E_s(\cdot)$ 与 $E_q(\cdot)$ 的表示是可分解的,因此可以支持最大内积搜索 (MIPS) 并提升开放域问答模型的效率。先前的方法 (Seo et al., 2019; Lee et al., 2020) 通过拼接稀疏向量和稠密向量来构建短语和问题的表示:$E_s(s, \mathcal{D}) = [E_{\mathrm{sparse}}(s, \mathcal{D}), E_{\mathrm{dense}}(s, \mathcal{D})]$。然而由于稀疏向量难以与稠密向量并行处理,这些方法本质上需要分别进行稀疏向量和稠密向量的搜索。本文的目标是仅使用稠密表示,即 $E_s(s, \mathcal{D}) = E_{\mathrm{dense}}(s, \mathcal{D})$,从而仅通过 MIPS 建模 $f(s \mid \mathcal{D}, q)$,同时缩小性能差距。
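
为了更直观地体现上式"短语向量可离线预计算、在线只需一次内积检索"的可分解性,下面给出一个极简的 Python 示意(并非原论文实现;向量用随机数代替真实编码器输出,短语数量与维度均为假设):

```python
import numpy as np

# 示意:可分解的评分函数 f(s|D,q) = E_s(s,D)^T E_q(q)
d = 4                                    # 表示维度(假设值)
phrase_vecs = np.random.randn(1000, d)   # 预先索引好的短语表示 E_s(s, D)
query_vec = np.random.randn(d)           # 在线计算的问题表示 E_q(q)

scores = phrase_vecs @ query_vec         # 一次矩阵-向量乘法得到全部短语得分
best = int(np.argmax(scores))            # argmax_s f(s|D,q),即 MIPS 的精确形式
print(best, float(scores[best]))
```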

3 Dense Phrases

3 密集短语

3.1 Overview

3.1 概述

We introduce DensePhrases, a phrase retrieval model that is built on fully dense representations. Our goal is to learn a phrase encoder as well as a question encoder, so we can pre-index all the possible phrases in $\mathcal{D}$, and efficiently retrieve phrases for any question through MIPS at testing time. Figure 1 gives an overview of our approach.

我们推出 DensePhrases,这是一种基于全密集表示的短语检索模型。我们的目标是学习一个短语编码器和一个问题编码器,从而能够预索引 $\mathcal{D}$ 中所有可能的短语,并在测试时通过 MIPS 高效检索出任意问题对应的短语。图 1 给出了我们方法的整体概览。

Before we present the approach in detail, we first describe our base architecture below.

在详细介绍方法之前,我们先描述一下基础架构。

3.2 Base Architecture

3.2 基础架构

Our base architecture consists of a phrase encoder $E_s$ and a question encoder $E_q$. Given a passage $p = w_1, \ldots, w_m$, we denote all the phrases up to $L$ tokens as $S(p)$. Each phrase $s_k$ has start and end indices $\mathrm{start}(k)$ and $\mathrm{end}(k)$, and the gold phrase is $s^* \in S(p)$. Following previous work on phrase or span representations (Lee et al., 2017; Seo et al., 2018), we first apply a pre-trained language model $\mathcal{M}_p$ to obtain contextualized word representations for each passage token: $\mathbf{h}_1, \ldots, \mathbf{h}_m \in \mathbb{R}^d$. Then, we can represent each phrase $s_k \in S(p)$ as the concatenation of corresponding start and end vectors:

我们的基础架构包含一个短语编码器 $E_s$ 和一个问题编码器 $E_q$。给定段落 $p = w_1, \ldots, w_m$,我们将所有不超过 $L$ 个 token 的短语记为 $S(p)$。每个短语 $s_k$ 具有起始和结束索引 $\mathrm{start}(k)$ 与 $\mathrm{end}(k)$,黄金短语为 $s^* \in S(p)$。遵循先前关于短语或片段表示的研究 (Lee et al., 2017; Seo et al., 2018),我们首先应用预训练语言模型 $\mathcal{M}_p$ 获取段落中每个 token 的上下文词表示 $\mathbf{h}_1, \ldots, \mathbf{h}_m \in \mathbb{R}^d$。然后,我们可以将每个短语 $s_k \in S(p)$ 表示为对应起始和结束向量的拼接:
$$E_s(s_k, p) = \big[\mathbf{h}_{\mathrm{start}(k)}, \mathbf{h}_{\mathrm{end}(k)}\big] \in \mathbb{R}^{2d} \tag{3}$$

A great advantage of this representation is that we eventually only need to index and store all the word vectors (we use $\mathcal{W}(\mathcal{D})$ to denote all the words in $\mathcal{D}$), instead of all the phrases $S(\mathcal{D})$, which is at least an order of magnitude smaller.

这种表示方法的一大优势是,最终我们只需索引和存储所有词向量(用 $\mathcal{W}(\mathcal{D})$ 表示 $\mathcal{D}$ 中的所有词),而非所有短语 $S(\mathcal{D})$,前者的规模至少小一个数量级。

Similarly, we need to learn a question encoder $E_q(\cdot)$ that maps a question $q = \tilde{w}_1, \ldots, \tilde{w}_n$ to a vector of the same dimension as $E_s(\cdot)$. Since the start and end representations of phrases are produced by the same language model, we use another two different pre-trained encoders $\mathcal{M}_{q,\mathrm{start}}$ and $\mathcal{M}_{q,\mathrm{end}}$ to differentiate the start and end positions. We apply $\mathcal{M}_{q,\mathrm{start}}$ and $\mathcal{M}_{q,\mathrm{end}}$ on $q$ separately and obtain representations $\mathbf{q}^{\mathrm{start}}$ and $\mathbf{q}^{\mathrm{end}}$, taken from the [CLS] token representations respectively. Finally, $E_q(\cdot)$ simply takes their concatenation:

同样,我们需要学习一个问题编码器 $E_q(\cdot)$,将问题 $q = \tilde{w}_1, \ldots, \tilde{w}_n$ 映射为与 $E_s(\cdot)$ 同维度的向量。由于短语的起始和结束表示由同一个语言模型产生,我们使用另外两个不同的预训练编码器 $\mathcal{M}_{q,\mathrm{start}}$ 与 $\mathcal{M}_{q,\mathrm{end}}$ 来区分起始和结束位置。我们将 $\mathcal{M}_{q,\mathrm{start}}$ 与 $\mathcal{M}_{q,\mathrm{end}}$ 分别作用于 $q$,并分别取其 [CLS] token 表示得到 $\mathbf{q}^{\mathrm{start}}$ 与 $\mathbf{q}^{\mathrm{end}}$。最后,$E_q(\cdot)$ 只需取它们的拼接:

Figure 1: An overview of Dense Phrases. (a) We learn dense phrase representations in a single passage (§4.1) along with in-batch and pre-batch negatives (§4.2,§4.3) . (b) With the top k retrieved phrase representations from the entire text corpus (§5), we further perform query-side fine-tuning to optimize the question encoder (§6). During inference, our model simply returns the top-1 prediction.

图 1: Dense Phrases 概述。(a) 我们在单段落中学习密集短语表示 (§4.1) 以及批次内和预批次负样本 (§4.2,§4.3)。(b) 通过从整个文本语料库中检索前 k 个短语表示 (§5),我们进一步执行查询端微调以优化问题编码器 (§6)。在推理阶段,我们的模型仅返回 top-1 预测结果。

$$E_q(q) = \big[\mathbf{q}^{\mathrm{start}}, \mathbf{q}^{\mathrm{end}}\big] \in \mathbb{R}^{2d} \tag{4}$$

Note that we use pre-trained language models to initialize $\mathcal{M}_p$, $\mathcal{M}_{q,\mathrm{start}}$, and $\mathcal{M}_{q,\mathrm{end}}$, and they are fine-tuned with the objectives that we will define later. In our pilot experiments, we found that SpanBERT (Joshi et al., 2020) leads to superior performance compared to BERT (Devlin et al., 2019). SpanBERT is designed to predict the information in the entire span from its two endpoints, therefore it is well suited for our phrase representations. In our final model, we use SpanBERT-base-cased as our base LMs for $E_s$ and $E_q$, and hence $d = 768$. See Table 5 for an ablation study.

请注意,我们使用预训练语言模型来初始化 $\mathcal{M}_p$、$\mathcal{M}_{q,\mathrm{start}}$ 和 $\mathcal{M}_{q,\mathrm{end}}$,并用后文定义的目标函数对它们进行微调。在初步实验中,我们发现 SpanBERT (Joshi et al., 2020) 的性能优于 BERT (Devlin et al., 2019)。SpanBERT 的设计目标是根据片段的两个端点预测整个片段的信息,因此非常适合我们的短语表示。在最终模型中,我们使用 SpanBERT-base-cased 作为 $E_s$ 和 $E_q$ 的基础语言模型,因此 $d = 768$。消融实验详见表 5。
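
作为对 §3.2 的补充,下面用一个简化的 PyTorch 片段示意短语表示与问题表示的构造方式(并非原论文代码;段落 token 表示与问题的 [CLS] 表示此处用随机张量代替 SpanBERT 的真实输出):

```python
import torch

d, m = 768, 50
h = torch.randn(m, d)        # M_p 给出的段落 token 表示 h_1..h_m(此处为随机数)
q_start = torch.randn(d)     # M_{q,start} 的 [CLS] 表示
q_end = torch.randn(d)       # M_{q,end} 的 [CLS] 表示

def phrase_rep(start_idx: int, end_idx: int) -> torch.Tensor:
    """E_s(s_k, p) = [h_start(k); h_end(k)],维度为 2d。"""
    return torch.cat([h[start_idx], h[end_idx]])

question_rep = torch.cat([q_start, q_end])   # E_q(q) = [q^start; q^end]
score = phrase_rep(3, 7) @ question_rep      # 与式 (2) 的内积打分形式一致
print(score.item())
```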

4 Learning Phrase Representations

4 学习短语表示

In this section, we start by learning dense phrase representations from the supervision of reading comprehension tasks, i.e., a single passage $p$ contains an answer $a^*$ to a question $q$. Our goal is to learn strong dense representations of phrases $s \in S(p)$, which can be retrieved by a dense representation of the question and serve as a direct answer (§4.1). Then, we introduce two different negative sampling methods (§4.2, §4.3), which encourage the phrase representations to be better discriminated at the full Wikipedia scale. See Figure 1 for an overview of DensePhrases.

在本节中,我们首先从阅读理解任务的监督中学习稠密短语表示,即单个段落 $p$ 包含问题 $q$ 的答案 $a^*$。我们的目标是为 $s \in S(p)$ 学习强健的稠密短语表示,使其可以通过问题的稠密表示检索出来并直接作为答案 (§4.1)。接着,我们介绍两种不同的负采样方法 (§4.2, §4.3),它们促使短语表示在完整维基百科规模上得到更好的区分。DensePhrases 的概述见图 1。

4.1 Single-passage Training

4.1 单段落训练

To learn phrase representations in a single passage along with question representations, we first maximize the log-likelihood of the start and end positions of the gold phrase $s^*$ where $\operatorname{TEXT}(s^*) = a^*$. The training loss for predicting the start position of a phrase given a question is computed as:

为了在单段落中学习短语表示与问题表示,我们首先最大化黄金短语 $s^*$(其中 $\operatorname{TEXT}(s^*) = a^*$)起止位置的对数似然。给定问题时,预测短语起始位置的训练损失计算如下:

$$\mathcal{L}_{\mathrm{start}} = -\log P_{\mathrm{start}}\big(\mathrm{start}(s^*)\big), \qquad P_{\mathrm{start}}(i) = \frac{\exp\!\big(\mathbf{h}_i^{\top}\mathbf{q}^{\mathrm{start}}\big)}{\sum_{j=1}^{m}\exp\!\big(\mathbf{h}_j^{\top}\mathbf{q}^{\mathrm{start}}\big)} \tag{5}$$

We can define $\mathcal{L}_{\mathrm{end}}$ in a similar way, and the final loss for the single-passage training is

我们可以用类似的方式定义 $\mathcal{L}_{\mathrm{end}}$,单段落训练的最终损失为

$$\mathcal{L}_{\mathrm{single}} = \mathcal{L}_{\mathrm{start}} + \mathcal{L}_{\mathrm{end}} \tag{6}$$

This essentially learns reading comprehension without any cross-attention between the passage and the question tokens, which fully decomposes phrase and question representations.

这本质上是在没有任何段落和问题token之间的交叉注意力的情况下学习阅读理解,从而完全分解了短语和问题的表示。
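
下面给出单段落训练损失的一种示意实现(与式 (5)(6) 的形式对应;起止两项直接相加等细节为假设,并非原论文代码):

```python
import torch
import torch.nn.functional as F

m, d = 50, 768
h = torch.randn(m, d, requires_grad=True)        # 段落 token 表示
q_start, q_end = torch.randn(d), torch.randn(d)  # 问题的起始/结束表示
gold_start, gold_end = 3, 7                      # 黄金短语 s* 的起止下标(假设)

start_logits = h @ q_start                       # z_i = h_i^T q^start,形状 [m]
end_logits = h @ q_end
loss_start = F.cross_entropy(start_logits.unsqueeze(0), torch.tensor([gold_start]))
loss_end = F.cross_entropy(end_logits.unsqueeze(0), torch.tensor([gold_end]))
loss_single = loss_start + loss_end              # 假设两项直接相加
loss_single.backward()
print(loss_single.item())
```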

Data augmentation Since the contextualized word representations $\mathbf{h}_1, \ldots, \mathbf{h}_m$ are encoded in a query-agnostic way, they are always inferior to query-dependent representations in cross-attention models (Devlin et al., 2019), where passages are fed along with the questions, concatenated by a special token such as [SEP]. We hypothesize that one key reason for the performance gap is that reading comprehension datasets only provide a few annotated questions in each passage, compared to the set of possible answer phrases. Learning from this supervision alone, it is not easy to differentiate similar phrases in one passage (e.g., $s^*$ = Charles, Prince of Wales and another $s$ = Prince George for a question $q$ = Who is next in line to be the monarch of England?).

数据增强
由于上下文词表示 $\mathbf{h}_1, \ldots, \mathbf{h}_m$ 以与查询无关的方式编码,其表达能力总是弱于交叉注意力模型 (Devlin et al., 2019) 中与查询相关的表示;在交叉注意力模型中,段落会与问题拼接(以 [SEP] 等特殊 token 连接)后一起输入。我们推测造成这一性能差距的一个关键原因是:相对于所有可能的答案短语,阅读理解数据集在每个段落中只标注了少量问题。仅凭这种监督信号,很难区分同一段落中的相似短语(例如,对于问题 $q$ = "Who is next in line to be the monarch of England?",$s^*$ = "Charles, Prince of Wales" 与另一短语 $s$ = "Prince George")。

Following this intuition, we propose to use a simple model to generate additional questions for data augmentation, based on a T5-large model (Raffel et al., 2020). To train the question generation model, we feed a passage $p$ with the gold answer $s^*$ highlighted by inserting surrounding special tags. Then, the model is trained to maximize the log-likelihood of the question words of $q$. After training, we extract all the named entities in each training passage as candidate answers and feed the passage $p$ with each candidate answer to generate questions. We keep the question-answer pairs only when a cross-attention reading comprehension model makes a correct prediction on the generated pair. The remaining generated QA pairs $\{(\bar{q}_1, \bar{s}_1), (\bar{q}_2, \bar{s}_2), \ldots, (\bar{q}_r, \bar{s}_r)\}$ are directly augmented to the original training set.

基于这一思路,我们提出使用基于 T5-large 的简单模型 (Raffel et al., 2020) 生成额外问题来实现数据增强。训练问题生成模型时,我们在段落 $p$ 中通过插入特殊标记来突出标注正确答案 $s^*$,然后训练模型最大化问题 $q$ 中各词的对数似然。训练完成后,我们从每个训练段落中提取所有命名实体作为候选答案,并将段落 $p$ 与每个候选答案输入模型生成问题。仅当交叉注意力阅读理解模型对生成的问答对做出正确预测时,我们才保留该问答对。最终保留的生成问答对 $\{(\bar{q}_1, \bar{s}_1), (\bar{q}_2, \bar{s}_2), \ldots, (\bar{q}_r, \bar{s}_r)\}$ 会被直接扩充至原始训练集。
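
下面的片段示意上述"生成-过滤"流程的骨架(generate_question 与 cross_attention_predict 均为假设的占位函数,实际应分别由 T5-large 问题生成模型和交叉注意力阅读理解模型实现):

```python
def generate_question(passage: str, answer: str) -> str:
    # 占位:实际由 T5-large 问题生成模型产生
    return f"Who is {answer}?"

def cross_attention_predict(passage: str, question: str) -> str:
    # 占位:实际由交叉注意力阅读理解模型(如 SpanBERT QA)给出答案
    return "Prince George"

passage = "Prince George is third in line to the British throne ..."
candidates = ["Prince George", "British throne"]   # 段落中抽取的命名实体作为候选答案

augmented = []
for ans in candidates:
    q = generate_question(passage, ans)
    if cross_attention_predict(passage, q) == ans:  # 预测正确才保留该问答对
        augmented.append((q, ans))
print(augmented)
```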

Distillation We also propose improving the phrase representations by distilling knowledge from a cross-attention model (Hinton et al., 2015). We minimize the Kullback–Leibler divergence between the probability distribution from our phrase encoder and that from a standard SpanBERT-base QA model. The loss is computed as follows:

蒸馏
我们还提出通过从交叉注意力模型 (Hinton et al., 2015) 中蒸馏知识来改进短语表示。我们最小化短语编码器的概率分布与标准 SpanBERT-base QA 模型的概率分布之间的 Kullback-Leibler 散度。损失计算如下:

$$\mathcal{L}_{\mathrm{distill}} = \mathrm{KL}\big(P^{c}_{\mathrm{start}} \,\big\|\, P_{\mathrm{start}}\big) + \mathrm{KL}\big(P^{c}_{\mathrm{end}} \,\big\|\, P_{\mathrm{end}}\big) \tag{7}$$

where $P_{\mathrm{start}}$ (and $P_{\mathrm{end}}$) is defined in Eq. (5), and $P^{c}_{\mathrm{start}}$ and $P^{c}_{\mathrm{end}}$ denote the probability distributions used to predict the start and end positions of answers in the cross-attention model.

其中 $P_{\mathrm{start}}$(及 $P_{\mathrm{end}}$)在式 (5) 中定义,$P^{c}_{\mathrm{start}}$ 与 $P^{c}_{\mathrm{end}}$ 表示交叉注意力模型中用于预测答案起止位置的概率分布。
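
下面给出蒸馏损失的一种示意实现(KL 散度的方向与归一化方式为假设,并非原论文代码;这里以交叉注意力教师模型的分布作为目标分布):

```python
import torch
import torch.nn.functional as F

m = 50
student_start_logits = torch.randn(m, requires_grad=True)  # 短语编码器的起始位置得分
teacher_start_logits = torch.randn(m)                      # 交叉注意力教师模型的得分

log_p_student = F.log_softmax(student_start_logits, dim=-1)
p_teacher = F.softmax(teacher_start_logits, dim=-1)
# F.kl_div 以 target(教师分布)为目标,计算 KL(P^c_start || P_start)
loss_distill_start = F.kl_div(log_p_student, p_teacher, reduction="sum")
loss_distill_start.backward()
print(loss_distill_start.item())
```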

4.2 In-batch Negatives

4.2 批次内负样本 (In-batch Negatives)

Eventually, we need to build phrase representations for billions of phrases. Therefore, a bigger challenge is to incorporate more phrases as negatives so the representations can be better discriminated at a larger scale. While Seo et al. (2019) simply sample two negative passages based on question similarity, we use in-batch negatives for our dense phrase representations, which has been shown to be effective in learning dense passage representations before (Karpukhin et al., 2020).

最终,我们需要为数十亿短语构建表征。因此,更大的挑战在于纳入更多负样本短语,以便在更大规模上更好地区分表征。虽然 Seo 等人 (2019) 仅基于问题相似性采样两个负样本段落,但我们采用批内负样本 (in-batch negatives) 来训练密集短语表征,该方法此前已被证明能有效学习密集段落表征 (Karpukhin 等人, 2020)。


Figure 2: Two types of negative samples for the first batch item ($\mathbf{q}^{\mathrm{start}}_1$) in a mini-batch of size $B=4$ and $C=3$. Note that the negative samples for the end representations ($\mathbf{q}^{\mathrm{end}}_i$) are obtained in a similar manner. See §4.2 and §4.3 for more details.

图 2: 在批次大小 $B=4$、$C=3$ 的小批量中,第一个批次项 ($\mathbf{q}^{\mathrm{start}}_1$) 的两种负样本类型。注意,结束表示 ($\mathbf{q}^{\mathrm{end}}_i$) 的负样本以类似方式获得。更多细节参见 §4.2 和 §4.3。

As shown in Figure 2 (a), for the $i$-th example in a mini-batch of size $B$, we denote the hidden representations of the gold start and end positions $\mathbf{h}_{\mathrm{start}(s^*)}$ and $\mathbf{h}_{\mathrm{end}(s^*)}$ as $\mathbf{g}_i^{\mathrm{start}}$ and $\mathbf{g}_i^{\mathrm{end}}$, as well as the question representation as $[\mathbf{q}_i^{\mathrm{start}}, \mathbf{q}_i^{\mathrm{end}}]$. Let $\mathbf{G}^{\mathrm{start}}, \mathbf{G}^{\mathrm{end}}, \mathbf{Q}^{\mathrm{start}}, \mathbf{Q}^{\mathrm{end}}$ be the $B \times d$ matrices where each row corresponds to $\mathbf{g}_i^{\mathrm{start}}, \mathbf{g}_i^{\mathrm{end}}, \mathbf{q}_i^{\mathrm{start}}, \mathbf{q}_i^{\mathrm{end}}$ respectively. Basically, we can treat all the gold phrases from other passages in the same mini-batch as negative examples. We compute $\mathbf{S}^{\mathrm{start}} = \mathbf{Q}^{\mathrm{start}}\mathbf{G}^{\mathrm{start}\top}$ and $\mathbf{S}^{\mathrm{end}} = \mathbf{Q}^{\mathrm{end}}\mathbf{G}^{\mathrm{end}\top}$, and the $i$-th row of $\mathbf{S}^{\mathrm{start}}$ and $\mathbf{S}^{\mathrm{end}}$ returns $B$ scores each, including a positive score and $B-1$ negative scores: $s_1^{\mathrm{start}}, \ldots, s_B^{\mathrm{start}}$ and $s_1^{\mathrm{end}}, \ldots, s_B^{\mathrm{end}}$. Similar to Eq. (5), we can compute the loss function for the $i$-th example as:

如图 2(a) 所示,对于大小为 $B$ 的小批量中的第 $i$ 个样本,我们将黄金起始和结束位置的隐表示 $\mathbf{h}_{\mathrm{start}(s^*)}$ 与 $\mathbf{h}_{\mathrm{end}(s^*)}$ 记为 $\mathbf{g}_i^{\mathrm{start}}$ 与 $\mathbf{g}_i^{\mathrm{end}}$,问题表示记为 $[\mathbf{q}_i^{\mathrm{start}}, \mathbf{q}_i^{\mathrm{end}}]$。设 $\mathbf{G}^{\mathrm{start}}, \mathbf{G}^{\mathrm{end}}, \mathbf{Q}^{\mathrm{start}}, \mathbf{Q}^{\mathrm{end}}$ 为 $B \times d$ 矩阵,其每一行分别对应 $\mathbf{g}_i^{\mathrm{start}}, \mathbf{g}_i^{\mathrm{end}}, \mathbf{q}_i^{\mathrm{start}}, \mathbf{q}_i^{\mathrm{end}}$。基本思路是:同一小批量中来自其他段落的所有黄金短语都可视为负样本。我们计算 $\mathbf{S}^{\mathrm{start}} = \mathbf{Q}^{\mathrm{start}}\mathbf{G}^{\mathrm{start}\top}$ 与 $\mathbf{S}^{\mathrm{end}} = \mathbf{Q}^{\mathrm{end}}\mathbf{G}^{\mathrm{end}\top}$,$\mathbf{S}^{\mathrm{start}}$ 与 $\mathbf{S}^{\mathrm{end}}$ 的第 $i$ 行各返回 $B$ 个得分,其中包含 1 个正例得分和 $B-1$ 个负例得分:$s_1^{\mathrm{start}}, \ldots, s_B^{\mathrm{start}}$ 与 $s_1^{\mathrm{end}}, \ldots, s_B^{\mathrm{end}}$。类似式 (5),第 $i$ 个样本的损失函数为:
$$\mathcal{L}_{\mathrm{neg}} = -\log\frac{\exp(s_i^{\mathrm{start}})}{\sum_{j=1}^{B}\exp(s_j^{\mathrm{start}})} - \log\frac{\exp(s_i^{\mathrm{end}})}{\sum_{j=1}^{B}\exp(s_j^{\mathrm{end}})} \tag{8}$$

We also attempted using non-gold phrases from other passages as negatives but did not find a meaningful improvement.

我们还尝试使用其他段落中的非黄金短语作为负样本,但未发现显著改进。
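
批次内负样本的打分矩阵与损失可以写成几行 PyTorch 代码,示意如下(维度与数值均为假设;对角线为正例得分,其余为负例得分):

```python
import torch
import torch.nn.functional as F

B, d = 4, 768
G_start = torch.randn(B, d, requires_grad=True)   # 各样本黄金短语的起始向量 g_i^start
Q_start = torch.randn(B, d)                        # 各样本的问题起始向量 q_i^start

S_start = Q_start @ G_start.T                      # S^start = Q^start G^start^T,形状 [B, B]
labels = torch.arange(B)                           # 第 i 个问题的正例是第 i 个黄金短语
loss_neg_start = F.cross_entropy(S_start, labels)  # 对每行做 softmax 负对数似然
loss_neg_start.backward()
print(loss_neg_start.item())
```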

4.3 Pre-batch Negatives

4.3 预批次负样本 (Pre-batch Negatives)

The in-batch negatives usually benefit from a large batch size (Karpukhin et al., 2020). However, it is challenging to further increase batch sizes, as they are bounded by the size of GPU memory. Next, we propose a novel negative sampling method called pre-batch negatives, which can effectively utilize the representations from the preceding $C$ mini-batches (Figure 2 (b)). In each iteration, we maintain a FIFO queue of $C$ mini-batches to cache phrase representations $\mathbf{G}^{\mathrm{start}}$ and $\mathbf{G}^{\mathrm{end}}$. The cached phrase representations are then used as negative samples for the next iteration, providing $B \times C$ additional negative samples in total.

批内负样本通常受益于较大的批次大小 (Karpukhin et al., 2020)。然而,由于受限于GPU内存容量,进一步增加批次大小具有挑战性。为此,我们提出了一种名为预批次负样本的新型负采样方法,该方法能有效利用前C个小批次的表征 (图2(b))。在每次迭代中,我们维护一个包含C个小批次的FIFO队列,用于缓存短语表征GstartGend。这些缓存的短语表征将作为下一次迭代的负样本,总共提供B×C个额外负样本。

These pre-batch negatives are used together with in-batch negatives and the training loss is the same as Eq. (8), except that the gradients are not backpropagated to the cached pre-batch negatives. After warming up the model with in-batch negatives, we simply shift from in-batch negatives ($B-1$ negatives) to in-batch and pre-batch negatives (hence a total number of $B \times C + B - 1$ negatives). For simplicity, we use $\mathcal{L}_{\mathrm{neg}}$ to denote the loss for both in-batch negatives and pre-batch negatives. Since we do not retain the computational graph for pre-batch negatives, the memory consumption of pre-batch negatives is much more manageable while allowing an increase in the number of negative samples.

这些预批次负样本与批次内负样本一起使用,训练损失与式 (8) 相同,只是梯度不会反向传播到缓存的预批次负样本。在用批次内负样本预热模型后,我们简单地从批次内负样本($B-1$ 个负样本)切换到批次内加预批次负样本(因此总共有 $B \times C + B - 1$ 个负样本)。为简化起见,我们使用 $\mathcal{L}_{\mathrm{neg}}$ 表示批次内负样本和预批次负样本的损失。由于我们不保留预批次负样本的计算图,预批次负样本的内存消耗更易管理,同时允许增加负样本数量。
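
下面用一个简化的训练循环示意预批次负样本的 FIFO 缓存逻辑(接口与细节为假设;关键点是缓存向量经过 detach(),不保留计算图、不参与反向传播):

```python
import collections
import torch
import torch.nn.functional as F

B, d, C = 4, 768, 3
queue = collections.deque(maxlen=C)                  # FIFO 队列,最多缓存 C 个批次

def step(G_start: torch.Tensor, Q_start: torch.Tensor) -> torch.Tensor:
    scores = Q_start @ G_start.T                     # 批次内得分 [B, B]
    if queue:
        cached = torch.cat(list(queue), dim=0)       # 缓存的负例表示,无梯度
        scores = torch.cat([scores, Q_start @ cached.T], dim=1)
    loss = F.cross_entropy(scores, torch.arange(B))  # 正例仍位于前 B 列的对角线
    queue.append(G_start.detach())                   # 缓存当前批次,供后续迭代使用
    return loss

for _ in range(5):                                   # 模拟若干次迭代
    loss = step(torch.randn(B, d, requires_grad=True), torch.randn(B, d))
print(loss.item())
```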

4.4 Training Objective

4.4 训练目标

Finally, we optimize all the three losses together, on both annotated reading comprehension examples and generated questions from §4.1 :

最后,我们在标注的阅读理解示例和 §4.1 生成的问题上同时优化所有三个损失函数:
$$\mathcal{L} = \lambda_1 \mathcal{L}_{\mathrm{single}} + \lambda_2 \mathcal{L}_{\mathrm{distill}} + \lambda_3 \mathcal{L}_{\mathrm{neg}} \tag{9}$$

where $\lambda_1, \lambda_2, \lambda_3$ determine the importance of each loss term. We found that $\lambda_1 = 1$, $\lambda_2 = 2$, and $\lambda_3 = 4$ work well in practice. See Table 5 and Table 6 for an ablation study of different components.

其中 λ1,λ2,λ3 决定各项损失的重要性。我们发现实践中 λ1=1λ2=2λ3=4 效果良好。不同组件的消融研究详见表5和表6。

5 Indexing and Search

5 索引与搜索

Indexing After training the phrase encoder $E_s$, we need to encode all the phrases $S(\mathcal{D})$ in the entire English Wikipedia $\mathcal{D}$ and store an index of the phrase dump. We segment each document $d_i \in \mathcal{D}$ into a set of natural paragraphs, from which we obtain token representations for each paragraph using $E_s(\cdot)$. Then, we build a phrase dump $\mathbf{H} = [\mathbf{h}_1, \ldots, \mathbf{h}_{|\mathcal{W}(\mathcal{D})|}] \in \mathbb{R}^{|\mathcal{W}(\mathcal{D})| \times d}$ by stacking the token representations from all the paragraphs in $\mathcal{D}$. Note that this process is computationally expensive and takes hundreds of GPU hours with a large disk footprint. To reduce the size of the phrase dump, we follow and modify several techniques introduced in Seo et al. (2019) (see Appendix E for details). After indexing, we can use two rows $i$ and $j$ of $\mathbf{H}$ to represent a dense phrase representation $[\mathbf{h}_i, \mathbf{h}_j]$. We use faiss (Johnson et al., 2017) for building a MIPS index of $\mathbf{H}$.

索引构建
训练完短语编码器 $E_s$ 后,我们需要对整个英文维基百科 $\mathcal{D}$ 中的所有短语 $S(\mathcal{D})$ 进行编码,并存储短语库 (phrase dump) 的索引。我们将每篇文档 $d_i \in \mathcal{D}$ 切分为若干自然段,并使用 $E_s(\cdot)$ 得到每个段落中各 token 的表示。然后,我们把 $\mathcal{D}$ 中所有段落的 token 表示堆叠起来,构建短语库 $\mathbf{H} = [\mathbf{h}_1, \ldots, \mathbf{h}_{|\mathcal{W}(\mathcal{D})|}] \in \mathbb{R}^{|\mathcal{W}(\mathcal{D})| \times d}$。注意该过程计算开销很大,需要数百 GPU 小时,并占用大量磁盘空间。为了缩减短语库的体积,我们沿用并改进了 Seo et al. (2019) 提出的若干技术(详见附录 E)。索引完成后,我们可以用 $\mathbf{H}$ 的第 $i$ 行和第 $j$ 行来表示一个稠密短语表示 $[\mathbf{h}_i, \mathbf{h}_j]$。我们使用 faiss (Johnson et al., 2017) 为 $\mathbf{H}$ 构建 MIPS 索引。

Search For a given question $q$, we can find the answer $\hat{s}$ as follows:

搜索
对于给定的问题 $q$,我们可以通过以下方式找到答案 $\hat{s}$:

$$\hat{s} = \operatorname*{argmax}_{s(i,j)} \; \mathbf{h}_i^{\top}\mathbf{q}^{\mathrm{start}} + \mathbf{h}_j^{\top}\mathbf{q}^{\mathrm{end}} \tag{10}$$

where $s(i,j)$ denotes a phrase with start and end indices $i$ and $j$ in the index $\mathbf{H}$. We can compute the argmax of $\mathbf{H}\mathbf{q}^{\mathrm{start}}$ and $\mathbf{H}\mathbf{q}^{\mathrm{end}}$ efficiently by performing MIPS over $\mathbf{H}$ with $\mathbf{q}^{\mathrm{start}}$ and $\mathbf{q}^{\mathrm{end}}$. In practice, we search for the top $k$ start and top $k$ end positions separately and perform a constrained search over their end and start positions respectively, such that $1 \le i \le j < i + L \le |\mathcal{W}(\mathcal{D})|$.

其中 $s(i,j)$ 表示在索引 $\mathbf{H}$ 中起始和结束位置分别为 $i$ 和 $j$ 的短语。我们可以通过用 $\mathbf{q}^{\mathrm{start}}$ 和 $\mathbf{q}^{\mathrm{end}}$ 对 $\mathbf{H}$ 执行 MIPS (最大内积搜索),高效计算 $\mathbf{H}\mathbf{q}^{\mathrm{start}}$ 和 $\mathbf{H}\mathbf{q}^{\mathrm{end}}$ 的 argmax。实际应用中,我们会分别搜索前 $k$ 个起始位置和前 $k$ 个结束位置,并分别对它们的结束和起始位置进行约束搜索,使得 $1 \le i \le j < i + L \le |\mathcal{W}(\mathcal{D})|$。
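
下面是用 faiss 构建内积索引并做带长度约束搜索的极简示意(词向量为随机数,数值均为假设;真实系统还包含向量压缩与过滤等细节,见原文附录):

```python
import numpy as np
import faiss

d, n_tokens, L, k = 64, 200, 20, 10
H = np.random.randn(n_tokens, d).astype("float32")   # 词向量矩阵 H
index = faiss.IndexFlatIP(d)                          # 精确内积 (MIPS) 索引
index.add(H)

q_start = np.random.randn(1, d).astype("float32")
q_end = np.random.randn(1, d).astype("float32")
s_scores, s_idx = index.search(q_start, k)            # top-k 起始位置及其得分
e_scores, e_idx = index.search(q_end, k)              # top-k 结束位置及其得分

best, best_score = None, -np.inf
for si, ss in zip(s_idx[0], s_scores[0]):
    for ei, es in zip(e_idx[0], e_scores[0]):
        if si <= ei < si + L and ss + es > best_score:  # 约束 i <= j < i + L
            best, best_score = (int(si), int(ei)), float(ss + es)
print(best, best_score)
```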

6 Query-side Fine-tuning

6 查询端微调

So far, we have created a phrase dump $\mathbf{H}$ that supports efficient MIPS search. In this section, we propose a novel method called query-side fine-tuning: we only update the question encoder $E_q$ to correctly retrieve a desired answer $a^*$ for a question $q$ given $\mathbf{H}$. Formally speaking, we optimize the marginal log-likelihood of the gold answer $a^*$ for a question $q$, which resembles the weakly-supervised QA setting in previous work (Lee et al., 2019; Min et al., 2019). For every question $q$, we retrieve top $k$ phrases and minimize the objective:

目前,我们已经构建了支持高效 MIPS 搜索的短语库 $\mathbf{H}$。本节提出一种称为查询端微调的新方法:在给定 $\mathbf{H}$ 的情况下,仅更新问题编码器 $E_q$,使其能够为问题 $q$ 正确检索出目标答案 $a^*$。形式化地说,我们优化问题 $q$ 的标准答案 $a^*$ 的边际对数似然,这与先前工作中的弱监督问答设置相似 (Lee et al., 2019; Min et al., 2019)。对每个问题 $q$,我们检索前 $k$ 个短语并最小化以下目标:

$$\mathcal{L}_{\mathrm{query}} = -\log\frac{\sum_{s \in \tilde{S}(q),\, \operatorname{TEXT}(s) = a^*}\exp\!\big(f(s \mid \mathcal{D}, q)\big)}{\sum_{s \in \tilde{S}(q)}\exp\!\big(f(s \mid \mathcal{D}, q)\big)} \tag{11}$$

where $f(s \mid \mathcal{D}, q)$ is the score of the phrase $s$ (Eq. (2)) and $\tilde{S}(q)$ denotes the top $k$ phrases for $q$ (Eq. (10)). In practice, we use $k = 100$ for all the experiments.

其中 $f(s \mid \mathcal{D}, q)$ 表示短语 $s$ 的得分 (式 (2)),$\tilde{S}(q)$ 表示问题 $q$ 检索到的前 $k$ 个短语 (式 (10))。实际实验中,我们统一采用 $k = 100$。

There are several advantages for doing this: (1) we find that query-side fine-tuning can reduce the discrepancy between training and inference, and hence improve the final performance substantially (§8). Even with effective negative sampling, the model only sees a small portion of passages compared to the full scale of $\mathcal{D}$, and this training objective can effectively fill in the gap. (2) This training strategy allows for transfer learning to unseen domains, without rebuilding the entire phrase index. More specifically, the model is able to quickly adapt to new QA tasks (e.g., WebQuestions) when the phrase dump is built using SQuAD or Natural Questions. We also find that this can transfer to non-QA tasks when the query is written in a different format. In §7.3, we show the possibility of directly using DensePhrases for slot filling tasks by using a query such as (Michael Jackson, is a singer of, x). In this regard, we can view our model as a dense knowledge base that can be accessed by many different types of queries and is able to return phrase-level knowledge efficiently.

这样做有几个优势:(1) 我们发现查询端微调可以减少训练和推理之间的差异,从而显著提高最终性能(§8)。即使采用有效的负采样,与完整的D规模相比,模型也只能看到一小部分段落,而这种训练目标可以有效填补这一空白。(2) 这种训练策略允许在不重建整个短语索引的情况下,将学习迁移到未见过的领域。更具体地说,当使用SQuAD或Natural Questions构建短语转储时,模型能够快速适应新的QA任务(例如Web Questions)。我们还发现,当查询以不同格式编写时,这种方法也可以迁移到非QA任务。在§7.3中,我们展示了通过使用诸如(Michael Jackson, is a singer of, x.)这样的查询,直接使用Dense Phrases进行槽填充任务的可能性。在这方面,我们可以将我们的模型视为一个密集知识库,可以通过多种不同类型的查询进行访问,并且能够高效地返回短语级知识。
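
查询端微调的边际对数似然(式 (11))可以按如下方式示意实现(得分与"文本等于标准答案"的标注均为随机构造,仅演示损失的计算形式):

```python
import torch

k = 100
scores = torch.randn(k, requires_grad=True)        # top-k 短语的得分 f(s|D,q)
is_gold = torch.zeros(k, dtype=torch.bool)
is_gold[[2, 17]] = True                            # 假设第 2、17 个短语的文本等于 a*

log_probs = torch.log_softmax(scores, dim=-1)
# -log Σ_{TEXT(s)=a*} P(s|q):对所有文本正确的短语做边际化
loss_query = -torch.logsumexp(log_probs[is_gold], dim=0)
loss_query.backward()
print(loss_query.item())
```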

7 Experiments

7 实验

7.1Setup

7.1 设置

Datasets. We use two reading comprehension datasets: SQuAD (Rajpurkar et al., 2016) and Natural Questions (NQ) (Kwiatkowski et al., 2019) to learn phrase representations, in which a single gold passage is provided for each question. For the open-domain QA experiments, we evaluate our approach on five popular open-domain QA datasets: Natural Questions, WebQuestions (WQ) (Berant et al., 2013), CuratedTREC (TREC) (Baudis and Sedivy, 2015), TriviaQA (TQA) (Joshi et al., 2017), and SQuAD. Note that we only use SQuAD and/or NQ to build the phrase index and perform query-side fine-tuning (§6) for other datasets.

数据集。我们使用两个阅读理解数据集:SQuAD (Rajpurkar et al., 2016) 和 Natural Questions (NQ) (Kwiatkowski et al., 2019) 来学习短语表示,其中每个问题都提供了一个标准段落。对于开放域问答实验,我们在五个流行的开放域问答数据集上评估我们的方法:Natural Questions、Web Questions (WQ) (Berant et al., 2013)、CuratedTREC (TREC) (Baudis and Sedivy, 2015)、TriviaQA (TQA) (Joshi et al., 2017) 和 SQuAD。请注意,我们仅使用 SQuAD 和/或 NQ 来构建短语索引并为其他数据集执行查询端微调 (§6)。

We also evaluate our model on two slot filling tasks, to show how to adapt our Dense Phrases for other knowledge-intensive NLP tasks. We focus on using two slot filling datasets from the KILT benchmark (Petroni et al., 2021): T-REx (Elsahar et al., 2018) and zero-shot relation extraction (Levy et al., 2017). Each query is provided in the form of “{subject entity} [SEP] {relation}" and the answer is the object entity. Appendix C provides the statistics of all the datasets.

我们还评估了模型在两个槽填充任务上的表现,以展示如何将Dense Phrases适配到其他知识密集型NLP任务中。我们重点使用了来自KILT基准测试(Petroni等人,2021)的两个槽填充数据集:T-REx(Elsahar等人,2018)和零样本关系抽取(Levy等人,2017)。每个查询以"{主体实体}[SEP]{关系}"的形式提供,答案为目标实体。附录C提供了所有数据集的统计信息。

Implementation details. We denote the training datasets used for reading comprehension (Eq. (9)) as $\mathcal{C}_{\mathrm{phrase}}$. For open-domain QA, we train two versions of phrase encoders, which are trained on $\mathcal{C}_{\mathrm{phrase}} = \{\mathrm{SQuAD}\}$ and $\{\mathrm{NQ}, \mathrm{SQuAD}\}$, respectively. We build the phrase dump $\mathbf{H}$ for the 2018-12-20 Wikipedia snapshot and perform query-side fine-tuning on each dataset using Eq. (11). For slot filling, we use the same phrase dump as open-domain QA ($\mathcal{C}_{\mathrm{phrase}} = \{\mathrm{NQ}, \mathrm{SQuAD}\}$) and perform query-side fine-tuning on randomly sampled 5K or 10K training examples to see how rapidly our model adapts to the new query types. See Appendix D for details on the hyperparameters and Appendix A for an analysis of computational cost.

实现细节。我们将用于阅读理解 (式 (9)) 的训练数据集记为 $\mathcal{C}_{\mathrm{phrase}}$。针对开放域问答,我们训练了两个版本的短语编码器,分别在 $\mathcal{C}_{\mathrm{phrase}} = \{\mathrm{SQuAD}\}$ 和 $\{\mathrm{NQ}, \mathrm{SQuAD}\}$ 上训练。我们基于 2018-12-20 的维基百科快照构建短语库 $\mathbf{H}$,并使用式 (11) 在每个数据集上进行查询端微调。对于槽填充任务,我们使用与开放域问答相同的短语库($\mathcal{C}_{\mathrm{phrase}} = \{\mathrm{NQ}, \mathrm{SQuAD}\}$),并在随机采样的 5K 或 10K 训练样本上进行查询端微调,以观察模型适应新查询类型的速度。超参数详情参见附录 D,计算成本分析参见附录 A。

Table 2: Reading comprehension results, evaluated on the development sets of SQuAD and Natural Questions. Underlined numbers are estimated from the figures from the original papers. †: BERT-large model.

表 2: 阅读理解结果,在 SQuAD 和 Natural Questions 的开发集上评估。带下划线的数字是根据原始论文中的图表估算的。†: BERT-large 模型。

| 模型 | SQuAD EM | SQuAD F1 | NQ (Long) EM | NQ (Long) F1 |
|---|---|---|---|---|
| Query-Dependent(查询相关) | | | | |
| BERT-base | 80.8 | 88.5 | 69.9 | 78.2 |
| SpanBERT-base | 85.7 | 92.2 | 73.2 | 81.0 |
| Query-Agnostic(查询无关) | | | | |
| DilBERT (Siblini et al., 2020) | 63.0 | 72.0 | - | - |
| DeFormer (Cao et al., 2020) | - | 72.1 | - | - |
| DenSPI† | 73.6 | 81.7 | 68.2 | 76.1 |
| DenSPI + Sparc | 76.4 | 84.8 | - | - |
| DensePhrases (ours) | 78.3 | 86.3 | 71.9 | 79.6 |

7.2 Experiments: Question Answering

7.2 实验:问答任务

Reading comprehension. In order to show the effectiveness of our phrase representations, we first evaluate our model in the reading comprehension setting for SQuAD and NQ and report its performance with other query-agnostic models (Eq. (9) without query-side fine-tuning). This problem was originally formulated by Seo et al. (2018) as the phrase-indexed question answering (PIQA) task.

阅读理解。为了展示我们短语表示的有效性,我们首先在SQuAD和NQ的阅读理解设置下评估模型性能,并与其他查询无关模型(公式(9) 未经查询端微调)进行对比。该问题最初由Seo等人 (2018) 提出,定义为短语索引问答(PIQA)任务。

Compared to previous query-agnostic models, our model achieves the best performance of 78.3 EM on SQuAD, improving the previous phrase retrieval model (DenSPI) by 4.7% (Table 2). Although it is still behind cross-attention models, the gap has been greatly reduced and serves as a strong starting point for the open-domain QA model.

与之前的查询无关模型相比,我们的模型将先前的短语检索模型 (DenSPI) 提升 4.7%,在 SQuAD 上实现了 78.3 EM 的最佳性能 (表 2)。尽管仍落后于交叉注意力模型,但差距已大幅缩小,为开放域问答模型奠定了坚实基础。

Open-domain QA. Experimental results on open-domain QA are summarized in Table 3. Without any sparse representations, DensePhrases outperforms previous phrase retrieval models by a large margin and achieves an absolute improvement of at least 15% on all datasets except SQuAD. Training the model of Lee et al. (2020) on $\mathcal{C}_{\mathrm{phrase}} = \{\mathrm{NQ}, \mathrm{SQuAD}\}$ only increases the result from 14.5 to 16.5 on NQ, demonstrating that it does not suffice to simply add more datasets for training phrase representations. Our performance is also competitive with recent retriever-reader models (Karpukhin et al., 2020), while running much faster during inference (Table 1).

开放域问答。开放域问答的实验结果总结在表 3 中。在不使用任何稀疏表示的情况下,DensePhrases 大幅优于之前的短语检索模型,在除 SQuAD 外的所有数据集上实现了至少 15% 的绝对提升。在 $\mathcal{C}_{\mathrm{phrase}} = \{\mathrm{NQ}, \mathrm{SQuAD}\}$ 上训练 Lee 等人 (2020) 的模型仅将 NQ 的结果从 14.5 提高到 16.5,这表明仅增加更多训练数据集不足以提升短语表示效果。我们的性能也与最近的检索器-阅读器模型 (Karpukhin 等人, 2020) 相当,同时在推理时运行速度更快 (表 1)。

Table 3: Open-domain QA results. We report exact match (EM) on the test sets. We also show the additional training or pre-training datasets for learning the retriever models ($\mathcal{C}_{\mathrm{retr}}$) and creating the phrase dump ($\mathcal{C}_{\mathrm{phrase}}$). ∗: no supervision using target training data (zero-shot). †: unlabeled data used for extra pre-training.

表 3: 开放域问答结果。我们报告测试集上的精确匹配 (EM) 分数,同时展示用于训练检索器模型 ($\mathcal{C}_{\mathrm{retr}}$) 和构建短语库 ($\mathcal{C}_{\mathrm{phrase}}$) 的额外训练或预训练数据集。∗: 未使用目标训练数据进行监督 (零样本)。†: 用于额外预训练的无标注数据。

| 模型 | $\mathcal{C}_{\mathrm{retr}}$ / $\mathcal{C}_{\mathrm{phrase}}$ | NQ | WQ | TREC | TQA | SQuAD |
|---|---|---|---|---|---|---|
| 检索器-阅读器($\mathcal{C}_{\mathrm{retr}}$: (预)训练数据) | | | | | | |
| DrQA (Chen et al., 2017) | - | - | 20.7 | 25.4 | - | 29.8 |
| BERT + BM25 (Lee et al., 2019) | - | 26.5 | 17.7 | 21.3 | 47.1 | 33.2 |
| ORQA (Lee et al., 2019) | {Wiki.}† | 33.3 | 36.4 | 30.1 | 45.0 | 20.2 |
| REALM_News (Guu et al., 2020) | {Wiki., CC-News} | 40.4 | 40.7 | 42.9 | - | - |
| DPR-multi (Karpukhin et al., 2020) | {NQ, WQ, TREC, TQA} | 41.5 | 42.4 | 49.4 | 56.8 | 24.1 |
| 短语检索($\mathcal{C}_{\mathrm{phrase}}$: 训练数据) | | | | | | |
| DenSPI (Seo et al., 2019) | {SQuAD} | 8.1* | 11.1* | 31.6* | 30.7* | 36.2 |
| DenSPI + Sparc (Lee et al., 2020) | {SQuAD} | 14.5* | 17.3* | 35.7* | 34.4* | 40.7 |
| DenSPI + Sparc (Lee et al., 2020) | {NQ, SQuAD} | 16.5 | - | - | - | - |
| DensePhrases (本工作) | {SQuAD} | 31.2 | 36.3 | 50.3 | 53.6 | 39.4 |
| DensePhrases (本工作) | {NQ, SQuAD} | 40.9 | 37.5 | 51.0 | 50.7 | 38.0 |
| 模型 | T-REx Acc | T-REx F1 | ZsRE Acc | ZsRE F1 |
|---|---|---|---|---|
| DPR + BERT | 4.47 | - | 27.09 | - |
| DPR + BART | 11.12 | 11.41 | 18.91 | 20.32 |
| RAG | 23.12 | 23.94 | 36.83 | 39.91 |
| DensePhrases (5K) | 25.32 | 29.76 | 40.39 | 45.89 |
| DensePhrases (10K) | 27.84 | 32.34 | 41.34 | 46.79 |

Table 4: Slot filling results on the test sets of T-REx and Zero-shot RE (ZsRE) in the KILT benchmark. We report KILT-AC and KILT-F1 (denoted as Acc and F1 in the table), which consider both span-level accuracy and correct retrieval of evidence documents.

表 4: KILT基准测试中T-REx和零样本关系抽取(ZsRE)测试集的槽填充结果。我们报告了KILT-AC和KILT-F1(表中记为Acc和F1),这两个指标同时考虑了跨度级准确率和证据文档的正确检索。

7.3 Experiments: Slot Filling

7.3 实验:槽填充

Table 4 summarizes the results on the two slot filling datasets, along with the baseline scores provided by Petroni et al. (2021). The only extractive baseline is DPR+BERT, which performs poorly in zero-shot relation extraction. On the other hand, our model achieves competitive performance on both datasets.
