Generative Multi-hop Retrieval

Abstract

Multi-hop retrieval is the task of retrieving a series of documents that together provide sufficient evidence to answer a natural language query. A common practice for text retrieval is to use an encoder to map the documents and the query to a common vector space and perform a nearest neighbor search (NNS); multi-hop retrieval often adopts the same paradigm, usually with the modification of iteratively reformulating the query vector so that it retrieves different documents at each hop. However, the inherent limitations of such a bi-encoder approach worsen in the multi-hop setting. As the number of hops increases, the reformulated query increasingly depends on the documents retrieved in previous hops, which further tightens the embedding bottleneck of the fixed-size query vector and makes the pipeline more prone to error propagation. In this paper, we focus on alleviating these limitations of the bi-encoder approach in multi-hop settings by formulating the problem in a fully generative way. We propose an encoder-decoder model that performs multi-hop retrieval by simply generating the entire text sequences of the retrieval targets, which means the query and the documents interact in the language model’s parametric space rather than in L2 or inner-product space as in the bi-encoder approach. Our approach, Generative Multi-hop Retrieval (GMR), consistently achieves performance comparable to or higher than bi-encoder models on five datasets while requiring a smaller GPU memory and storage footprint.
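
The sketch below (not the authors' released code) illustrates the decoding mechanics the abstract describes: an encoder-decoder model generates the full text of each retrieval target, with the generated document appended to the query before the next hop. To guarantee the generated string is an actual corpus document, it uses trie-constrained beam search, a common technique in generative retrieval; whether this matches the paper's exact decoding scheme is an assumption here. The model name `t5-small`, the two-document toy corpus, and the helper names are illustrative; a model fine-tuned for the task would be needed for meaningful outputs.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Placeholder checkpoint; GMR would use a model fine-tuned to generate
# the text of gold evidence documents given the (reformulated) query.
tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

corpus = [
    "Alan Turing was born in London.",
    "London is the capital of the United Kingdom.",
]

# Build a prefix trie over the tokenized documents so that, at every decoding
# step, the model may only emit tokens that continue some corpus document.
trie = {}
for doc in corpus:
    node = trie
    for tid in tok(doc).input_ids:  # includes the trailing </s> token
        node = node.setdefault(tid, {})

def allowed_tokens(batch_id, prefix_ids):
    """Return the token ids allowed after the current decoder prefix."""
    node = trie
    for tid in prefix_ids.tolist()[1:]:  # skip the decoder start token
        node = node.get(tid, {})
    return list(node) or [tok.eos_token_id]

def retrieve_one_hop(query: str) -> str:
    """Generate the full text of one retrieval target via constrained search."""
    inputs = tok(query, return_tensors="pt")
    out = model.generate(
        **inputs,
        num_beams=4,
        max_length=32,
        prefix_allowed_tokens_fn=allowed_tokens,
    )
    return tok.decode(out[0], skip_special_tokens=True)

query = "Where was Alan Turing born?"
for hop in range(2):
    doc = retrieve_one_hop(query)
    print(f"hop {hop + 1}: {doc}")
    query = query + " " + doc  # condition the next hop on retrieved evidence
```

Note the contrast with the bi-encoder loop: there, each reformulated query must be compressed into a single vector before NNS, whereas here the query and candidate documents interact token by token through the model's parameters.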

Publication
The 2022 Conference on Empirical Methods in Natural Language Processing