Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline. https://arxiv.org/abs/2005.11401
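The two formulations compared in the abstract correspond to the following marginalizations over retrieved passages (my rough summary of the paper's equations, not a quote; here $p_\eta(z \mid x)$ is the retriever's distribution over passages $z$ given input $x$, $p_\theta$ is the seq2seq generator, and the sum runs over the top-$k$ retrieved passages):

RAG-Sequence (one set of passages conditions the whole output): $p(y \mid x) \approx \sum_{z \in \text{top-}k(p_\eta(\cdot \mid x))} p_\eta(z \mid x) \prod_{i=1}^{N} p_\theta(y_i \mid x, z, y_{1:i-1})$

RAG-Token (passages are re-marginalized at every token): $p(y \mid x) \approx \prod_{i=1}^{N} \sum_{z \in \text{top-}k(p_\eta(\cdot \mid x))} p_\eta(z \mid x)\, p_\theta(y_i \mid x, z, y_{1:i-1})$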
Related: Combine Searches
This page is auto-translated from [/nishio/Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://scrapbox.io/nishio/Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks) using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.