Wednesday, October 7, 2020

Retrieval-Augmented Generation (RAG)


This video explains the Retrieval-Augmented Generation (RAG) model! The approach combines a Dense Passage Retrieval (DPR) retriever with a Seq2Seq BART generator, and it is evaluated on knowledge-intensive tasks such as open-domain QA, Jeopardy question generation, and FEVER fact verification. This looks like a really interesting paradigm for building language models that produce factually accurate generations! Thanks for watching! Please subscribe!

Paper Links:
Original Paper: https://ift.tt/33bHTyO
FB Blog Post (Animation used in Intro): https://ift.tt/3kVI4og
HuggingFace RAG description: https://ift.tt/3iIO3eh
Billion-scale similarity search with GPUs: https://ift.tt/2m8YPRc
Language Models as Knowledge Bases?: https://ift.tt/2MTGupS
REALM: Retrieval-Augmented Language Models: https://ift.tt/2PeBxr4
Dense Passage Retrieval: https://ift.tt/3npz7FC
FEVER: https://ift.tt/2F9QadF
Natural Questions: https://ift.tt/3izTD2B
TriviaQA: https://ift.tt/2SAI2WE
MS MARCO: https://ift.tt/2GrcNuR

Time Stamps:
0:00 Introduction
2:05 Limitations of Language Models
4:10 Algorithm Walkthrough
5:48 Dense Passage Retrieval
7:44 RAG-Token vs. RAG-Sequence
10:47 Off-the-Shelf Models
11:54 Experiment Datasets
15:03 Results vs. T5
16:16 BART vs. RAG - Jeopardy Questions
17:20 Impact of Retrieved Documents
18:53 Ablation Study
20:25 Retrieval Collapse
21:10 Knowledge Graphs as Non-Parametric Memory
21:45 Can we learn better representations for the Document Index?
22:12 How will Efficient Transformers impact this?
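If you want to try RAG yourself, the HuggingFace implementation linked above ships pretrained checkpoints for both variants. Below is a minimal sketch of open-domain QA generation with the "facebook/rag-sequence-nq" checkpoint; the dummy-index flag and the example question are illustrative assumptions (a real setup would load the full Wikipedia FAISS index), not something shown in the video.

# Minimal sketch, assuming a transformers version with RAG support plus the
# datasets and faiss dependencies installed. use_dummy_dataset=True swaps in
# a tiny toy index so the snippet runs without downloading the full
# Wikipedia document index (and so the answers will not be meaningful).
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

# Encode a question, retrieve supporting passages via DPR, and let the BART
# generator produce an answer conditioned on the retrieved documents.
inputs = tokenizer("who wrote the origin of species", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))

Swapping RagSequenceForGeneration for RagTokenForGeneration (with the "facebook/rag-token-nq" checkpoint) gives the RAG-Token variant discussed at 7:44, which marginalizes over the retrieved documents at every generated token rather than once per output sequence.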
