Thursday, April 23, 2020

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer


This video explores T5's large-scale study of transfer learning for NLP. The paper systematically takes apart the pre-training-then-fine-tuning pipeline, comparing pre-training objectives (auto-regressive language modeling vs. BERT-style masked language modeling vs. XLNet-style shuffling) and examining the impact of pre-training dataset composition and size, as well as how best to use additional computation. A toy sketch contrasting these objectives follows the links below. Thanks for watching, and please check out Machine Learning Street Talk, where Tim Scarfe, Yannic Kilcher, and I discuss this paper!

Machine Learning Street Talk: https://www.youtube.com/channel/UCMLtBahI5DMrt0NPvDSoIRQ

Paper Links:
T5: https://ift.tt/2pcuaXx
Google AI Blog Post on T5: https://ift.tt/2SV4VF9
Train Large, Then Compress: https://ift.tt/3awfC74
Scaling Laws for Neural Language Models: https://ift.tt/2yzOOVY
The Illustrated Transformer: https://ift.tt/2NLJXmf
ELECTRA: https://ift.tt/2RZsM5S
Transformer-XL: https://ift.tt/2LIaXXb
Reformer: The Efficient Transformer: https://ift.tt/378kuhh
The Evolved Transformer: https://ift.tt/2IAdYFw
DistilBERT: https://ift.tt/2Y2cZa2
How to generate text (HIGHLY RECOMMEND): https://ift.tt/3d9QC7P
Tokenizers: https://ift.tt/2vpu7Kx

Thanks for watching! Please subscribe!
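Since the video contrasts these pre-training objectives, here is a minimal toy sketch (my own illustration, not code from the paper or its repository) of how a single sentence becomes an (input, target) pair under each objective when cast into T5's text-to-text format. The masking rate, sentinel names, span positions, and helper functions are all illustrative assumptions.

import random

random.seed(0)
tokens = "Thank you for inviting me to your party last week".split()

def prefix_lm(tokens, prefix_len=4):
    # Auto-regressive (prefix) LM: condition on a prefix, predict the rest left-to-right.
    return " ".join(tokens[:prefix_len]), " ".join(tokens[prefix_len:])

def bert_style_mlm(tokens, mask_rate=0.15):
    # BERT-style masked LM: corrupt random tokens with a mask symbol;
    # the target is the original, uncorrupted sequence.
    corrupted = [t if random.random() > mask_rate else "<M>" for t in tokens]
    return " ".join(corrupted), " ".join(tokens)

def t5_span_corruption(tokens, spans=((2, 4), (7, 8))):
    # T5-style span corruption: drop whole spans, replace each with a sentinel,
    # and predict only the dropped spans delimited by those sentinels.
    inp, tgt, cursor = [], [], 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<X{i}>"
        inp += tokens[cursor:start] + [sentinel]
        tgt += [sentinel] + tokens[start:end]
        cursor = end
    inp += tokens[cursor:]
    return " ".join(inp), " ".join(tgt)

def deshuffling(tokens):
    # Shuffling/deshuffling objective: input is a shuffled sentence,
    # target is the sentence in its original order.
    shuffled = tokens[:]
    random.shuffle(shuffled)
    return " ".join(shuffled), " ".join(tokens)

for name, fn in [("prefix LM", prefix_lm),
                 ("BERT-style MLM", bert_style_mlm),
                 ("span corruption", t5_span_corruption),
                 ("deshuffling", deshuffling)]:
    x, y = fn(tokens)
    print(f"{name:16s} | input: {x!r} -> target: {y!r}")

Running this prints one (input, target) pair per objective, which makes it easy to see why span corruption gives short targets (only the dropped spans) while the BERT-style target repeats the full sentence.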
