Friday, March 6, 2020

Train Large, Then Compress


This video explains a new study on the best way to spend a limited compute budget when training models for Natural Language Processing tasks. The authors show that large models reach a lower error faster than smaller models, and that stopping training early with a large model achieves better performance than training a smaller model for longer. These larger models come with an inference bottleneck: predictions take longer and the weights cost more to store. The authors alleviate this bottleneck by showing that larger models are robust to compression techniques like quantization and pruning (a rough sketch of these compression steps appears below)! Thanks for watching, please subscribe!

Paper Links:
Train Large, Then Compress: https://ift.tt/3awfC74
BAIR Blog Post: https://ift.tt/2ImJYNl
What is Gradient Accumulation in Deep Learning? https://ift.tt/30M4f7o
Transfer Learning in NLP: https://ift.tt/2VPiWpR
SST: https://ift.tt/2t56jGq
MNLI: https://ift.tt/2PW2HUe
The Lottery Ticket Hypothesis: https://ift.tt/2PTd4pv
GPT: https://ift.tt/2HeACni
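To make the "train large, then compress" idea concrete, here is a minimal sketch (not the authors' code) of how pruning and quantization can be applied to an already-trained model using PyTorch's built-in utilities. The layer sizes and pruning ratio are illustrative assumptions; in the paper the compressed models are large Transformers stopped early during training.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for a large trained model; in practice this would be a Transformer
# (e.g. a RoBERTa-style encoder) loaded from an early-stopped checkpoint.
model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.ReLU(),
    nn.Linear(3072, 768),
)

# Pruning: zero out the 30% smallest-magnitude weights in each Linear layer
# (the 30% ratio is an illustrative choice, not taken from the paper).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Quantization: convert Linear weights to int8 for cheaper inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(quantized_model)
```

The finding in the paper is that the larger the original model, the less accuracy it loses under this kind of compression, so the extra training-time cost can be traded away at inference time.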
