A resource of free, step-by-step video how-to guides to get you started with machine learning.
Friday, March 6, 2020
Train Large, Then Compress
This video explains a new study on how best to use a limited compute budget when training models for Natural Language Processing tasks. The authors show that large models reach a lower error faster than smaller models, and that stopping training early with a large model achieves better performance than training a smaller model for longer. These larger models come with an inference bottleneck: predictions take longer, and storing the weights costs more. The authors alleviate this bottleneck by showing that large models are robust to compression techniques such as quantization and pruning! Thanks for watching, please subscribe!

Paper Links:
Train Large, Then Compress: https://ift.tt/3awfC74
BAIR Blog Post: https://ift.tt/2ImJYNl
What is Gradient Accumulation in Deep Learning? https://ift.tt/30M4f7o
Transfer Learning in NLP: https://ift.tt/2VPiWpR
SST: https://ift.tt/2t56jGq
MNLI: https://ift.tt/2PW2HUe
The Lottery Ticket Hypothesis: https://ift.tt/2PTd4pv
GPT: https://ift.tt/2HeACni
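To make the two compression techniques concrete, here is a minimal NumPy sketch of magnitude pruning (zeroing out the smallest-magnitude weights) and symmetric int8 quantization. This is an illustration of the general ideas only, not the paper's actual implementation; the function names and the toy weight matrix are made up for this example.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (magnitude pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; everything at or below it is pruned.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_int8(weights):
    """Symmetric linear quantization: map floats to int8 plus a scale for dequantizing."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

# Toy "trained" weight matrix standing in for a large model's layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.5)      # half the weights set to zero
q, scale = quantize_int8(w)                    # 1 byte per weight instead of 4
dequant = q.astype(np.float32) * scale         # approximate reconstruction
```

Pruning stores only the surviving weights (often with a sparse format), while int8 quantization cuts storage 4x versus float32 at the cost of a small, bounded rounding error per weight.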