A resource of free, step-by-step video how-to guides to get you started with machine learning.
Saturday, July 4, 2020
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (Paper Explained)
#ai #attention #transformer #deeplearning

Transformers are famous for two things: their superior performance and their insane requirements of compute and memory. This paper reformulates the attention mechanism in terms of kernel functions and obtains a linear formulation, which reduces these requirements. Surprisingly, this formulation also surfaces an interesting connection between autoregressive transformers and RNNs.

OUTLINE:
0:00 - Intro & Overview
1:35 - Softmax Attention & Transformers
8:40 - Quadratic Complexity of Softmax Attention
9:40 - Generalized Attention Mechanism
13:45 - Kernels
20:40 - Linear Attention
25:20 - Experiments
28:30 - Intuition on Linear Attention
33:55 - Connecting Autoregressive Transformers and RNNs
41:30 - Caveats with the RNN connection
46:00 - More Results & Conclusion

Paper: https://ift.tt/3g9qKtf
Website: https://ift.tt/2YO6lV3
Code: https://ift.tt/38rVYsZ
My Video on Attention: https://youtu.be/iDulhoQ2pro
My Video on BERT: https://youtu.be/-9evrZnBorM

Abstract:
Transformers achieve remarkable performance in several tasks but, due to their quadratic complexity with respect to the input's length, they are prohibitively slow for very long sequences. To address this limitation, we express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from O(N²) to O(N), where N is the sequence length. We show that this formulation permits an iterative implementation that dramatically accelerates autoregressive transformers and reveals their relationship to recurrent neural networks. Our linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences.

Authors: Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ift.tt/3dJpBrR
BitChute: https://ift.tt/38iX6OV
Minds: https://ift.tt/37igBpB
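To make the core idea from the abstract concrete, here is a minimal NumPy sketch of the linear attention trick: replace the softmax kernel with a feature map φ (the paper proposes elu(x) + 1) and use the associativity of matrix products to compute φ(K)ᵀV once, so the cost becomes linear in the sequence length N. It also shows the causal case written as a recurrence, which is the "transformers are RNNs" view discussed in the video. The function names and shapes are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def phi(x):
    # Positive feature map; the paper proposes elu(x) + 1.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Non-causal linear attention: O(N * d * d_v) instead of O(N^2)."""
    Qp, Kp = phi(Q), phi(K)          # (N, d)
    KV = Kp.T @ V                    # (d, d_v) -- summed once over the whole sequence
    Z = Qp @ Kp.sum(axis=0)          # (N,)     -- per-query normalization
    return (Qp @ KV) / Z[:, None]    # (N, d_v)

def causal_linear_attention(Q, K, V):
    """Autoregressive case as a recurrence: the running state (S, z) is updated
    once per position, so generation costs O(1) per step, like an RNN."""
    N, d = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d, d_v))           # running sum of phi(k_j) v_j^T
    z = np.zeros(d)                  # running sum of phi(k_j)
    out = np.zeros((N, d_v))
    for i in range(N):
        qi, ki = phi(Q[i]), phi(K[i])
        S += np.outer(ki, V[i])
        z += ki
        out[i] = (qi @ S) / (qi @ z)
    return out
```

For short sequences you can check that the causal recurrence matches a masked linear attention computed with the full N×N matrix; the point of the recurrent form is that it never materializes that matrix, which is what makes very long autoregressive generation fast.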