A resource of free, step-by-step video how-to guides to get you started with machine learning.
Saturday, July 4, 2020
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (Paper Explained)
#ai #attention #transformer #deeplearning

Transformers are famous for two things: their superior performance and their insane compute and memory requirements. This paper reformulates the attention mechanism in terms of kernel functions and obtains a linear formulation, which reduces these requirements. Surprisingly, this formulation also surfaces an interesting connection between autoregressive transformers and RNNs.

OUTLINE:
0:00 - Intro & Overview
1:35 - Softmax Attention & Transformers
8:40 - Quadratic Complexity of Softmax Attention
9:40 - Generalized Attention Mechanism
13:45 - Kernels
20:40 - Linear Attention
25:20 - Experiments
28:30 - Intuition on Linear Attention
33:55 - Connecting Autoregressive Transformers and RNNs
41:30 - Caveats with the RNN connection
46:00 - More Results & Conclusion

Paper: https://ift.tt/3g9qKtf
Website: https://ift.tt/2YO6lV3
Code: https://ift.tt/38rVYsZ

My Video on Attention: https://youtu.be/iDulhoQ2pro
My Video on BERT: https://youtu.be/-9evrZnBorM

Abstract:
Transformers achieve remarkable performance in several tasks but, due to their quadratic complexity with respect to the input's length, they are prohibitively slow for very long sequences. To address this limitation, we express the self-attention as a linear dot-product of kernel feature maps and make use of the associativity property of matrix products to reduce the complexity from O(N^2) to O(N), where N is the sequence length. We show that this formulation permits an iterative implementation that dramatically accelerates autoregressive transformers and reveals their relationship to recurrent neural networks. Our linear transformers achieve similar performance to vanilla transformers and they are up to 4000x faster on autoregressive prediction of very long sequences.

Authors: Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ift.tt/3dJpBrR
BitChute: https://ift.tt/38iX6OV
Minds: https://ift.tt/37igBpB
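To make the kernel reformulation from the abstract concrete, below is a minimal NumPy sketch (not the authors' released implementation, which is linked under "Code" above). The idea: if the similarity is written as sim(q, k) = phi(q) . phi(k), then the attention output sum_j phi(q_i)^T phi(k_j) v_j can be regrouped by associativity as phi(q_i)^T (sum_j phi(k_j) v_j^T), so the sum over keys is computed once and reused for every query, giving O(N) cost; in the causal case that sum becomes a running state updated once per step, which is exactly an RNN. The feature map elu(x) + 1 follows the paper; the function names, epsilon, and the toy example at the bottom are illustrative assumptions of this sketch.

import numpy as np

def elu_feature_map(x):
    # phi(x) = elu(x) + 1, the positive feature map proposed in the paper.
    return np.where(x > 0, x + 1.0, np.exp(x))

def softmax_attention(Q, K, V):
    # Standard attention: the full N x N similarity matrix is materialized,
    # hence O(N^2) time and memory in the sequence length N.
    d = Q.shape[-1]
    A = np.exp(Q @ K.T / np.sqrt(d))
    return (A @ V) / A.sum(axis=1, keepdims=True)

def linear_attention(Q, K, V):
    # Kernelized attention: sim(q, k) = phi(q) . phi(k). Associativity lets us
    # compute phi(K)^T V once and reuse it for every query -> O(N) overall.
    Qf, Kf = elu_feature_map(Q), elu_feature_map(K)
    KV = Kf.T @ V                              # (d_feat, d_out), summed over positions
    Z = 1.0 / (Qf @ Kf.sum(axis=0) + 1e-6)     # per-query normalizer
    return (Qf @ KV) * Z[:, None]

def causal_linear_attention(Q, K, V):
    # Autoregressive case: running sums S and z act as an RNN state that is
    # updated once per time step, so generation costs O(1) memory per step.
    Qf, Kf = elu_feature_map(Q), elu_feature_map(K)
    S = np.zeros((Kf.shape[1], V.shape[1]))
    z = np.zeros(Kf.shape[1])
    out = np.empty_like(V)
    for i in range(Q.shape[0]):
        S += np.outer(Kf[i], V[i])             # accumulate key-value memory
        z += Kf[i]                             # accumulate normalizer
        out[i] = (Qf[i] @ S) / (Qf[i] @ z + 1e-6)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, d = 16, 8
    Q, K, V = rng.normal(size=(3, N, d))
    # Linear attention approximates, but does not equal, softmax attention.
    print(np.abs(softmax_attention(Q, K, V) - linear_attention(Q, K, V)).mean())
    print(causal_linear_attention(Q, K, V).shape)

The recurrent form is what the title alludes to: an autoregressive linear-attention transformer can be run as an RNN whose state (S, z) has a fixed size independent of the sequence length, which is where the large speedups on long-sequence generation come from.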