Sunday, November 29, 2020

Predictive Coding Approximates Backprop along Arbitrary Computation Graphs (Paper Explained)


#ai #biology #neuroscience

Backpropagation is the workhorse of modern deep learning and a core component of most frameworks, but it has long been known not to be biologically plausible, which has driven a divide between neuroscience and machine learning. This paper shows that Predictive Coding, a much more biologically plausible algorithm, can approximate Backpropagation for any computation graph, which the authors verify experimentally by building and training CNNs and LSTMs using Predictive Coding. This suggests that the brain and deep neural networks could be much more similar than previously believed.

OUTLINE:
0:00 - Intro & Overview
3:00 - Backpropagation & Biology
7:40 - Experimental Results
8:40 - Predictive Coding
29:00 - Pseudocode
32:10 - Predictive Coding approximates Backprop
35:00 - Hebbian Updates
36:35 - Code Walkthrough
46:30 - Conclusion & Comments

Paper: https://ift.tt/3mndCnZ
Code: https://ift.tt/35bLGvP

Abstract:
Backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. However, backprop is often criticised for lacking biological plausibility. Recently, it has been shown that backprop in multilayer-perceptrons (MLPs) can be approximated using predictive coding, a biologically-plausible process theory of cortical computation which relies only on local and Hebbian updates. The power of backprop, however, lies not in its instantiation in MLPs, but rather in the concept of automatic differentiation which allows for the optimisation of any differentiable program expressed as a computation graph. Here, we demonstrate that predictive coding converges asymptotically (and in practice rapidly) to exact backprop gradients on arbitrary computation graphs using only local learning rules. We apply this result to develop a straightforward strategy to translate core machine learning architectures into their predictive coding equivalents. We construct predictive coding CNNs, RNNs, and the more complex LSTMs, which include a non-layer-like branching internal graph structure and multiplicative interactions. Our models perform equivalently to backprop on challenging machine learning benchmarks, while utilising only local and (mostly) Hebbian plasticity. Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry, and may also contribute to the development of completely distributed neuromorphic architectures.

Authors: Beren Millidge, Alexander Tschantz, Christopher L. Buckley

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ift.tt/3dJpBrR
BitChute: https://ift.tt/38iX6OV
Minds: https://ift.tt/37igBpB
Parler: https://ift.tt/38tQU7C
LinkedIn: https://ift.tt/2Zo6XRA

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://ift.tt/2DuKOZ3
Patreon: https://ift.tt/390ewRH
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
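To make the core mechanism concrete, here is a minimal numpy sketch in the spirit of the paper (my own illustration, not the authors' code): a small MLP whose hidden value nodes are relaxed with purely local prediction-error updates, after which the Hebbian weight updates (postsynaptic error times presynaptic activity) are compared against exact backprop gradients. The layer sizes, the tanh nonlinearity, the step size, the iteration count, and the small target perturbation are illustrative assumptions; when prediction errors are small, the two gradient estimates should agree almost exactly.

import numpy as np

rng = np.random.default_rng(0)
f = np.tanh
df = lambda z: 1.0 - np.tanh(z) ** 2          # derivative of tanh

sizes = [4, 8, 8, 2]                          # input, two hidden layers, output (assumed)
Ws = [0.5 * rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(3)]
x_in = rng.standard_normal((sizes[0], 1))

# Feedforward pass; pick a target near the output so prediction errors stay small.
acts = [x_in]
for W in Ws:
    acts.append(f(W @ acts[-1]))
target = acts[-1] + 0.1 * rng.standard_normal((sizes[-1], 1))

# --- Backprop gradients of L = 0.5 * ||output - target||^2 (reference) ---
delta = (acts[-1] - target) * df(Ws[-1] @ acts[-2])
bp = [None, None, None]
for l in reversed(range(3)):
    bp[l] = delta @ acts[l].T
    if l > 0:
        delta = (Ws[l].T @ delta) * df(Ws[l - 1] @ acts[l - 1])

# --- Predictive coding: clamp input and output, relax hidden value nodes ---
x = [a.copy() for a in acts]                  # value nodes, initialised by the feedforward pass
x[-1] = target.copy()                         # clamp the output node to the label
for _ in range(500):                          # inference: gradient descent on the local energy
    pre = [Ws[l] @ x[l] for l in range(3)]
    eps = [x[l + 1] - f(pre[l]) for l in range(3)]   # local prediction errors
    for l in (1, 2):                          # update hidden nodes only, using local signals
        x[l] += 0.1 * (-eps[l - 1] + Ws[l].T @ (eps[l] * df(pre[l])))

# Hebbian weight-gradient estimate: (postsynaptic error) x (presynaptic activity)
pre = [Ws[l] @ x[l] for l in range(3)]
eps = [x[l + 1] - f(pre[l]) for l in range(3)]
pc = [-(eps[l] * df(pre[l])) @ x[l].T for l in range(3)]   # sign flipped to match dL/dW

for l in range(3):
    cos = np.sum(bp[l] * pc[l]) / (np.linalg.norm(bp[l]) * np.linalg.norm(pc[l]))
    print(f"layer {l}: cosine(backprop, predictive coding) = {cos:.4f}")

Every quantity entering an update (x[l], eps[l], Ws[l]) lives at a single layer or on the connection between two adjacent layers; this locality is what makes the scheme more biologically plausible than backprop, while the printed cosine similarities show how closely the relaxed prediction errors recover the backprop gradients.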
