Thursday, August 20, 2020

What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study (Paper Explained)


#ai #research #machinelearning

On-policy reinforcement learning is a flourishing field with countless methods for practitioners to choose from. However, each of those methods comes with a plethora of hyperparameter choices. This paper builds a unified on-policy RL framework and investigates, in a large-scale study on five continuous control tasks, the effects of these choices. As a result, the authors come up with a set of recommendations for future research and applications. Two short code sketches of components the study examines (advantage estimation and observation normalization) follow at the end of this post.

OUTLINE:
0:00 - Intro & Overview
3:55 - Parameterized Agents
7:00 - Unified Online RL and Parameter Choices
14:10 - Policy Loss
16:40 - Network Architecture
20:25 - Initial Policy
24:20 - Normalization & Clipping
26:30 - Advantage Estimation
28:55 - Training Setup
33:05 - Timestep Handling
34:10 - Optimizers
35:05 - Regularization
36:10 - Conclusion & Comments

Paper: https://ift.tt/2BWAs3B

Abstract:
In recent years, on-policy reinforcement learning (RL) has been successfully applied to many different continuous control tasks. While RL algorithms are often conceptually simple, their state-of-the-art implementations take numerous low- and high-level design decisions that strongly affect the performance of the resulting agents. Those choices are usually not extensively discussed in the literature, leading to a discrepancy between published descriptions of algorithms and their implementations. This makes it hard to attribute progress in RL and slows down overall progress (Engstrom'20). As a step towards filling that gap, we implement over 50 such "choices" in a unified on-policy RL framework, allowing us to investigate their impact in a large-scale empirical study. We train over 250'000 agents in five continuous control environments of different complexity and provide insights and practical recommendations for on-policy training of RL agents.

Authors: Marcin Andrychowicz, Anton Raichuk, Piotr Stańczyk, Manu Orsini, Sertan Girgin, Raphael Marinier, Léonard Hussenot, Matthieu Geist, Olivier Pietquin, Marcin Michalski, Sylvain Gelly, Olivier Bachem

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ift.tt/3dJpBrR
BitChute: https://ift.tt/38iX6OV
Minds: https://ift.tt/37igBpB
Parler: https://ift.tt/38tQU7C
LinkedIn: https://ift.tt/2Zo6XRA

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://ift.tt/2DuKOZ3
Patreon: https://ift.tt/390ewRH
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
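For reference, here is a minimal NumPy sketch of Generalized Advantage Estimation (GAE), one of the advantage estimators the study compares (see 26:30 in the outline). The function name, array shapes, and the default lam=0.9 are my own illustration, not code from the paper's framework:

import numpy as np

def gae_advantages(rewards, values, dones, gamma=0.99, lam=0.9):
    """Generalized Advantage Estimation (Schulman et al., 2016).

    rewards: (T,)   rewards r_t from one rollout
    values:  (T+1,) value estimates V(s_0)..V(s_T), last entry is the bootstrap value
    dones:   (T,)   1.0 where the episode terminated at step t, else 0.0
    Returns advantages (T,) and value-function targets (T,).
    """
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        not_done = 1.0 - dones[t]
        # TD residual: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * values[t + 1] * not_done - values[t]
        # Exponentially weighted sum of future TD residuals
        gae = delta + gamma * lam * not_done * gae
        advantages[t] = gae
    returns = advantages + values[:-1]  # targets for the value function
    return advantages, returns

After collecting a rollout of T steps, one would call, e.g., adv, ret = gae_advantages(r, v, d) and plug adv into the policy loss and ret into the value loss.

Similarly, the "Normalization & Clipping" chapter (24:20) covers, among other things, normalizing observations with running statistics and clipping the result. Below is a rough sketch of such a normalizer; the class name, the Welford-style update, and the clip range of 10 are assumptions for illustration, not the paper's actual implementation:

class ObsNormalizer:
    """Running per-dimension observation normalization with clipping.

    Maintains a running mean/variance and returns
    (obs - mean) / std, clipped to [-clip, clip].
    """
    def __init__(self, dim, clip=10.0, eps=1e-8):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.count = eps  # tiny prior count avoids division by zero
        self.clip = clip
        self.eps = eps

    def update(self, obs):
        # Welford-style incremental update for a single observation (dim,)
        self.count += 1.0
        delta = obs - self.mean
        self.mean += delta / self.count
        # delta * (obs - new_mean) is the usual Welford cross term
        self.var += (delta * (obs - self.mean) - self.var) / self.count

    def normalize(self, obs):
        # Statistics are noisy for the first few steps; clipping bounds the output
        std = np.sqrt(self.var) + self.eps
        return np.clip((obs - self.mean) / std, -self.clip, self.clip)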
