Wednesday, April 29, 2020

First Return Then Explore


This video explores "First Return Then Explore", the latest advancement of the Go-Explore algorithm. The paper introduces policy-based Go-Explore, in which the agent is trained to return to the frontier of explored states rather than simply resetting the simulator state. This makes training robust to stochasticity, removes the need for a separate robustify phase, and yields a better policy for exploring from the most promising states (a rough sketch of this loop appears after the links below).

Paper Links:
First return then explore: https://ift.tt/3aKYQAZ
Go-Explore: https://ift.tt/2W9ieCm
The Ingredients of Real-World RL: https://ift.tt/2VHwuTV
Domain Randomization for Sim2Real Transfer: https://ift.tt/2Gkp3tA
Beyond Domain Randomization: https://ift.tt/2YiVbri
Jeff Clune at Rework on Go-Explore: https://www.youtube.com/watch?v=SWcuTgk2di8&t=862s
World models: https://ift.tt/2IYv5zG
Solving Rubik's Cube with a Robot Hand: https://ift.tt/2Mk3yMZ
Exploration based language learning for text-based games: https://ift.tt/35foFYw
Abandoning Objectives: https://ift.tt/2yVYXMy
Specification Gaming: https://ift.tt/2RWPUle
Upside-Down RL: https://ift.tt/2YjsYAW
Chip Design with Deep Reinforcement Learning: https://ift.tt/3asCKTr

Thanks for watching! Please Subscribe!
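To make the "first return, then explore" idea concrete, here is a minimal Python sketch of one iteration of the loop described above. This is not the authors' code: `env`, `policy`, `cell_fn`, the least-visited cell-selection heuristic, and the random explore phase are all illustrative assumptions (the paper describes its own cell-selection and exploration strategies).

```python
from collections import defaultdict

# A minimal sketch of one policy-based Go-Explore iteration, assuming a
# gym-style `env`, a goal-conditioned `policy` with an `act(obs, goal)`
# method, and a `cell_fn` that discretizes observations into cell keys.
# None of these names come from the paper; they are placeholders.

def go_explore_iteration(env, policy, cell_fn, archive,
                         return_budget=200, explore_steps=50):
    """First return (via the policy), then explore (random actions here)."""
    # Select a promising cell to return to; here, simply the least-visited.
    goal = min(archive, key=lambda c: archive[c])

    # RETURN phase: the trained policy navigates back to the goal cell
    # instead of restoring a saved simulator state, which is why the
    # loop still works in stochastic environments.
    obs = env.reset()
    for _ in range(return_budget):
        if cell_fn(obs) == goal:
            break
        obs, _, done, _ = env.step(policy.act(obs, goal))
        if done:
            return  # failed to return this iteration; try again later

    # EXPLORE phase: take exploratory actions from the frontier and
    # record every cell visited, growing the archive outward.
    for _ in range(explore_steps):
        obs, _, done, _ = env.step(env.action_space.sample())
        archive[cell_fn(obs)] += 1
        if done:
            break

# Example setup (assumed): seed the archive with the starting cell.
# archive = defaultdict(int)
# archive[cell_fn(env.reset())] += 1
# for _ in range(1000):
#     go_explore_iteration(env, policy, cell_fn, archive)
```

Because the return phase runs through a learned goal-conditioned policy rather than a restored simulator snapshot, the policy that emerges already tolerates environment stochasticity, which is what removes the original Go-Explore's separate robustification phase.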
