A resource of free, step-by-step video how-to guides to get you started with machine learning.
Tuesday, November 3, 2020
Towards RL that scales - Victor Campos - UPC TelecomBCN Barcelona 2020
Course site: https://ift.tt/34VAJ2w

Towards RL that scales: Autonomous acquisition and transfer of knowledge

ABSTRACT
Designing agents that acquire knowledge autonomously and use it to solve new tasks efficiently is an important challenge in reinforcement learning (RL). Unsupervised learning provides a useful paradigm for the autonomous acquisition of task-agnostic knowledge. In supervised settings, representations discovered through unsupervised pre-training offer important benefits when transferred to downstream tasks. In this talk, we discuss whether such techniques are well suited for RL. While reviewing recently proposed approaches for unsupervised pre-training of RL agents, we will gain insight into the key aspects that enable autonomous acquisition and efficient transfer of knowledge in our agents.

BIO
Víctor Campos holds BSc and MSc degrees in Electrical Engineering from Universitat Politècnica de Catalunya. He is currently pursuing his PhD at the intersection of Deep Learning and High Performance Computing at the Barcelona Supercomputing Center, supported by Obra Social "la Caixa" through the La Caixa-Severo Ochoa International Doctoral Fellowship program. He has done internships at DFKI (2016), Columbia University (2017), Salesforce Research (2019), and DeepMind (2020). His research interests focus on scaling up deep learning and reinforcement learning methods to leverage compute and data.
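The abstract leans on a recipe that is easiest to see in the supervised setting it references: pre-train a representation with an unsupervised objective, then transfer it to a downstream task that has little labeled data. The sketch below illustrates only that generic recipe (an autoencoder as the unsupervised objective, then a frozen encoder plus a small head for transfer); it is not the RL approach covered in the talk, and all data, shapes, and hyperparameters are illustrative assumptions.

```
# Minimal sketch of unsupervised pre-training followed by transfer.
# Data, shapes, and hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf

# Unlabeled observations (hypothetical: 10,000 flat feature vectors of size 32).
unlabeled = np.random.rand(10_000, 32).astype("float32")

# 1) Unsupervised pre-training: learn a task-agnostic encoder via reconstruction.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu", name="representation"),
])
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(unlabeled, unlabeled, epochs=5, batch_size=256, verbose=0)

# 2) Transfer: freeze the pre-trained encoder and fit a small head on few labels.
labeled_x = np.random.rand(200, 32).astype("float32")   # hypothetical small labeled set
labeled_y = np.random.randint(0, 2, size=(200, 1)).astype("float32")
encoder.trainable = False
downstream = tf.keras.Sequential([
    encoder,                                             # reused, frozen representation
    tf.keras.layers.Dense(1, activation="sigmoid"),      # task-specific head
])
downstream.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
downstream.fit(labeled_x, labeled_y, epochs=5, batch_size=32, verbose=0)
```

The question the talk raises is whether this kind of pre-train-then-transfer split, which works well with supervised downstream tasks, carries over to RL agents.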
Other posts on this site:

- Unlike traditional programs, which follow explicit rules written in Java or C++, machine learning is a system that infers the rules themselves from data. What kind of code does machine learning actually consist of? To answer that question, in part one of Machine Learning: Zero to Hero, our guide Cha... (the first sketch after this list gives a minimal example of learning a rule from data)
- Using GPUs in TensorFlow, TensorBoard in notebooks, finding new datasets, & more! (#AskTensorFlow) [Collection] In a special live ep...
- #minecraft #neuralnetwork #backpropagation I built an analog neural network in vanilla Minecraft without any mods or command blocks. The n...
- Using More Data - Deep Learning with Neural Networks and TensorFlow part 8 [Collection] Welcome to part eight of the Deep Learning with ...
- Linear Algebra Tutorial on the Determinant of a Matrix 🤖 Welcome to our Linear Algebra for AI tutorial! This tutorial is designed for both...
- STUMPY is a robust and scalable Python library for computing a matrix profile, which can create valuable insights about our time series. STU... (the second sketch after this list shows minimal usage)
- ❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "Alias-Free GAN" is available here: h...
- Why are humans so good at video games? Maybe it's because a lot of games are designed with humans in mind. What happens if we change t...
- Visual scenes are often composed of sets of independent objects. Yet, current vision models make no assumptions about the nature of the p...
- #ai #attention #transformer #deeplearning Transformers are famous for two things: their superior performance and their insane requirements...
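To accompany the first item above (machine learning inferring rules from examples rather than being hand-coded), here is a minimal Keras sketch in the spirit of that series: a single-neuron model that learns the rule y = 2x - 1 purely from example pairs. The data and training settings are illustrative assumptions.

```
# Minimal sketch: learning the rule y = 2x - 1 from data instead of hand-coding it.
# The dataset and training settings below are illustrative assumptions.
import numpy as np
import tensorflow as tf

# Example pairs that implicitly encode the rule y = 2x - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# A single dense neuron is enough to represent a linear rule.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1),
])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

# The model has inferred the rule: this prints a value close to 19 (2 * 10 - 1).
print(model.predict(np.array([[10.0]])))
```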
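And for the STUMPY item above, a minimal usage sketch: stumpy.stump computes the matrix profile of a time series for a chosen window length, and the smallest profile value points at the best-conserved motif pair. The synthetic random-walk data and window length are illustrative assumptions.

```
# Minimal sketch of computing a matrix profile with STUMPY.
# The synthetic series and window length m are illustrative assumptions.
import numpy as np
import stumpy

# A toy time series: a random walk with the same pattern injected twice.
rng = np.random.default_rng(0)
ts = rng.standard_normal(1000).cumsum()
pattern = np.sin(np.linspace(0, 2 * np.pi, 50))
ts[100:150] += 5 * pattern
ts[600:650] += 5 * pattern

m = 50                      # subsequence (window) length
mp = stumpy.stump(ts, m)    # column 0: matrix profile, column 1: nearest-neighbor index

# The smallest matrix profile value marks the best-conserved motif pair.
motif_idx = int(np.argmin(mp[:, 0].astype(float)))
neighbor_idx = int(mp[motif_idx, 1])
print(f"Motif at index {motif_idx}, nearest neighbor at index {neighbor_idx}")
```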