A resource of free, step-by-step video how-to guides to get you started with machine learning.
Monday, June 22, 2020
DeepMind x UCL | Deep Learning Lectures | 10/12 | Unsupervised Representation Learning
Unsupervised learning is one of the three major branches of machine learning (along with supervised learning and reinforcement learning). It is also arguably the least developed branch. Its goal is to find a parsimonious description of the input data by uncovering and exploiting its hidden structures. This is presumed to be more reminiscent of how the brain learns compared to supervised learning. Furthermore, it is hypothesised that the representations discovered through unsupervised learning may alleviate many known problems with deep supervised and reinforcement learning. However, lacking an explicit ground-truth goal to optimise towards, progress in unsupervised learning has been slow.

In this talk, DeepMind Research Scientist Irina Higgins and DeepMind Research Engineer Mihaela Rosca give an overview of the historical role of unsupervised representation learning and the difficulties of developing and evaluating such algorithms. They then take a multidisciplinary approach to thinking about what might make a good representation and why, before giving a broad overview of the current state-of-the-art approaches to unsupervised representation learning.

Download the slides here: https://ift.tt/3fNRyPS
Find out more about how DeepMind increases access to science here: https://ift.tt/3dnjF7D

Speaker Bios:

Irina is a research scientist at DeepMind, where she works in the Frontiers team. Her work aims to bring together insights from the fields of neuroscience and physics to advance general artificial intelligence through improved representation learning. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an undergraduate student in Experimental Psychology at Westminster University. She then completed a DPhil at the Oxford Centre for Computational Neuroscience and Artificial Intelligence, where she focused on understanding the computational principles underlying speech processing in the auditory brain.
During her DPhil, Irina also worked on developing poker AI, applied machine learning in the finance sector, and worked on speech recognition at Google Research.

Mihaela Rosca is a Research Engineer at DeepMind and a PhD student at UCL, focusing on generative models research and probabilistic modelling, from variational inference to generative adversarial networks and reinforcement learning. Prior to joining DeepMind, she worked for Google on using deep learning to solve natural language processing tasks. She has an MEng in Computing from Imperial College London.

About the lecture series:

The Deep Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence. Over the past decade, Deep Learning has evolved into the leading artificial intelligence paradigm, giving us the ability to learn complex functions from raw data with unprecedented accuracy and scale. Deep Learning has been applied to problems in object recognition, speech recognition, speech synthesis, forecasting, scientific computing, control and many more. The resulting applications touch all of our lives in areas such as healthcare and medical research, human-computer interaction, communication, transport, conservation, manufacturing and many other fields of human endeavour. In recognition of this huge impact, the 2019 Turing Award, the highest honour in computing, was awarded to pioneers of Deep Learning. In this lecture series, research scientists from the leading AI research lab DeepMind deliver 12 lectures on an exciting selection of topics in Deep Learning, ranging from the fundamentals of training neural networks, via advanced ideas around memory, attention and generative modelling, to the important topic of responsible innovation.
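The lecture's central idea, finding a parsimonious description of data by uncovering its hidden structure, can be illustrated with one of the simplest unsupervised representation learners: principal component analysis. The sketch below is not from the lecture; it is a minimal, self-contained example that builds synthetic 5-D data secretly lying near a 2-D plane, then recovers that 2-D structure without any labels.

```python
import numpy as np

# Toy data: 200 points in R^5 that actually live near a hidden 2-D plane.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))            # the hidden 2-D structure
mixing = rng.normal(size=(2, 5))              # lifts latents into 5-D
x = latent @ mixing + 0.01 * rng.normal(size=(200, 5))  # small observation noise

# PCA: centre the data and take the top singular directions.
x_centred = x - x.mean(axis=0)
_, s, vt = np.linalg.svd(x_centred, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)         # fraction of variance per component

# Nearly all variance is captured by the first 2 components, so a 2-D
# code is a parsimonious description of each 5-D point.
codes = x_centred @ vt[:2].T
print(np.round(explained, 3))
print(codes.shape)                            # (200, 2)
```

Deeper models covered in the lecture (variational autoencoders, GANs) generalise this idea: instead of a linear projection, a neural network learns the mapping from data to a compact representation.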