A resource of free, step-by-step video how-to guides to get you started with machine learning.
Saturday, February 29, 2020
This Neural Network Creates 3D Objects From Your Photos
❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/35NkCT7 📝 The paper "Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer" is available here: https://ift.tt/2yF9f0j 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://ift.tt/2icTBUb Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/karoly_zsolnai Web: https://ift.tt/1NwkG9m #DIB-R
Friday, February 28, 2020
CodeBERT
This video explains how CodeBERT bridges information between natural language documentation and corresponding code pairs. CodeBERT is pre-trained with Masked Language Modeling and Replaced Token Detection and fine-tuned on tasks like Code Search from Natural Language and Generating Documentation. I am excited about the future of these kinds of tools, although I wish they were around when I started coding! Paper Link: CodeBERT: https://ift.tt/2uXq4FL Thanks for watching! Please Subscribe!
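To make the Replaced Token Detection objective concrete, here is a toy PyTorch sketch of how the RTD labels and loss are built. Everything here is illustrative: the "generator" is just a random sampler standing in for CodeBERT's small masked language model, and all sizes are made up.

    import torch
    import torch.nn as nn

    # Toy replaced-token-detection (RTD) setup: a generator proposes tokens at
    # masked positions, and a discriminator labels each position as original (0)
    # or replaced (1). Vocabulary size and dimensions are illustrative.
    vocab, dim, seq_len = 1000, 64, 16
    tokens = torch.randint(0, vocab, (4, seq_len))        # batch of token ids
    mask = torch.rand(4, seq_len) < 0.15                  # 15% of positions masked

    # "Generator": a random sampler standing in for a small MLM (assumption).
    proposals = torch.randint(0, vocab, (4, seq_len))
    corrupted = torch.where(mask, proposals, tokens)
    rtd_labels = (corrupted != tokens).float()            # 1 where replaced

    # Discriminator: embeds tokens and scores each position with a linear head.
    embed = nn.Embedding(vocab, dim)
    head = nn.Linear(dim, 1)
    logits = head(embed(corrupted)).squeeze(-1)           # (4, seq_len)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, rtd_labels)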
Thursday, February 27, 2020
Neural Architectures for Video Encoding - Xavier Giro - UPC TelecomBCN Barcelona 2020
This lecture summarizes the main trends in deep neural networks for video encoding, including single-frame models, spatiotemporal convolutions, long-term sequence modeling with RNNs, and their combination with optical flow.
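As a minimal illustration of the spatiotemporal-convolution idea from the lecture, a single PyTorch Conv3d applied to a clip tensor (all dimensions are arbitrary examples):

    import torch
    import torch.nn as nn

    # A video clip is a 5D tensor: (batch, channels, time, height, width).
    clip = torch.randn(2, 3, 16, 112, 112)   # 16 RGB frames per clip

    # A 3D convolution mixes information across space AND time in one kernel.
    conv3d = nn.Conv3d(3, 64, kernel_size=(3, 7, 7),
                       stride=(1, 2, 2), padding=(1, 3, 3))
    features = conv3d(clip)                  # (2, 64, 16, 56, 56)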
Automatic Shortcut Removal for Self-Supervised Learning
This algorithm makes sure self-supervised learning tasks like rotation prediction or colorization result in semantic representations for downstream transfer! The Lens filter can also be easily stacked with other representation learning methods like SimCLR! Paper Link: https://ift.tt/37R4Qq9 Thanks for watching! Please Subscribe!
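For context, here is a minimal sketch of the rotation-prediction pretext task mentioned above; the lens network itself is omitted, this only shows how the self-supervised labels are constructed:

    import torch

    def rotation_pretext_batch(images):
        """Build a rotation-prediction batch: 4 rotated copies of each image,
        labeled 0-3 for 0/90/180/270 degrees. images: (N, C, H, W)."""
        rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
        labels = torch.arange(4).repeat_interleave(images.shape[0])
        return rotated, labels

    x = torch.randn(8, 3, 32, 32)
    batch, labels = rotation_pretext_batch(x)   # (32, 3, 32, 32), (32,)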
Recurrent Neural Networks - Xavier Giro - UPC Master Vision Barcelona 2020
This lecture provides an introduction to recurrent neural networks, which include a layer whose hidden state is aware of its values in a previous time-step. This video was recorded as part of the Master in Computer Vision Barcelona 2019/2020, in the Module 6 dedicated to Video Analysis. https://ift.tt/386d0v3
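To make the "hidden state aware of its previous value" point concrete, here is a bare-bones recurrent cell written by hand in PyTorch (sizes are arbitrary examples):

    import torch

    # One recurrent layer by hand: h_t = tanh(W_x x_t + W_h h_{t-1} + b)
    dim_in, dim_h, steps = 10, 20, 5
    W_x = torch.randn(dim_h, dim_in) * 0.1
    W_h = torch.randn(dim_h, dim_h) * 0.1
    b = torch.zeros(dim_h)

    x = torch.randn(steps, dim_in)     # a sequence of 5 input vectors
    h = torch.zeros(dim_h)             # initial hidden state
    for t in range(steps):
        h = torch.tanh(W_x @ x[t] + W_h @ h + b)   # h carries the past forward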
Tuesday, February 25, 2020
Sequencing - Turning sentences into data (NLP Zero to Hero, part 2)
Welcome to Zero to Hero for Natural Language Processing using TensorFlow! If you’re not an expert on AI or ML, don’t worry -- we’re taking the concepts of NLP and teaching them from first principles with our host Laurence Moroney (@lmoroney). In the last video you learned about how to tokenize words using TensorFlow’s tools. In this video you’ll take that to the next step -- creating sequences of numbers from your sentences, and using tools to process them to make them ready for teaching neural networks. Links: Codelab → https://goo.gle/tfw-nlp2 Coding TensorFlow → https://goo.gle/2Y43cN4 Subscribe to the TensorFlow channel → https://goo.gle/TensorFlow
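The workflow the video walks through looks roughly like this with the Keras preprocessing tools (the example sentences are mine):

    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    sentences = ['I love my dog', 'I love my cat',
                 'Do you think my dog is amazing?']

    tokenizer = Tokenizer(num_words=100, oov_token='<OOV>')
    tokenizer.fit_on_texts(sentences)

    # Turn each sentence into a sequence of token ids, then pad to equal
    # length so the batch can be fed to a neural network.
    sequences = tokenizer.texts_to_sequences(sentences)
    padded = pad_sequences(sequences, padding='post')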
The Story of Light
📝 The paper "Unifying points, beams, and paths in volumetric light transport simulation" is available here: https://ift.tt/2uxL31y Eric Veach's thesis with Multiple Importance Sampling is available here: https://ift.tt/2VkKaUV My Light Simulation Course at the TU Wien is available here: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi We are hiring! I recommend the topic "Lighting Simulation For Architectural Design": https://ift.tt/3ca3XMI My educational light transport program and 1D MIS implementation is available here: https://ift.tt/2l4Bfpl Wojciech Jarosz's Beams paper is available here: https://ift.tt/2TdHLZJ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://ift.tt/2icTBUb - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://ift.tt/2icTBUb Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/karoly_zsolnai Web: https://ift.tt/1NwkG9m
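For the curious, the balance heuristic at the heart of Multiple Importance Sampling fits in a few lines. Here is a toy 1D Monte Carlo sketch; the integrand and the two sampling densities are made-up examples, not taken from the papers above:

    import random, math

    def f(x):                       # toy integrand on [0, 1]
        return x * x

    # Two sampling strategies: uniform, and one that favors large x.
    def pdf_uniform(x):  return 1.0
    def pdf_linear(x):   return 2.0 * x

    def sample_uniform(): return random.random()
    def sample_linear():  return math.sqrt(random.random())  # inverse CDF of 2x

    n, total = 10000, 0.0
    for _ in range(n):
        for sample, pdf in ((sample_uniform, pdf_uniform),
                            (sample_linear, pdf_linear)):
            x = sample()
            # Balance heuristic: weight by this pdf relative to the sum of pdfs.
            w = pdf(x) / (pdf_uniform(x) + pdf_linear(x))
            total += w * f(x) / pdf(x)
    est = total / n                 # should be close to 1/3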
Monday, February 24, 2020
AI Weekly Update - February 24th, 2020 (#17)
The Annotated GPT-2: https://ift.tt/2SIdnYg DermGAN: https://ift.tt/2HCApJF Subclass Distillation: https://ift.tt/2vYqiMO Predicting how well neural networks will scale: https://ift.tt/2PmAuW7 CodeBERT: https://ift.tt/2PsXhjp Transformers as Soft Reasoners: https://ift.tt/2Td1I2x Shortcut Removal for Self-Supervised Learning: https://ift.tt/37R4Qq9 Torchmeta: https://ift.tt/2PfOSQd Deep Dive into the Reformer: https://ift.tt/2HVudMU Molecule Attention Transformer: https://ift.tt/3c5sMtm Multi-Agent Reinforcement Learning: https://ift.tt/38WAljZ Beyond BERT? https://ift.tt/2V8H0Ue The 2010s: Our Decade of Deep Learning: https://ift.tt/39YKEnO Fundamentals of NLP - Chapter 1: https://ift.tt/2HQk4RN MIT Tech Review OpenAI: https://ift.tt/2HAXmgc NVIDIA Deep 6: https://ift.tt/2vMjIZQ NVIDIA GTC Smart Robots: https://ift.tt/39LlFV4 NVIDIA Retail and AI: https://ift.tt/2v1hqGd Allen Institute for AI Newsletter: https://ift.tt/2HQk5Fl DairAI NLP Newsletter: https://ift.tt/3a8Zvwf Thanks for watching! Please Subscribe!
DeepFake Chatbots
Google just released a paper describing a chatbot titled "Meena", and they claimed that it's the most human-like chatbot ever created. That's a big claim! They demonstrated several conversations across a wide variety of topics in which Meena was able to skillfully joke, argue, and question a human in a realistic way. In this episode, we'll analyze Meena's architecture by comparing it to the previous generations of chatbots, then build our own 'DeepFake' chatbot using state of the art tools for text, audio, and video generation. I'm using the increasingly popular term DeepFake here because deep learning will increasingly be used to mimic/fake human personalities. These free, public tools are becoming incredibly powerful, and I hope this gives you a sense of what you can build today with modern language models. Enjoy! Subscribe for more educational videos! It means a lot to me. TWITTER: https://bit.ly/2OHYLbB WEBSITE: https://bit.ly/2OoVPQF INSTAGRAM: https://bit.ly/312pLUb FACEBOOK: https://bit.ly/2OqOhx1 Transformers Library: https://ift.tt/2lgsdXY Text Generator Colab: https://ift.tt/2vCEjzH Voice Cloning Library: https://ift.tt/2X3F0tF Voice Cloning Colab: https://ift.tt/37S5rrB ObamaNet: https://ift.tt/3c3olzh Presidential speeches dataset: https://ift.tt/2uqj3ga Eliza: https://ift.tt/39SYYye Megatron: https://ift.tt/2Z3ed1w Meena blog post: https://ift.tt/2RXzPuY Karpathy's blog post: https://ift.tt/1c7GM5h Are you a total beginner to machine learning? Watch this: https://www.youtube.com/watch?v=Cr6VqTRO1v0 Learn Python: https://www.youtube.com/watch?v=T5pRlIbr6gg Hit the Join button above to sign up to become a member of my channel for access to exclusive live streams! Signup for my newsletter for exciting updates in the field of AI: https://goo.gl/FZzJ5w Credits: Python tools: DFBlue, Max Woolf, Google, kfogel, Jay Alammar, CorentinJ, HuggingFace non-meme image assets are from Google Image Search And please support me on Patreon: https://ift.tt/2cMCk13
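As a taste of the text-generation side, here is a minimal example with the Hugging Face Transformers library linked above (the model choice and prompt are my own):

    from transformers import pipeline

    # GPT-2 text generation, the same family of tools used for the chatbot's
    # text component. Downloads the model weights on first run.
    generator = pipeline('text-generation', model='gpt2')
    print(generator('Hello, how are you today?',
                    max_length=40, num_return_sequences=1))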
Deep Learning for Symbolic Mathematics
This model solves integrals and ODEs by doing seq2seq! https://ift.tt/36bp5P7 https://ift.tt/2QSutl8 Abstract: Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica. Authors: Guillaume Lample, François Charton Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB
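The core trick is serializing expression trees into token sequences a seq2seq model can read. Here is a rough prefix-notation sketch with SymPy; the tokenization scheme is a simplification of the paper's:

    import sympy as sp

    def to_prefix(expr):
        """Recursively flatten a SymPy expression tree into prefix tokens."""
        if expr.is_Atom:                  # symbols and numbers are leaves
            return [str(expr)]
        tokens = [expr.func.__name__]     # operator first (prefix order)
        for arg in expr.args:
            tokens += to_prefix(arg)
        return tokens

    x = sp.Symbol('x')
    print(to_prefix(sp.sin(x) * x + 2))   # e.g. ['Add', 'Mul', 'x', 'sin', 'x', '2']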
Subclass Distillation
This video explains the new Subclass Distillation technique from Google AI! Subclass Distillation is an interesting extension to Knowledge Distillation that tasks the Teacher with inventing subclasses and producing a more information-dense distribution for the Student. This is implemented with a contrastive auxiliary loss in the Teacher's training! Paper Links: Subclass Distillation: https://ift.tt/32k7WlA Distilling the Knowledge in a Neural Network: https://ift.tt/2i90TEN Self-Training with Noisy Student: https://ift.tt/2Q8GfYV DistilBERT: https://ift.tt/2qjRBhG Thanks for watching! Please Subscribe!
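For reference, here is the vanilla knowledge-distillation loss that Subclass Distillation builds on, sketched in PyTorch (temperature and weighting are illustrative values):

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        """Soft teacher targets (KL at temperature T) plus the usual hard-label
        cross entropy. The T*T factor keeps the gradient scale comparable."""
        soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction='batchmean') * T * T
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard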
Friday, February 21, 2020
NeurIPS 2020 Changes to Paper Submission Process
My thoughts on the changes to the paper submission process for NeurIPS 2020. The main changes are:
1. ACs can desk-reject papers.
2. All authors have to be able to review if asked.
3. Resubmissions from other conferences must be marked, and a summary of changes since the last submission must be provided.
4. Broader societal/ethical impact must be discussed.
5. Upon acceptance, all papers must link to an explanatory video and the PDFs for slides and poster.
https://ift.tt/38NHhQw https://youtu.be/361h6lHZGDg Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB
Thursday, February 20, 2020
Natural Language Processing - Tokenization (Zero to Hero, part 1)
Welcome to Zero to Hero for Natural Language Processing using TensorFlow! If you’re not an expert on AI or ML, don’t worry -- we’re taking the concepts of NLP and teaching them from first principles with our host Laurence Moroney (@lmoroney). In this first lesson we’ll talk about how to represent words in a way that a computer can process them, with a view to later training a neural network to understand their meaning. Links: Hands-on Colab → https://goo.gle/2uO6Gee Coding TensorFlow → https://goo.gle/2Y43cN4 Subscribe to the TensorFlow channel → https://goo.gle/TensorFlow
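The coding pattern from the lesson, in a few lines (the example sentences are mine):

    from tensorflow.keras.preprocessing.text import Tokenizer

    sentences = ['I love my dog', 'I love my cat']
    tokenizer = Tokenizer(num_words=100)
    tokenizer.fit_on_texts(sentences)
    # Each word gets an integer id a network can work with.
    print(tokenizer.word_index)   # e.g. {'i': 1, 'love': 2, 'my': 3, 'dog': 4, 'cat': 5}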
Wednesday, February 19, 2020
Get ready for TensorFlow Dev Summit 2020!
The TensorFlow Dev Summit is happening on March 11-12! Watch the livestream on the #TFDevSummit website for sessions on TensorFlow updates for researchers, production scaling, improvements across platforms, and amazing use cases by open-source contributors and researchers. Sign up for livestream and event announcements on the TensorFlow Dev Summit website, and join the conversation with the official #TFDevSummit hashtag. See you on March 11 for the TensorFlow Dev Summit! #PoweredbyTF Links: TensorFlow Dev Summit website → https://goo.gle/39Ojyj6 Sign up here for updates → https://goo.gle/2HCjLKr Subscribe to TensorFlow → https://goo.gle/TensorFlow
Tuesday, February 18, 2020
Improved Consistency Regularization for GANs
This video explores a new technique for using the same Consistency Regularization headlining advances in Unsupervised Learning such as FixMatch and SimCLR to the GAN framework! This achieves large improvements in FID scores generating ImageNet images! Paper Links: Improved Consistency Regularization for GANs: https://ift.tt/2SyeRUX Robert Luxemburg's StyleGAN2 Interpolation Loop: https://www.youtube.com/watch?v=6E1_dgYlifc BigGAN Paper: https://ift.tt/328NqnC BigBiGAN: https://ift.tt/2LKu9D8 StyleGAN2: https://ift.tt/325Ino8 Unsupervised Data Augmentation: https://ift.tt/37B0FPb FixMatch: https://ift.tt/3bU1Bll SimCLR: https://ift.tt/31TZZTM Conditional GANs: https://ift.tt/2rPVlDw Thanks for watching! Please Subscribe!
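Schematically, the consistency term penalizes the discriminator for changing its output under augmentation. A simplified PyTorch sketch follows; the augmentation and weighting are placeholders, not the paper's exact setup:

    import torch

    def consistency_loss(discriminator, real_images, augment, weight=10.0):
        """Penalize the discriminator when an augmented image is scored
        differently from the original (simplified CR-GAN-style term)."""
        d_real = discriminator(real_images)
        d_aug = discriminator(augment(real_images))
        return weight * ((d_real - d_aug) ** 2).mean()

    # Example placeholder augmentation: horizontal flip.
    flip = lambda x: torch.flip(x, dims=[3])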
This Neural Network Turns Videos Into 60 FPS!
❤️ Check out Weights & Biases here and sign up for a free demo here: https://ift.tt/2YuG7Yf Their blog post on hyperparameter optimization is available here: https://ift.tt/2uJpsmp 📝 The paper "Depth-Aware Video Frame Interpolation" and its source code are available here: https://ift.tt/2uB8QNR The promised playlist with a TON of interpolated videos: https://www.youtube.com/playlist?list=PLDi8wAVyouYNDl7gGdSbWKdRxIogfeD3H 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://ift.tt/2icTBUb Far Cry video source by N00MKRAD: https://www.youtube.com/watch?v=tW0cvyut7Gk&list=PLDi8wAVyouYNDl7gGdSbWKdRxIogfeD3H&index=20 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/karoly_zsolnai Web: https://ift.tt/1NwkG9m #DainApp
SimCLR Explained!
SimCLR achieves the same top-1 ImageNet accuracy (~76.5%) as a ResNet-50 trained with supervised learning. SimCLR has also set a new high for semi-supervised learning on 1% and 10% of the labels, and it performs as well as transfer learning from models pre-trained on ImageNet classification! This video explains the details of the algorithm, such as the composition of data augmentations, the separate projection from representation to contrastive loss, and the role of scaling up in unsupervised learning! Links: SimCLR: https://ift.tt/31TZZTM CPC: https://ift.tt/2SUqOTJ ImageBERT: https://ift.tt/398zzAe Google AI Blog: Revisiting the unreasonable effectiveness of data: https://ift.tt/2sDB59U Thanks for watching! Please Subscribe!
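The heart of SimCLR is the NT-Xent contrastive loss between two augmented views of the same batch. A compact PyTorch sketch, with batch handling simplified relative to the paper:

    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.5):
        """Normalized temperature-scaled cross entropy: each view's positive
        is the other view of the same image; all other samples are negatives."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
        sim = z @ z.t() / temperature                        # cosine similarities
        n = z1.shape[0]
        sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
        # The positive for sample i is its augmented view at i+n (mod 2n).
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)   # projections of two views
    loss = nt_xent(z1, z2)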
Monday, February 17, 2020
AI Weekly Update - February 17th, 2020 (#16)
ZeRO & DeepSpeed: https://ift.tt/2ScRwYK Turing-NLG: https://ift.tt/2wpXZqQ DeepMind Compressive Transformer and PG-19: https://ift.tt/2uqNAdY SimCLR: https://ift.tt/31TZZTM Improved Consistency Regularization for GANs: https://ift.tt/2SyeRUX Growing Neural Cellular Automata: https://ift.tt/2ShegXn Kaggle Abstraction and Reasoning Challenge: https://ift.tt/2SqHIKK Facebook Read to Fight Monsters: https://ift.tt/2HyyqGo FastAI Paper: https://ift.tt/2V1l0uC AI2 RoboTHOR: https://ift.tt/37ziiit Google AI Learning to See Transparent Objects: https://ift.tt/38ooTNI Google AI AutoFlip: https://ift.tt/31SxshF MIT Sensorized Skin: https://ift.tt/39G48NW MIT Wikipedia Correction: https://ift.tt/2uc2x3q PyTorch3D Chamfer Loss Demo: https://ift.tt/37Cy9g9 NLP Newsletter: https://ift.tt/2Hzx3ay Thanks for watching! Please Subscribe!
Coronavirus Deep Learning Competition
Coronavirus is turning out to be one of the deadliest disease outbreaks of all time. The people that are fighting this disease need a solution now, not a year from now. As such, I'm hosting a 2-week $3500 Coronavirus Deep Learning Competition for anyone in this community to participate. The goal is to use deep learning to find a potential cure or treatment, then we'll send samples of the compound to the Wuhan Institute of Virology for further analysis. This is the perfect opportunity to show the world how open-source, community-driven AI can effect positive, relevant change. This is the AI-Human collaborative "Deep Blue" moment, a moment where, given AI tools, a human or group of humans will be able to accomplish an extraordinary feat that they couldn't otherwise. In this episode, I'll explain the details of the competition, the details of the Coronavirus genome, which teams are currently using AI to fight it, and we'll go through the necessary steps to generate and test candidate molecules with deep learning and PyRX (molecular docking software). Enjoy! Sign up page: https://ift.tt/3bKg6b9 TWITTER: https://bit.ly/2OHYLbB WEBSITE: https://bit.ly/2OoVPQF INSTAGRAM: https://bit.ly/312pLUb FACEBOOK: https://bit.ly/2OqOhx1 Subscribe for more educational videos! It means a lot to me. Are you a total beginner to machine learning? Watch this: https://www.youtube.com/watch?v=Cr6VqTRO1v0 Want to meet potential teammates? Try our slack channel: https://ift.tt/2mnZNXX Drug Discovery with GANs: https://www.youtube.com/watch?v=hY9Bc3mtphs Drug Engineering: https://www.youtube.com/watch?v=ya3AdrfKYzc Learn Python: https://www.youtube.com/watch?v=T5pRlIbr6gg Coronavirus news: https://ift.tt/2QR5Vt7 PyRX Download: https://ift.tt/38N686M AutoDock Vina: https://ift.tt/39LIjML LSTMChem Network: https://ift.tt/2OYMk8D 30+ Drug Discovery Neural nets: https://ift.tt/2P1rK7M Hit the Join button above to sign up to become a member of my channel for access to exclusive live streams! Signup for my newsletter for exciting updates in the field of AI: https://goo.gl/FZzJ5w Credits: World Health Organization TopazApe Image assets/animations are from across the Web, i take no credit for them (except some memes) And please support me on Patreon: https://ift.tt/2cMCk13
Saturday, February 15, 2020
Neural Portrait Relighting is Here!
❤️ Check out Weights & Biases here and sign up for a free demo here: https://ift.tt/2YuG7Yf Their blog post and example project are available here: - https://ift.tt/2O8LlCS - https://ift.tt/2XDAHqO 📝 The paper "Deep Single Image Portrait Relighting" is available here: https://ift.tt/2SudLtn ☀️ Our "Separable Subsurface Scattering" paper with source code is available here: https://ift.tt/2YhJnn0 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://ift.tt/2icTBUb - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://ift.tt/2icTBUb Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/karoly_zsolnai Web: https://ift.tt/1NwkG9m
Thursday, February 13, 2020
Generating Wikipedia by Summarizing Long Sequences
This video explores the paper "Generating Wikipedia by Summarizing Long Sequences". Natural Language Processing models that can generate summaries of source documents on a single topic, such as "Generative Adversarial Networks" or "Reinforcement Learning", are one of the NLP applications I find most interesting! This paper is frequently cited for introducing the Transformer decoder architecture, but there are many more interesting details in it. I also think the approximations to full attention proposed in this paper are really interesting! Paper Link: Thanks for watching! Please Subscribe!
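One of those approximations restricts self-attention to local blocks so that very long inputs fit in memory. A rough PyTorch sketch of block-local attention (block size and shapes are illustrative):

    import torch

    def local_attention(q, k, v, block=64):
        """Self-attention computed independently inside fixed-size blocks, a
        rough sketch of the local-attention idea (n must be divisible by block)."""
        b, n, d = q.shape
        q = q.view(b, n // block, block, d)
        k = k.view(b, n // block, block, d)
        v = v.view(b, n // block, block, d)
        att = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
        return (att @ v).reshape(b, n, d)

    x = torch.randn(1, 512, 32)
    out = local_attention(x, x, x)   # attends within 8 blocks of 64 tokens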
GPT2 Explained!
This video explores the GPT-2 paper "Language Models are Unsupervised Multitask Learners". The paper has this title because their experiments show how massive language models trained on massive datasets can perform tasks like Question Answering and Translation by carefully formatting them as language modeling inputs. Paper Links: GPT-2 Paper: https://ift.tt/37nmpy1 AllenNLP GPT-2 Demo: https://ift.tt/38reJvT The Illustrated GPT-2: https://ift.tt/2TnPzHT Combining GPT2 and BERT to make a fake person: https://ift.tt/2H60hxa Thanks for watching! Please Subscribe!
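The "format the task as language modeling" idea is easiest to see with the TL;DR trick from the paper: append a cue and let the model continue. A quick sketch using the Transformers library (the prompt text is my own):

    from transformers import pipeline

    generator = pipeline('text-generation', model='gpt2')

    # GPT-2 was shown to summarize zero-shot when prompted with "TL;DR:".
    article = 'A long news article would go here. ' * 5
    print(generator(article + ' TL;DR:', max_length=150)[0]['generated_text'])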
Wednesday, February 12, 2020
GPT Explained!
This video explains the original GPT model, "Improving Language Understanding by Generative Pre-Training". I think the key takeaways are understanding that they use a new unlabeled text dataset that requires the pre-training language modeling to incorporate longer range context, the way that they format input representations for supervised fine-tuning, and the different NLP tasks this is evaluated on! Paper Links: GPT: https://ift.tt/2HeACni DeepMind "A new model and dataset for long range memory": https://ift.tt/2uqNAdY SQuAD: https://ift.tt/2SKNJkC MultiNLI: https://ift.tt/2wcOOWJ RACE: https://ift.tt/2HjT24U Quora Question Pairs: https://ift.tt/30VBCTP CoLA: https://ift.tt/2SIZaZM Thanks for watching! Please Subscribe!
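The input formatting for supervised fine-tuning amounts to concatenating fields with special delimiter tokens. A schematic Python sketch; the token strings here are placeholders, not the actual learned embeddings:

    # GPT fine-tuning wraps each example in start/delimiter/extract tokens and
    # reads the classifier head from the final (extract) position.
    START, DELIM, EXTRACT = '<s>', '<$>', '<e>'

    def format_entailment(premise, hypothesis):
        return f'{START} {premise} {DELIM} {hypothesis} {EXTRACT}'

    def format_similarity(text1, text2):
        # Similarity is symmetric, so the paper feeds both orderings.
        return [format_entailment(text1, text2), format_entailment(text2, text1)]

    print(format_entailment('A man is sleeping.', 'A person is awake.'))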
Growing Neural Cellular Automata
The Game of Life on steroids! This model learns to grow complex patterns in an entirely local way. Each cell is trained to listen to its neighbors and update itself in a way such that, collectively, an overall goal is reached. Fascinating and interactive! https://ift.tt/2ShegXn https://ift.tt/1iAEhpS Abstract: Most multicellular organisms begin their life as a single egg cell - a single cell whose progeny reliably self-assemble into highly complex anatomies with many organs and tissues in precisely the same arrangement each time. The ability to build their own bodies is probably the most fundamental skill every living creature possesses. Morphogenesis (the process of an organism’s shape development) is one of the most striking examples of a phenomenon called self-organisation. Cells, the tiny building blocks of bodies, communicate with their neighbors to decide the shape of organs and body plans, where to grow each organ, how to interconnect them, and when to eventually stop. Understanding the interplay of the emergence of complex outcomes from simple rules and homeostatic feedback loops is an active area of research. What is clear is that evolution has learned to exploit the laws of physics and computation to implement the highly robust morphogenetic software that runs on genome-encoded cellular hardware. This process is extremely robust to perturbations. Even when the organism is fully developed, some species still have the capability to repair damage - a process known as regeneration. Some creatures, such as salamanders, can fully regenerate vital organs, limbs, eyes, or even parts of the brain! Morphogenesis is a surprisingly adaptive process. Sometimes even a very atypical development process can result in a viable organism - for example, when an early mammalian embryo is cut in two, each half will form a complete individual - monozygotic twins! The biggest puzzle in this field is the question of how the cell collective knows what to build and when to stop. The sciences of genomics and stem cell biology are only part of the puzzle, as they explain the distribution of specific components in each cell, and the establishment of different types of cells. While we know of many genes that are required for the process of regeneration, we still do not know the algorithm that is sufficient for cells to know how to build or remodel complex organs to a very specific anatomical end-goal. Thus, one major lynch-pin of future work in biomedicine is the discovery of the process by which large-scale anatomy is specified within cell collectives, and how we can rewrite this information to have rational control of growth and form. It is also becoming clear that the software of life possesses numerous modules or subroutines, such as “build an eye here”, which can be activated with simple signal triggers. Discovery of such subroutines and a mapping out of the developmental logic is a new field at the intersection of developmental biology and computer science. An important next step is to try to formulate computational models of this process, both to enrich the conceptual toolkit of biologists and to help translate the discoveries of biology into better robotics and computational technology. Imagine if we could design systems of the same plasticity and robustness as biological life: structures and machines that could grow and repair themselves.
Such technology would transform the current efforts in regenerative medicine, where scientists and clinicians seek to discover the inputs or stimuli that could cause cells in the body to build structures on demand as needed. To help crack the puzzle of the morphogenetic code, and also exploit the insights of biology to create self-repairing systems in real life, we try to replicate some of the desired properties in an in silico experiment. Authors: Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson, Michael Levin
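A minimal sketch of the local update rule described above, in PyTorch: each cell perceives its neighborhood with fixed filters, then a small learned network proposes a state change. The filter choice follows the article's description, but the details (no alive masking, no stochastic updates) are heavily simplified:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    channels = 16
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8
    identity = torch.zeros(3, 3); identity[1, 1] = 1.0
    kernels = torch.stack([identity, sobel_x, sobel_x.t()])       # (3, 3, 3)
    kernels = kernels.repeat(channels, 1, 1).unsqueeze(1)         # (48, 1, 3, 3)

    update = nn.Sequential(nn.Conv2d(channels * 3, 128, 1), nn.ReLU(),
                           nn.Conv2d(128, channels, 1))           # per-cell MLP

    def ca_step(grid):
        """One CA step: perceive neighbors with fixed depthwise filters, apply
        the learned per-cell update, and add it to the state (residual update)."""
        perception = F.conv2d(grid, kernels, padding=1, groups=channels)
        return grid + update(perception)

    grid = torch.zeros(1, channels, 64, 64)
    grid[:, :, 32, 32] = 1.0          # seed a single "cell" in the middle
    grid = ca_step(grid)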
Tuesday, February 11, 2020
Neural Structured Learning - Part 4: Adversarial learning for image classification
Welcome to the 4th episode of this Neural Structured Learning series. In this video, we are going to talk about learning with implicit structured signals constructed from adversarial learning. Links: Guide & Tutorials → https://goo.gle/2SBwQIH Coding TensorFlow → https://goo.gle/2Y43cN4 Subscribe to the TensorFlow channel → https://goo.gle/TensorFlow
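The adversarial signals are typically generated by perturbing inputs along the loss gradient. A minimal FGSM-style sketch in TensorFlow (epsilon and loss choice are illustrative):

    import tensorflow as tf

    def fgsm_perturb(model, x, y, eps=0.05):
        """Fast-gradient-sign perturbation: nudge each pixel in the direction
        that increases the loss, producing an adversarial neighbor of x."""
        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        with tf.GradientTape() as tape:
            tape.watch(x)
            loss = loss_fn(y, model(x))
        grad = tape.gradient(loss, x)
        return x + eps * tf.sign(grad)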
CUDA Neural Networks
CUDA stands for Compute Unified Device Architecture, and it’s the reason popular deep learning libraries like Tensorflow & PyTorch are considered “GPU-accelerated”. CUDA is Nvidia’s programming platform that enables developers to leverage the full parallel processing capabilities of GPUs for deep learning applications. Almost all of the major deep learning libraries use CUDA under the hood, but it’s not really something that most developers think about often. In this episode, I’ll demo some progressively more complex CUDA examples by Nvidia to show you how using CUDA results in algorithmic speedups. We’ll use Nvidia’s profiler to clock speeds, then we’ll analyze a pure-CUDA neural network by Sergey Bugrov to understand what a full neural network pipeline on the GPU looks like. Enjoy! TWITTER: https://bit.ly/2OHYLbB WEBSITE: https://bit.ly/2OoVPQF INSTAGRAM: https://bit.ly/312pLUb FACEBOOK: https://bit.ly/2OqOhx1 Subscribe for more educational videos! It means a lot to me. Notebook shown in the video can be found here. It’s kind of messy! It’s a compilation of various code samples by the Nvidia team + Sergey’s neural network. It’s also got the CUDA install steps for Colab: https://bit.ly/2uzg9FZ Nvidia’s CUDA Documentation: https://ift.tt/39p1USZ Some awesome tutorials by Nvidia on CUDA that helped me: https://ift.tt/2ShN7Uf https://ift.tt/3bqURLk https://ift.tt/2N22RIw Are you a total beginner to machine learning? Watch this: https://www.youtube.com/watch?v=Cr6VqTRO1v0 Learn Python: https://www.youtube.com/watch?v=T5pRlIbr6gg Live C Programming: https://www.youtube.com/watch?v=giF8XoPTMFg CUDA Explained: https://www.youtube.com/watch?v=1cHx1baKqq0 Hit the Join button above to sign up to become a member of my channel for access to exclusive live streams! Signup for my newsletter for exciting updates in the field of AI: https://goo.gl/FZzJ5w Credits: Nvidia team Sergey Bugrov Image assets are from across the Web, I take no credit for them And please support me on Patreon: https://ift.tt/2cMCk13
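To keep this post's examples in Python, here is the classic vector-add kernel via Numba's CUDA bindings rather than raw C (Numba is my substitution; the video itself uses Nvidia's C samples):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def vec_add(a, b, out):
        i = cuda.grid(1)            # global thread index
        if i < out.size:            # guard against out-of-range threads
            out[i] = a[i] + b[i]

    n = 1 << 20
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vec_add[blocks, threads_per_block](a, b, out)   # one thread per element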
OpenAI Performs Surgery On A Neural Network to Play DOTA 2
❤️ Check out Linode here and get $20 free credit on your account: https://ift.tt/2LaDQJb 📝 The paper "Dota 2 with Large Scale Deep Reinforcement Learning" is available here: https://ift.tt/2SiJYDH https://ift.tt/2sopEGy 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://ift.tt/2icTBUb Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/karoly_zsolnai Web: https://ift.tt/1NwkG9m
Turing-NLG, DeepSpeed and the ZeRO optimizer
Microsoft has trained a 17-billion parameter language model that achieves state-of-the-art perplexity. This video takes a look at the ZeRO optimizer that enabled this breakthrough. ZeRO allows you to do model- and data-parallelism without having huge cuts in training speed. https://ift.tt/2OFzlJa https://ift.tt/2ScRwYK https://ift.tt/2S8L84U https://ift.tt/2OMUTU5 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB
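The core idea, stripped to a toy: each data-parallel rank keeps optimizer state for only a shard of the parameters instead of all of them. A conceptual single-process PyTorch sketch (real ZeRO also partitions gradients and handles the communication):

    import torch

    def my_shard(params, rank, world_size):
        """Round-robin assignment: each rank owns optimizer state for only
        1/world_size of the parameters, which is where the memory saving comes from."""
        return [p for i, p in enumerate(params) if i % world_size == rank]

    params = [torch.randn(512, 512, requires_grad=True) for _ in range(8)]
    rank, world_size = 0, 4
    optimizer = torch.optim.Adam(my_shard(params, rank, world_size))
    # After each backward pass, gradients are all-reduced as usual; each rank
    # updates only its shard, then the updated values are broadcast to all ranks.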
Monday, February 10, 2020
Cheapest Deep Learning PC in 2020
Deep Learning is the most exciting subfield of Artificial Intelligence, yet the necessary hardware costs keep many people from participating in its research and development. I wanted to see just how cheap a deep learning PC could be built for in 2020, so I did some research and put together a deep learning PC build containing brand new parts that comes out to about 450 US dollars. I chose NewEgg for the parts because it has a global shipping policy; deep learning belongs to the world, not just the United States. In this episode, I’m going to walk you through what the deep learning stack looks like (CUDA, Jupyter, PyTorch, etc.), why I chose the various hardware components, and then I’ll show you how to set up the full deep learning software stack on your PC. Enjoy! TWITTER: https://bit.ly/2OHYLbB WEBSITE: https://bit.ly/2OoVPQF INSTAGRAM: https://bit.ly/312pLUb FACEBOOK: https://bit.ly/2OqOhx1 Please subscribe for more educational videos! It means a lot to me. DIY Deep Learning PC parts list (about $450): ------------------------------------------------------- GPU (GTX 1650): https://bit.ly/31Jb4Hu Motherboard (MSI A320M ): https://bit.ly/2uyCaop Hard Drive (Seagate Firecuda 1TB): https://bit.ly/2tKxUSi RAM (SK Hynix 8 GB): https://bit.ly/2UMpWD8 Power Supply (Corsair 450W): https://bit.ly/2w74yOT CPU (AMD Ryzen 3 Series 4 Core 3.1 ghz): https://bit.ly/31HwiFl PC Case (2 fans built-in): https://bit.ly/39p7IMo -------------------------------------------------------- Note* - each part price is always fluctuating +/- 10 dollars in price The ABS $600 pre-built pc: https://bit.ly/2OJjcCh PyTorch’s Image Classifier Example: https://ift.tt/2ErR9lj Linus Tech Tips POV PC Build Guide: https://www.youtube.com/watch?v=v7MYOpFONCU Instructables PC Build Guide: https://ift.tt/3blIn7P Nvidia’s CUDA Documentation: https://ift.tt/39p1USZ Docker: http://docker.com/ Petronetto’s Deep Learning Docker Image: https://ift.tt/37ifC8C Another Deep Learning Docker Image: https://ift.tt/2D2Wn6x Are you a total beginner to machine learning? Watch this: https://www.youtube.com/watch?v=Cr6VqTRO1v0 Learn Python: https://www.youtube.com/watch?v=T5pRlIbr6gg Live C Programming: https://www.youtube.com/watch?v=giF8XoPTMFg CUDA Explained: https://www.youtube.com/watch?v=1cHx1baKqq0 Hit the Join button above to sign up to become a member of my channel for access to exclusive live streams! Signup for my newsletter for exciting updates in the field of AI: https://goo.gl/FZzJ5w Can't afford a PC right now? That's OK, use Google Colab for a free cloud GPU: https://ift.tt/2zxtOdA Credits: Nvidia team PyTorch team Image/GIF assets are from across the Web, I take no credit for them (except some memes) Comedy Central (“Nathan for you” clip) And please support me on Patreon: https://ift.tt/2cMCk13
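Once the stack is installed, a quick sanity check that PyTorch actually sees the GPU:

    import torch

    print(torch.cuda.is_available())          # True if CUDA + drivers are set up
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # e.g. 'GeForce GTX 1650'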
Saturday, February 8, 2020
This Neural Network Restores Old Videos
❤️ Check out Weights & Biases here and sign up for a free demo: https://ift.tt/2YuG7Yf Their blog post on training neural networks is available here: https://ift.tt/2NFJght 📝 The paper "DeepRemaster: Temporal Source-Reference Attention Networks for Comprehensive Video Enhancement" is available here: https://ift.tt/39kynKc 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://ift.tt/2icTBUb Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/karoly_zsolnai Web: https://ift.tt/1NwkG9m
Friday, February 7, 2020
ImageBERT
This video explores the ImageBERT model from Microsoft Research! This is a really interesting combination of vision and language tokens to achieve state of the art results on MSCOCO and Flickr30k image and text retrieval tasks! I hope this video helped you get a better sense of how image and text tokens can be combined in the transformer architecture and how self-attention uses visual tokens to inform the text task output of BERT's masked language modeling! Paper Links: ImageBERT: https://ift.tt/398zzAe Conceptual Captions: https://ift.tt/2M1bcIx Thanks for watching! Please Subscribe!
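Mechanically, combining the modalities just means projecting image-region features to the transformer width and concatenating them with the text embeddings. A rough PyTorch sketch; all sizes here are illustrative, not ImageBERT's actual configuration:

    import torch
    import torch.nn as nn

    dim = 768
    text_embed = nn.Embedding(30522, dim)      # wordpiece vocabulary
    visual_proj = nn.Linear(2048, dim)         # RoI features -> transformer width

    tokens = torch.randint(0, 30522, (1, 16))  # 16 text tokens
    regions = torch.randn(1, 8, 2048)          # 8 detected image regions

    # One shared sequence lets self-attention mix text and visual tokens.
    sequence = torch.cat([text_embed(tokens), visual_proj(regions)], dim=1)
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True)
    output = nn.TransformerEncoder(layer, num_layers=2)(sequence)   # (1, 24, 768)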