A resource of free, step-by-step video guides to get you started with machine learning.
Wednesday, June 30, 2021
[ML News] CVPR bans social media paper promotion | AI restores Rembrandt | GPU prices down
#cvpr #socialmedia #machinelearning

In this week's ML news we look at CVPR's controversial action to ban paper promotions on social media during the review phase, among other things!

OUTLINE:
0:00 - Intro & Overview
0:25 - CVPR bans social media paper discussions
5:10 - WalMart uses AI to suggest substitutions
6:05 - NVIDIA releases Alias-Free GAN
7:30 - Confession video in Myanmar possibly a deepfake
8:50 - AI restores Rembrandt painting
10:40 - AI for healthcare not problem-free yet
11:50 - ML interviews book
12:15 - NVIDIA Canvas turns sketches into paintings
13:00 - GPU prices down after crypto shock
13:30 - Facebook AI improves shopping experience
14:05 - DeepLab2 released on GitHub
14:35 - Toxic language models: nobody cares
16:55 - Does AI have common sense?

References:
CVPR forbids social media promotion: https://twitter.com/wjscheirer/status/1408507154219384834
WalMart uses AI to substitute out-of-stock products: https://ift.tt/3A2Ng1J
NVIDIA releases Alias-Free GAN: https://ift.tt/35LCFuf
Myanmar politician's confession could be a deepfake: https://ift.tt/3xTnKub
Rembrandt restored using AI: https://ift.tt/3zYr4G7
AI in healthcare still shaky: https://ift.tt/3xIWtKH https://ift.tt/3vIq59P
ML interviews book: https://ift.tt/3gJNOSB
NVIDIA Canvas beta available: https://ift.tt/3xKlvJp
GPU prices down as China cracks down on crypto: https://ift.tt/3zHVkoy
Facebook AI's big goal of improving shopping: https://ift.tt/3dqyc4C
Google AI releases DeepLab2: https://ift.tt/3zI8l1d
Toxic language models: nobody cares: https://ift.tt/3Aen5Fz
AI has no common sense: https://ift.tt/35SaWrP https://6b.eleuther.ai/

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ift.tt/3dJpBrR
BitChute: https://ift.tt/38iX6OV
Minds: https://ift.tt/37igBpB
Parler: https://ift.tt/38tQU7C
LinkedIn: https://ift.tt/2Zo6XRA
BiliBili: https://ift.tt/3mfyjkW

If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://ift.tt/2DuKOZ3
Patreon: https://ift.tt/390ewRH
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Sunday, June 27, 2021
The Dimpled Manifold Model of Adversarial Examples in Machine Learning (Research Paper Explained)
#adversarialexamples #dimpledmanifold #security

Adversarial examples have long been a fascinating topic for many machine learning researchers. How can a tiny perturbation cause the neural network to change its output by so much? While many explanations have been proposed over the years, they all appear to fall short. This paper attempts to comprehensively explain the existence of adversarial examples by proposing a new view of the classification landscape, the Dimpled Manifold Model: any classifier will adjust its decision boundary to align with the low-dimensional data manifold and only slightly bend around the data. This potentially explains many phenomena around adversarial examples. Warning: in this video, I disagree. Remember that I'm not an authority, but simply give my own opinions.

OUTLINE:
0:00 - Intro & Overview
7:30 - The old mental image of adversarial examples
11:25 - The new Dimpled Manifold Hypothesis
22:55 - The Stretchy Feature Model
29:05 - Why do DNNs create dimpled manifolds?
38:30 - What can be explained with the new model?
1:00:40 - Experimental evidence for the Dimpled Manifold Model
1:10:25 - Is Goodfellow's claim debunked?
1:13:00 - Conclusion & Comments

Paper: https://ift.tt/3qsSO1f
My replication code: https://ift.tt/3quYTu1
Goodfellow's talk: https://youtu.be/CIfsB_EYsVI?t=4280

Abstract: The extreme fragility of deep neural networks when presented with tiny perturbations in their inputs was independently discovered by several research groups in 2013, but in spite of enormous effort these adversarial examples remained a baffling phenomenon with no clear explanation. In this paper we introduce a new conceptual framework (which we call the Dimpled Manifold Model) which provides a simple explanation for why adversarial examples exist, why their perturbations have such tiny norms, why these perturbations look like random noise, and why a network which was adversarially trained with incorrectly labeled images can still correctly classify test images. In the last part of the paper we describe the results of numerous experiments which strongly support this new model, and in particular our assertion that adversarial perturbations are roughly perpendicular to the low dimensional manifold which contains all the training examples.

Authors: Adi Shamir, Odelia Melamed, Oriel BenShmuel
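The puzzle the paper addresses — a perturbation of tiny norm flipping a decision — already shows up in a linear toy model: in high dimensions, a gradient-sign (FGSM-style) step moves the logit a lot while barely moving the input. A minimal NumPy sketch (my own construction, not from the paper):

```python
import numpy as np

# Toy illustration: for a linear "classifier" w in high dimension, an
# FGSM-style step of ~1% relative norm flips the sign of the logit.

rng = np.random.default_rng(0)
d = 1000
w = rng.standard_normal(d)                 # stand-in for network weights
x = rng.standard_normal(d)
x -= (x @ w) / (w @ w) * w                 # remove the component along w ...
x += 0.1 * w / np.linalg.norm(w)           # ... then give x a small positive margin

eps = 0.01
x_adv = x - eps * np.sign(w)               # gradient-sign perturbation

print(np.sign(w @ x), np.sign(w @ x_adv))             # 1.0 -1.0: decision flips
print(np.linalg.norm(x_adv - x) / np.linalg.norm(x))  # ~0.01 relative change
```

The sign step accumulates `eps * |w_i|` across all 1000 coordinates, so the logit shifts by roughly `eps * sum(|w|) ≈ 8` while the input moves by only `eps * sqrt(d) ≈ 0.3`.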
Saturday, June 26, 2021
Simulating The Olympics… On Mars! 🌗
❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/35NkCT7 📝 The paper "Discovering Diverse Athletic Jumping Strategies" is available here: https://ift.tt/3ayOOFT 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m
Artificial Intelligence Workshop conducted virtually for Grades 9-12 by WiselyWise
Register your school - http://bit.ly/WWSchools

WiselyWise successfully conducted a three-day Artificial Intelligence virtual workshop for students from Grades 9-12. As you may be aware, we are all part of the AI revolution. Artificial Intelligence is already impacting all aspects of our daily lives and will be a big part of our younger generations' future. Learning AI early gives students an edge in getting ready for the AI future. WiselyWise conducted this informative and interactive online workshop for students of Grades 9-12. The workshop was designed to give students exposure to creative thinking, problem-solving, and mathematical and computational thinking. With fun games and activities, students were introduced to concepts in AI, Machine Learning, Computer Vision, NLP and Robotics. Here's a glimpse of their journey with us in this workshop. Register your school - http://bit.ly/WWSchools

#ai #aiworkshop #learning #WiselyWise #aieducationforschools
Thursday, June 24, 2021
Building AI models for healthcare (ML Tech Talks)
In this session of Machine Learning Tech Talks, Product Manager Lily Peng discusses three common myths in building AI models for healthcare.

Chapters:
0:00 - Introduction
1:48 - Myth #1: More data is all you need for a better model
6:58 - Myth #2: An accurate model is all you need for a useful product
9:15 - Myth #3: A good product is sufficient for clinical impact
12:19 - Conversation with Kira Whitehouse, Software Engineer
34:48 - Conversation with Scott McKinney, Software Engineer

Resources:
Deep Learning for Detection of Diabetic Eye Disease: Gulshan et al., "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs," JAMA, 2016 → https://goo.gle/3gVhTxs
A major milestone for the treatment of eye disease: De Fauw et al., "Clinically applicable deep learning for diagnosis and referral in retinal disease," Nature Medicine, September 2018 → https://goo.gle/35Sfs9C
Assessing Cardiovascular Risk Factors with Computer Vision: Poplin et al., "Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning," Nature Biomedical Engineering, March 2018 → https://goo.gle/3qkg01I
Improving the Effectiveness of Diabetic Retinopathy Models: Krause et al., "Grader Variability and the Importance of Reference Standards for Evaluating Machine Learning Models for Diabetic Retinopathy," Ophthalmology, August 2018 → https://goo.gle/3gR8d8n
Raumviboonsuk et al., "Deep learning versus human graders for classifying diabetic retinopathy severity in a nationwide screening program," NPJ Digital Medicine, April 2019 → https://goo.gle/2SmyXUO
Healthcare AI systems that put people at the center: Beede et al., "A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy," CHI '20, April 2020 → https://goo.gle/3ja6TyP
Yuchen Xie MScPH, Quang D. Nguyen BEng, Haslina Hamzah BSc, Gilbert Lim, Valentina Bellemo MSc, Dinesh V. Gunasekeran MBBS, Michelle Y. Yip, et al., "Artificial intelligence for teleophthalmology-based diabetic retinopathy screening in a national programme: an economic analysis modelling study," The Lancet → https://goo.gle/3zVec3q

Catch more ML Tech Talks → http://goo.gle/ml-tech-talks
Subscribe to TensorFlow → https://goo.gle/TensorFlow
Machine Learning AI, Pygame and PyTorch Simulation | Q-learning
This is my first attempt at machine learning. To avoid the boredom of the "Hello World" of AI tutorials ("is it a cat or a dog?"), I made this simulation. I used pygame to program a simple game for my AI. The algorithm is Q-learning, implemented with PyTorch in Python. The agent gets coordinate and velocity inputs; no image processing. After 82 epochs the success rate drops to 54% ... :( Maybe PPO will help in the future; I have to learn it. #AI #machinelearning #qlearning #python #pygame #pytorch
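For readers who want the core of the algorithm without the game, here is a minimal tabular Q-learning sketch on a hypothetical 1-D "reach the goal" environment. Everything below is my illustrative stand-in — the video's agent is a PyTorch model over continuous coordinate/velocity inputs — but the update rule is the same:

```python
import random

# Tabular Q-learning on a tiny line world: states 0..4, actions 0 = left,
# 1 = right, reward 1 for reaching the terminal state 4.

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(5)]          # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda i: q[s][i])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == 4 else 0.0
            # Q-learning update: bootstrap from the greedy value of s2
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(4)]
print(policy)   # greedy policy after training: move right in every state
```

The learned values propagate backwards from the goal with discount gamma, so the greedy policy settles on "always right" even though exploration is random.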
[ML News] Hugging Face course | GAN Theft Auto | AI Programming Puzzles | PyTorch 1.9 Released
#mlnews #gta #weather

In this week's ML News, we look at the latest developments in the Machine Learning and AI world with updates from research, industry, and society at large.

OUTLINE:
0:00 - Intro
0:20 - Hugging Face launches free course
1:30 - Sentdex releases GAN Theft Auto
2:25 - Facebook uses AI to help moderators
4:10 - Weather with Antonio
5:10 - Autonomous ship aborts mission
7:25 - PyTorch release 1.9
8:30 - McDonald's new AI drive-thru
10:20 - UBS CEO says AI won't replace humans
12:20 - Gödel paper has 90th birthday
12:55 - AugLy data augmentation library
13:20 - Programming Puzzles for autonomous coding
14:30 - Boston Dynamics' Spot turns 1

References:
PyTorch 1.9 released: https://ift.tt/3wVqV4e
Hugging Face launches course: https://ift.tt/3glOX2G
90 years of Gödel's theory: https://ift.tt/3vy14ho
AugLy: a data augmentation library: https://ift.tt/35vg7Oj
Sentdex builds GAN Theft Auto: https://ift.tt/3xwsnKd
Spot turns 1: https://ift.tt/35sZVx6
Autonomous ship aborts mission: https://ift.tt/3vCGwnV https://ift.tt/3d6vKjd
McDonald's tests AI drive-thru: https://ift.tt/3vKno7w
Facebook uses AI to moderate conversations: https://ift.tt/3qphCaA
UBS CEO says AI won't replace financial advisors: https://ift.tt/3cQSTWO
Wednesday, June 23, 2021
XCiT: Cross-Covariance Image Transformers (Facebook AI Machine Learning Research Paper Explained)
#xcit #transformer #attentionmechanism

After dominating Natural Language Processing, Transformers have taken over Computer Vision recently with the advent of Vision Transformers. However, the attention mechanism's quadratic complexity in the number of tokens means that Transformers do not scale well to high-resolution images. XCiT is a new Transformer architecture containing XCA, a transposed version of attention that reduces the complexity from quadratic to linear, and at least on image data it appears to perform on par with other models. What does this mean for the field? Is this even a transformer? What really matters in deep learning?

OUTLINE:
0:00 - Intro & Overview
3:45 - Self-Attention vs Cross-Covariance Attention (XCA)
19:55 - Cross-Covariance Image Transformer (XCiT) Architecture
26:00 - Theoretical & Engineering Considerations
30:40 - Experimental Results
33:20 - Comments & Conclusion

Paper: https://ift.tt/3gPTomx
Code: https://ift.tt/3zEu3mL

Abstract: Following their success in natural language processing, transformers have recently shown much promise for computer vision. The self-attention operation underlying transformers yields global interactions between all tokens, i.e. words or image patches, and enables flexible modelling of image data beyond the local interactions of convolutions. This flexibility, however, comes with a quadratic complexity in time and memory, hindering application to long sequences and high-resolution images. We propose a "transposed" version of self-attention that operates across feature channels rather than tokens, where the interactions are based on the cross-covariance matrix between keys and queries. The resulting cross-covariance attention (XCA) has linear complexity in the number of tokens, and allows efficient processing of high-resolution images. Our cross-covariance image transformer (XCiT) is built upon XCA. It combines the accuracy of conventional transformers with the scalability of convolutional architectures. We validate the effectiveness and generality of XCiT by reporting excellent results on multiple vision benchmarks, including image classification and self-supervised feature learning on ImageNet-1k, object detection and instance segmentation on COCO, and semantic segmentation on ADE20k.

Authors: Alaaeldin El-Nouby, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, Hervé Jegou
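The "transposed" attention idea can be sketched at the shape level in plain NumPy: ordinary self-attention builds an N × N map over tokens, while cross-covariance attention builds a d × d map over feature channels, so its cost in the attention map is independent of the token count N. This is a simplified single-head sketch of my own (the real XCiT adds learned projections, multiple heads, a learned temperature, and local patch interaction blocks):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(q, k, v):
    # (N, d) tokens -> (N, N) token-token map: quadratic in token count N
    a = softmax(q @ k.T / np.sqrt(q.shape[1]))
    return a @ v

def xca(q, k, v, tau=1.0):
    # normalise each feature channel over the token axis, then attend
    # across the d channels: the map is (d, d), linear in N
    qn = q / np.linalg.norm(q, axis=0, keepdims=True)
    kn = k / np.linalg.norm(k, axis=0, keepdims=True)
    a = softmax(kn.T @ qn / tau)        # (d, d) cross-covariance map
    return v @ a                        # tokens are mixed only per-channel

N, d = 196, 64                          # e.g. 14 x 14 image patches, 64 channels
rng = np.random.default_rng(0)
q, k, v = rng.standard_normal((3, N, d))
print(self_attention(q, k, v).shape, xca(q, k, v).shape)  # both (196, 64)
```

Both paths produce (N, d) outputs, but doubling the number of patches doubles XCA's cost while quadrupling the self-attention map.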
Tuesday, June 22, 2021
Burning Down an Entire Virtual Forest! 🌲🔥
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd 📝 The paper "Fire in Paradise: Mesoscale Simulation of Wildfires" is available here: https://ift.tt/3j2W4Pd #gamedev
Saturday, June 19, 2021
Glitter Simulation, Now Faster Than Ever! ✨
❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2S5tXnb ❤️ Their mentioned post is available here: https://ift.tt/39vhPCn 📝 The paper "Slope-Space Integrals for Specular Next Event Estimation" is available here: https://ift.tt/3xvIlo4 ☀️ Free rendering course: https://ift.tt/2rdtvDu 🔮 Paper with the difficult scene: https://ift.tt/3iT89qf
AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control (Paper Explained)
#reinforcementlearning #gan #imitationlearning

Learning from demonstrations is a fascinating topic, but what if the demonstrations are not exactly the behaviors we want to learn? Can we adhere to a dataset of demonstrations and still achieve a specified goal? This paper uses GANs to combine goal-achieving reinforcement learning with imitation learning, and learns to perform well at a given task while doing so in the style of a presented dataset. The resulting behaviors include many realistic-looking transitions between the demonstrated movements.

OUTLINE:
0:00 - Intro & Overview
1:25 - Problem Statement
6:10 - Reward Signals
8:15 - Motion Prior from GAN
14:10 - Algorithm Overview
20:15 - Reward Engineering & Experimental Results
30:40 - Conclusion & Comments

Paper: https://ift.tt/2S9Uwb0
Main video: https://www.youtube.com/watch?v=wySUxZN_KbM
Supplementary video: https://www.youtube.com/watch?v=O6fBSMxThR4

Abstract: Synthesizing graceful and life-like behaviors for physically simulated characters has been a fundamental challenge in computer animation. Data-driven methods that leverage motion tracking are a prominent class of techniques for producing high fidelity motions for a wide range of behaviors. However, the effectiveness of these tracking-based methods often hinges on carefully designed objective functions, and when applied to large and diverse motion datasets, these methods require significant additional machinery to select the appropriate motion for the character to track in a given scenario. In this work, we propose to obviate the need to manually design imitation objectives and mechanisms for motion selection by utilizing a fully automated approach based on adversarial imitation learning. High-level task objectives that the character should perform can be specified by relatively simple reward functions, while the low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips, without any explicit clip selection or sequencing. These motion clips are used to train an adversarial motion prior, which specifies style-rewards for training the character through reinforcement learning (RL). The adversarial RL procedure automatically selects which motion to perform, dynamically interpolating and generalizing from the dataset. Our system produces high-quality motions that are comparable to those achieved by state-of-the-art tracking-based techniques, while also being able to easily accommodate large datasets of unstructured motion clips. Composition of disparate skills emerges automatically from the motion prior, without requiring a high-level motion planner or other task-specific annotations of the motion clips. We demonstrate the effectiveness of our framework on a diverse cast of complex simulated characters and a challenging suite of motor control tasks.

Authors: Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa
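The reward composition described above — a simple task reward mixed with a discriminator-derived style reward — can be sketched in a few lines. Function names and weights below are mine, not the paper's code; the style term uses a least-squares-GAN-style shaping of the kind the paper describes, where `d` is the discriminator's score on a state transition produced by the policy:

```python
def style_reward(d):
    # high when the discriminator scores the transition as dataset-like (d near 1),
    # clipped at zero so clearly off-manifold transitions earn no style reward
    return max(0.0, 1.0 - 0.25 * (d - 1.0) ** 2)

def amp_reward(task_r, d, w_task=0.5, w_style=0.5):
    # total RL reward: weighted mix of the task objective and the motion prior
    return w_task * task_r + w_style * style_reward(d)

# a transition that looks like the mocap data vs. one that clearly does not
print(amp_reward(task_r=1.0, d=1.0), amp_reward(task_r=1.0, d=-1.0))
```

The policy therefore only gets full reward when it both progresses on the task and moves in a way the discriminator cannot distinguish from the demonstration clips.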
Wednesday, June 16, 2021
[ML News] De-Biasing GPT-3 | RL cracks chip design | NetHack challenge | Open-Source GPT-J
OUTLINE:
0:00 - Intro
0:30 - Google RL creates next-gen TPUs
2:15 - Facebook launches NetHack challenge
3:50 - OpenAI mitigates bias by fine-tuning
9:05 - Google AI releases browsable reconstruction of human cortex
9:50 - GPT-J: 6B transformer in JAX
12:00 - TensorFlow launches Forum
13:50 - Text style transfer from a single word
15:45 - ALiEn artificial life simulator

My video on chip placement: https://youtu.be/PDRtyrVskMU

References:
RL creates next-gen TPUs: https://ift.tt/3iyFG8P https://www.youtube.com/watch?v=PDRtyrVskMU
Facebook launches NetHack challenge: https://ift.tt/356DbCH
Mitigating bias by fine-tuning: https://ift.tt/3gu2FR2
Human cortex 3D reconstruction: https://ift.tt/3vTSWIQ
GPT-J: an open-source 6B transformer: https://ift.tt/3w8Pla5 https://6b.eleuther.ai/ https://ift.tt/3iT6e4G
TensorFlow launches "Forum": https://ift.tt/3gBRPYa
Text style transfer from a single word: https://ift.tt/3izg8sj
ALiEn life simulator: https://ift.tt/3fVtAFb
Tuesday, June 15, 2021
Google’s New AI Puts Video Calls On Steroids! 💪
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "Total Relighting: Learning to Relight Portraits for Background Replacement" is available here: https://ift.tt/3gVNh0m Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3
Saturday, June 12, 2021
This is Grammar For Robots. What? Why? 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/35NkCT7 📝 The paper "RoboGrammar: Graph Grammar for Terrain-Optimized Robot Design" is available here: https://ift.tt/2Tn99Z1 Building grammar paper: https://ift.tt/2TWZcl7 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://ift.tt/2icTBUb - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join
Friday, June 11, 2021
Efficient and Modular Implicit Differentiation (Machine Learning Research Paper Explained)
#implicitfunction #jax #autodiff

Many problems in Machine Learning involve loops of inner and outer optimization. Finding update steps for the outer loop is usually difficult, because of the need to differentiate through the inner loop's procedure over multiple steps. Such loop unrolling is very limited and constrained to very few steps. Other papers have found solutions around unrolling in very specific, individual problems. This paper proposes a unified framework for implicit differentiation of inner optimization procedures without unrolling, and provides implementations that integrate seamlessly into JAX.

OUTLINE:
0:00 - Intro & Overview
2:05 - Automatic Differentiation of Inner Optimizations
4:30 - Example: Meta-Learning
7:45 - Unrolling Optimization
13:00 - Unified Framework Overview & Pseudocode
21:10 - Implicit Function Theorem
25:45 - More Technicalities
28:45 - Experiments

ERRATA: Dataset distillation is done with respect to the training set, not the validation or test set.

Paper: https://ift.tt/3xfBBuh
Code coming soon

Abstract: Automatic differentiation (autodiff) has revolutionized machine learning. It allows expressing complex computations by composing elementary ones in creative ways and removes the burden of computing their derivatives by hand. More recently, differentiation of optimization problem solutions has attracted widespread attention with applications such as optimization as a layer, and in bi-level problems such as hyper-parameter optimization and meta-learning. However, the formulas for these derivatives often involve case-by-case tedious mathematical derivations. In this paper, we propose a unified, efficient and modular approach for implicit differentiation of optimization problems. In our approach, the user defines (in Python in the case of our implementation) a function F capturing the optimality conditions of the problem to be differentiated. Once this is done, we leverage autodiff of F and implicit differentiation to automatically differentiate the optimization problem. Our approach thus combines the benefits of implicit differentiation and autodiff. It is efficient as it can be added on top of any state-of-the-art solver and modular as the optimality condition specification is decoupled from the implicit differentiation mechanism. We show that seemingly simple principles allow to recover many recently proposed implicit differentiation methods and create new ones easily. We demonstrate the ease of formulating and solving bi-level optimization problems using our framework. We also showcase an application to the sensitivity analysis of molecular dynamics.

Authors: Mathieu Blondel, Quentin Berthet, Marco Cuturi, Roy Frostig, Stephan Hoyer, Felipe Llinares-López, Fabian Pedregosa, Jean-Philippe Vert
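The core trick — differentiate the inner solution through its optimality condition F instead of unrolling a solver — can be shown on a tiny ridge-regression inner problem. This is my own hedged sketch in NumPy (the paper's framework is far more general and hooks into JAX autodiff); it compares the implicit-function-theorem gradient of w*(λ) with finite differences:

```python
import numpy as np

# Inner problem: w*(lam) = argmin_w 0.5 ||X w - y||^2 + 0.5 lam ||w||^2.
# Optimality condition: F(w, lam) = X^T (X w - y) + lam w = 0.
# IFT: dw*/dlam = -(dF/dw)^{-1} dF/dlam = -(X^T X + lam I)^{-1} w*.

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = rng.standard_normal(20)

def solve(lam):
    # inner optimization solved in closed form
    return np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

def dw_dlam(lam):
    # implicit differentiation: one linear solve, no unrolling
    w = solve(lam)
    return -np.linalg.solve(X.T @ X + lam * np.eye(5), w)

lam, eps = 0.7, 1e-6
fd = (solve(lam + eps) - solve(lam - eps)) / (2 * eps)   # finite differences
print(np.max(np.abs(dw_dlam(lam) - fd)))                 # agreement to ~1e-9
```

Nothing about the inner solver matters here: only the optimality condition F is differentiated, which is exactly the decoupling the paper advertises.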
Thursday, June 10, 2021
Ensemble Learning Part 14 | XGBoost | Machine Learning Tutorial
XGBoost is an algorithm that has recently been dominating applied machine learning and Kaggle competitions for structured or tabular data. It is an implementation of gradient boosted decision trees designed for speed and performance. In this video, you will explore XGBoost in Ensemble Learning. It is the fourteenth and final part of the Ensemble Learning playlist; all 14 videos combined teach Ensemble Learning in depth. ✅ Subscribe to our channel to learn more about AI, ML and Data Science. InsideAIML's Artificial Intelligence Masters Program provides training in the skills required for a career in AI. You will master Data Science, Deep Learning, TensorFlow, Machine Learning and other AI concepts. The course is designed by IITians and includes projects on advanced algorithms and artificial neural networks. Learn more at: https://insideaiml.com/courses For more updates on courses and tips follow us on: - Telegram: https://t.me/insideaiml - Instagram: https://www.instagram.com/inside_aiml/ - Twitter: https://twitter.com/insideaiml - LinkedIn: https://www.linkedin.com/company/insideaiml - Facebook: https://www.facebook.com/insideaimledu - Youtube: https://www.youtube.com/channel/UCz5qPOuMdz3oXv-gPO3h9Iw
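The "gradient boosted decision trees" idea that XGBoost refines can be shown from scratch for squared loss: each round fits a weak learner (here a depth-1 "stump") to the residuals of the current ensemble, then adds it with a shrinkage factor. This is a minimal sketch of my own, not XGBoost itself (which adds regularized tree learning, second-order gradients, column subsampling, and much more):

```python
import numpy as np

def fit_stump(x, r):
    # weak learner: single threshold split minimising squared error on residuals r
    best = (np.inf, None)
    for t in x[:-1]:
        left, right = r[x <= t], r[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, (t, left.mean(), right.mean()))
    return best[1]

def boost(x, y, rounds=60, lr=0.3):
    pred = np.full_like(y, y.mean())            # start from the constant model
    for _ in range(rounds):
        t, lv, rv = fit_stump(x, y - pred)      # fit the current residuals
        pred = pred + lr * np.where(x <= t, lv, rv)   # shrunken additive update
    return pred

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)
pred = boost(x, y)
print(np.abs(pred - y).mean())   # small training error after 60 rounds
```

For squared loss the residual is exactly the negative gradient of the loss, which is why fitting residuals each round is "gradient" boosting.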
Wednesday, June 9, 2021
Ensemble Learning Part 13 | Boosting | Bagging | Random Forest | Machine Learning Tutorial
In this video, you will get hands-on with an Ensemble Learning exercise, which comprises Random Forest, Bagging and Boosting models. It is the thirteenth part of the Ensemble Learning Playlist. All 14 videos combined teach Ensemble Learning in depth. ✅Subscribe to our Channel to learn more about AI, ML and Data Science. InsideAIML’s Artificial Intelligence Masters Program provides training in the skills required for a career in AI. You will master Data Science, Deep Learning, TensorFlow, Machine Learning and other AI concepts. The course is designed by IITians and includes projects on advanced algorithms and artificial neural networks. Learn more at: https://insideaiml.com/courses For more updates on courses and tips follow us on: - Telegram: https://t.me/insideaiml - Instagram: https://www.instagram.com/inside_aiml/ - Twitter: https://twitter.com/insideaiml - LinkedIn: https://www.linkedin.com/company/insideaiml - Facebook: https://www.facebook.com/insideaimledu - Youtube: https://www.youtube.com/channel/UCz5qPOuMdz3oXv-gPO3h9Iw
[ML News] EU regulates AI, China trains 1.75T model, Google's oopsie, Everybody cheers for fraud.
#mlnews #wudao #academicfraud OUTLINE: 0:00 - Intro 0:25 - EU seeks to regulate AI 2:45 - AI COVID detection systems are all flawed 5:05 - Chinese lab trains model 10x GPT-3 size 6:55 - Google error identifies "ugliest" language 9:45 - McDonald's learns about AI buzzwords 11:25 - AI predicts cryptocurrency prices 12:00 - Unreal Engine hack for CLIP 12:35 - Please commit more academic fraud References: https://ift.tt/3clhl2y https://ift.tt/34DgkhW https://ift.tt/3eEFTp6 https://ift.tt/3uEXQbw https://ift.tt/3pBYhCj https://ift.tt/3z1mer0 https://ift.tt/3vUtOSg https://ift.tt/3ce2tTv https://twitter.com/arankomatsuzaki/status/1399471244760649729 https://ift.tt/2SH7FbH Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA BiliBili: https://ift.tt/3mfyjkW If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Tuesday, June 8, 2021
Can An AI Heal This Image?👩⚕️
❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2S5tXnb ❤️ Their mentioned post is available here: https://ift.tt/39vhPCn 📝 The paper "Self-Organising Textures" is available here: https://ift.tt/3d4WIIU Game of Life animation source: https://copy.sh/life/ Game of Life image source: https://ift.tt/2THh6bo 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3 Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m
Monday, June 7, 2021
TensorFlow from the ground up (ML Tech Talks)
In the next talk in our series, Wolff Dobson will discuss 6 easy pieces on what you need to know for TensorFlow from the ground up (tensors, variables, and gradients without using high level APIs). This talk is designed for those that know the basics of Machine Learning but need an overview on the fundamentals of TensorFlow. Chapters: 0:00 - Intro and outline 2:12 - Tensors 6:08 - Variables 9:19 - Gradient tape 13:57 - Modules 17:43 - Training loops 21:52 - tf.function 28:53 - Conclusion Resources: This talk is based on the guides on tensorflow.org See them all (with executable code on Google Colab!) → https://goo.gle/3ije3k5 Tensors → https://goo.gle/34UqV8m Variables → https://goo.gle/3v2Pvyh Introduction to gradients and automatic differentiation → https://goo.gle/3sFVybo Introduction to graphs → https://goo.gle/3w1cGdE Introduction to modules, layers, and models → https://goo.gle/3v0mSC1 Basic training loops → https://goo.gle/3uZ9pu0 Subscribe to TensorFlow → https://goo.gle/TensorFlow
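TensorFlow may not be installed everywhere, so this dependency-free sketch mimics what the talk's tf.GradientTape section describes conceptually: record each operation during the forward pass, then replay the tape backwards to accumulate gradients. It is a toy reverse-mode autodiff, not TensorFlow's actual implementation.

```python
# Minimal "gradient tape": operations append a backward closure to the tape
# as they run; calling gradient() replays the tape in reverse.

class Tape:
    def __init__(self):
        self.ops = []  # backward closures recorded during the forward pass

class Var:
    def __init__(self, value, tape):
        self.value = value
        self.grad = 0.0
        self.tape = tape

    def __mul__(self, other):
        out = Var(self.value * other.value, self.tape)
        def backward():
            # product rule, accumulated into each operand's grad
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
        self.tape.ops.append(backward)
        return out

    def __add__(self, other):
        out = Var(self.value + other.value, self.tape)
        def backward():
            self.grad += out.grad
            other.grad += out.grad
        self.tape.ops.append(backward)
        return out

def gradient(tape, target):
    target.grad = 1.0
    for op in reversed(tape.ops):  # replay the tape backwards
        op()

tape = Tape()
x = Var(3.0, tape)
y = x * x + x      # y = x**2 + x
gradient(tape, y)
print(x.grad)      # dy/dx = 2*x + 1 = 7.0
```

In real TensorFlow the same computation would be `with tf.GradientTape() as t: y = x*x + x` followed by `t.gradient(y, x)`.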
Ensemble Learning Part 11 | AdaBoost | Machine Learning Tutorial
AdaBoost was one of the first boosting algorithms to be adopted in practice. AdaBoost helps you combine multiple “weak classifiers” into a single “strong classifier”. In this video, you will explore AdaBoost in Ensemble Learning. It is the eleventh part of the Ensemble Learning Playlist. All 14 videos combined teach Ensemble Learning in depth. ✅Subscribe to our Channel to learn more about AI, ML and Data Science. InsideAIML’s Artificial Intelligence Masters Program provides training in the skills required for a career in AI. You will master Data Science, Deep Learning, TensorFlow, Machine Learning and other AI concepts. The course is designed by IITians and includes projects on advanced algorithms and artificial neural networks. Learn more at: https://insideaiml.com/courses For more updates on courses and tips follow us on: - Telegram: https://t.me/insideaiml - Instagram: https://www.instagram.com/inside_aiml/ - Twitter: https://twitter.com/insideaiml - LinkedIn: https://www.linkedin.com/company/insideaiml - Facebook: https://www.facebook.com/insideaimledu - Youtube: https://www.youtube.com/channel/UCz5qPOuMdz3oXv-gPO3h9Iw #MachineLearning #DataScience #DeepLearning #Python #AI #ArtificialIntelligence #EnsembleLearning #Stacking #Boosting #AdaBoost
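A minimal AdaBoost sketch with scikit-learn, on synthetic data chosen for illustration: the default base learner is a one-split decision stump (a "weak classifier"), and the ensemble reweights misclassified samples each round to build a "strong classifier".

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# a single one-split stump: the "weak classifier" on its own
stump = DecisionTreeClassifier(max_depth=1, random_state=1).fit(X_tr, y_tr)
weak_acc = stump.score(X_te, y_te)

# AdaBoost's default base learner is exactly such a stump; 50 of them are
# trained sequentially on reweighted samples and combined by weighted vote
ada = AdaBoostClassifier(n_estimators=50, random_state=1).fit(X_tr, y_tr)
strong_acc = ada.score(X_te, y_te)
print(weak_acc, strong_acc)
```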
Sunday, June 6, 2021
Ensemble Learning Part 10 | Boosting | Stacking | Machine Learning Tutorial
Stacking often considers heterogeneous weak learners, learns them in parallel and combines them by training a meta-model to output a prediction based on the different weak models' predictions. On the other hand, boosting often considers homogeneous weak learners, learns them sequentially in a very adaptive way (each base model depends on the previous ones) and combines them following a deterministic strategy. In this video, you will explore Stacking and Boosting in Ensemble Learning models. It is the tenth part of the Ensemble Learning Playlist. All 14 videos combined teach Ensemble Learning in depth. ✅Subscribe to our Channel to learn more about AI, ML and Data Science. InsideAIML’s Artificial Intelligence Masters Program provides training in the skills required for a career in AI. You will master Data Science, Deep Learning, TensorFlow, Machine Learning and other AI concepts. The course is designed by IITians and includes projects on advanced algorithms and artificial neural networks. Learn more at: https://insideaiml.com/courses For more updates on courses and tips follow us on: - Telegram: https://t.me/insideaiml - Instagram: https://www.instagram.com/inside_aiml/ - Twitter: https://twitter.com/insideaiml - LinkedIn: https://www.linkedin.com/company/insideaiml - Facebook: https://www.facebook.com/insideaimledu - Youtube: https://www.youtube.com/channel/UCz5qPOuMdz3oXv-gPO3h9Iw #MachineLearning #DataScience #DeepLearning #Python #AI #ArtificialIntelligence #EnsembleLearning #Stacking #Boosting
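The stacking pattern described above, sketched with scikit-learn; the particular base learners and meta-model here are arbitrary illustrative choices.

```python
# Stacking: heterogeneous base learners trained in parallel, combined by a
# logistic-regression meta-model fit on their cross-validated predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

stack = StackingClassifier(
    estimators=[                      # heterogeneous weak learners
        ("tree", DecisionTreeClassifier(max_depth=3, random_state=2)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(),  # the meta-model
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(f"stacking accuracy: {acc:.3f}")
```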
Backpropagation - AI & Machine Learning Workshop: The Tutorial before your Tutorial - Part 7
#MachineLearningTutorial #AI #MachineLearning #Tutorial #ScienceandTechnology #ArtificialIntelligence #TensorFlow #Keras #SupervisedLearning #NeuralNetworks #Perceptron #Backpropagation #AND #XOR #DeepLearning Check out Part 1 of this series: https://youtu.be/poQp5N2flOw Check out Part 2 of this series: https://youtu.be/3R1ahtudvbM Check out Part 3 of this series: https://youtu.be/97CiAjqbCpU Check out Part 4 of this series: https://youtu.be/y7_UTqwx5Y0 Check out Part 5 of this series: https://youtu.be/9sBj6qcauLU Check out Part 6 of this series: https://youtu.be/6AYig0h5klY Artificial Intelligence and Machine Learning with TensorFlow/Keras is a confusing and sometimes incomprehensible subject to learn on your own. The Google Machine Learning Crash Course is a good tutorial to learn AI/ML if you already have a background in the subject. The purpose of this workshop is to be the tutorial you take before the Google tutorial. I've been there and now I'm ready to pass it forward and share what I've learned. I'm not an expert, but I have working code examples that I will use to teach you based on my current level of understanding of the subject. Here is the list of topics explained in this Machine Learning basics video: 1. Topics & Recap of Part 6 - (0:20) 2. Road to backpropagation - (1:52) 3. XOR Solution Using A Neural Network - (3:05) 4. Training Perceptrons - (5:42) 5. Error Function - (11:13) 6. Error Gradient - (14:51) 7. Delta Rule & Gradient Descent - (18:19) 8. Training using Backpropagation - (24:07) 9. Backpropagation Algorithm Summary - (28:06) Like/follow us on Facebook: https://www.facebook.com/Black-Magic-AI-109126344070229 Check out our Web site: https://www.blackmagicai.com/ References and Additional Resources Perceptron Training Rule https://youtu.be/7VV_fUe6ziw BACKPROPAGATION algorithm. How does a neural network learn? A step by step demonstration. https://youtu.be/YOlOLxrMUOw What is backpropagation really doing? | Deep learning, chapter 3 https://youtu.be/Ilg3gGewQ5U Background music: royalty-free music from Bensound.com.
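The delta rule and backpropagation steps listed in the outline above can be condensed into a small NumPy script that trains a 2-4-1 network on XOR. The architecture, learning rate and iteration count are illustrative choices, not the workshop's exact code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses, lr = [], 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # backward pass: error deltas via the delta rule, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(losses[0], losses[-1])  # the squared error should drop over training
```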
Saturday, June 5, 2021
Decision Transformer: Reinforcement Learning via Sequence Modeling (Research Paper Explained)
#decisiontransformer #reinforcementlearning #transformer Proper credit assignment over long timespans is a fundamental problem in reinforcement learning. Even methods designed to combat this problem, such as TD-learning, quickly reach their limits when rewards are sparse or noisy. This paper reframes offline reinforcement learning as a pure sequence modeling problem, with the actions being sampled conditioned on the given history and desired future rewards. This allows the authors to use recent advances in sequence modeling using Transformers and achieve competitive results in Offline RL benchmarks. OUTLINE: 0:00 - Intro & Overview 4:15 - Offline Reinforcement Learning 10:10 - Transformers in RL 14:25 - Value Functions and Temporal Difference Learning 20:25 - Sequence Modeling and Reward-to-go 27:20 - Why this is ideal for offline RL 31:30 - The context length problem 34:35 - Toy example: Shortest path from random walks 41:00 - Discount factors 45:50 - Experimental Results 49:25 - Do you need to know the best possible reward? 52:15 - Key-to-door toy experiment 56:00 - Comments & Conclusion Paper: https://ift.tt/3uWbPKb Website: https://ift.tt/3uIt41l Code: https://ift.tt/2TxRi1m Abstract: We present a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. 
Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks. Authors: Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA BiliBili: https://ift.tt/3mfyjkW If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
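The "reward-to-go" conditioning mentioned in the outline is just a backwards cumulative sum over the reward sequence. A toy sketch (not the authors' code; Decision Transformer uses the undiscounted case, gamma = 1.0):

```python
def returns_to_go(rewards, gamma=1.0):
    # walk the episode backwards, accumulating future reward at each step
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

rewards = [0.0, 0.0, 1.0, 0.0, 2.0]
print(returns_to_go(rewards))  # [3.0, 3.0, 3.0, 2.0, 2.0]
```

At inference time the model is conditioned on a *desired* return-to-go, which is decremented by each observed reward as the episode unfolds.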
A Video Game That Looks Like Reality! 🌴
❤️ Check out Perceptilabs and sign up for a free demo here: https://ift.tt/2WIdXXn 📝 The paper "Enhancing Photorealism Enhancement" is available here: https://ift.tt/3tEO2h9 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3 Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m
Ensemble Learning Part 9 | Random Forest Algorithm | Machine Learning Tutorial
The random subspace method is a technique for introducing variation among the predictors in an ensemble model: decreasing the correlation between the predictors increases the performance of the ensemble. Random feature subsets are therefore used to train the individual predictors of the ensemble. In this video, you will explore the Random Forest algorithm. It is the ninth part of the Ensemble Learning Playlist. All 14 videos combined teach Ensemble Learning in depth. ✅Subscribe to our Channel to learn more about AI, ML and Data Science. InsideAIML’s Artificial Intelligence Masters Program provides training in the skills required for a career in AI. You will master Data Science, Deep Learning, TensorFlow, Machine Learning and other AI concepts. The course is designed by IITians and includes projects on advanced algorithms and artificial neural networks. Learn more at: https://insideaiml.com/courses For more updates on courses and tips follow us on: - Telegram: https://t.me/insideaiml - Instagram: https://www.instagram.com/inside_aiml/ - Twitter: https://twitter.com/insideaiml - LinkedIn: https://www.linkedin.com/company/insideaiml - Facebook: https://www.facebook.com/insideaimledu - Youtube: https://www.youtube.com/channel/UCz5qPOuMdz3oXv-gPO3h9Iw #MachineLearning #DataScience #DeepLearning #Python #AI #ArtificialIntelligence #EnsembleLearning #RandomForest
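In scikit-learn, the random subspace idea maps to the max_features parameter of RandomForestClassifier, which limits how many candidate features each split may consider and thereby decorrelates the trees. A sketch on synthetic data (all values here are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

forest = RandomForestClassifier(
    n_estimators=100,
    max_features="sqrt",  # each split samples sqrt(20) ≈ 4 candidate features
    random_state=3,
)
forest.fit(X_tr, y_tr)
acc = forest.score(X_te, y_te)
print(f"forest accuracy: {acc:.3f}")
```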
Friday, June 4, 2021
Ensemble Learning Part 8 | Bagging | Machine Learning Tutorial
Bagging is a way to decrease the variance of a prediction by generating additional training data from the dataset, using combinations with repetitions to produce multiple sets of the original data. In this video, you will explore one of the most popular approaches in machine learning: Bagging (short for “bootstrap aggregating”). It is the eighth part of the Ensemble Learning Playlist. All 14 videos combined teach Ensemble Learning in depth. ✅Subscribe to our Channel to learn more about AI, ML and Data Science. InsideAIML’s Artificial Intelligence Masters Program provides training in the skills required for a career in AI. You will master Data Science, Deep Learning, TensorFlow, Machine Learning and other AI concepts. The course is designed by IITians and includes projects on advanced algorithms and artificial neural networks. Learn more at: https://insideaiml.com/courses For more updates on courses and tips follow us on: - Telegram: https://t.me/insideaiml - Instagram: https://www.instagram.com/inside_aiml/ - Twitter: https://twitter.com/insideaiml - LinkedIn: https://www.linkedin.com/company/insideaiml - Facebook: https://www.facebook.com/insideaimledu - Youtube: https://www.youtube.com/channel/UCz5qPOuMdz3oXv-gPO3h9Iw #MachineLearning #DataScience #DeepLearning #Python #AI #ArtificialIntelligence #EnsembleLearning #Bagging
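A minimal bagging sketch with scikit-learn; the dataset and parameters are synthetic and arbitrary.

```python
# Bagging: each of the 50 base learners is trained on a bootstrap resample
# (sampling with replacement) of the training set, and predictions are
# combined by majority vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)

# the default base learner is a (high-variance) decision tree, which is
# exactly the kind of model bagging helps most
bag = BaggingClassifier(n_estimators=50, bootstrap=True, random_state=4)
bag.fit(X_tr, y_tr)
acc = bag.score(X_te, y_te)
print(f"bagging accuracy: {acc:.3f}")
```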
Thursday, June 3, 2021
Ensemble Learning Part 7 | Pruning & Weights | Decision Tree | Machine Learning Tutorial
Ensemble methods are a fantastic way to capitalise on the benefits of Decision Trees while reducing their tendency to overfit. In this video, you will discover the pruning of Decision Trees. It is the seventh part of the Ensemble Learning Playlist. All 14 videos combined teach Ensemble Learning in depth. ✅Subscribe to our Channel to learn more about AI, ML and Data Science. InsideAIML’s Artificial Intelligence Masters Program provides training in the skills required for a career in AI. You will master Data Science, Deep Learning, TensorFlow, Machine Learning and other AI concepts. The course is designed by IITians and includes projects on advanced algorithms and artificial neural networks. Learn more at: https://insideaiml.com/courses For more updates on courses and tips follow us on: - Telegram: https://t.me/insideaiml - Instagram: https://www.instagram.com/inside_aiml/ - Twitter: https://twitter.com/insideaiml - LinkedIn: https://www.linkedin.com/company/insideaiml - Facebook: https://www.facebook.com/insideaimledu - Youtube: https://www.youtube.com/channel/UCz5qPOuMdz3oXv-gPO3h9Iw #MachineLearning #DataScience #DeepLearning #Python #AI #ArtificialIntelligence #EnsembleLearning #DecisionTree #Pruning #Weights
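One concrete pruning mechanism, cost-complexity pruning, is exposed in scikit-learn through the ccp_alpha parameter: larger values prune more aggressively, trading training fit for a simpler tree. This sketch compares an unpruned tree against a pruned one (the alpha value and dataset are arbitrary choices).

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=5)

# unpruned tree: grows until leaves are pure, so it memorises noise
full = DecisionTreeClassifier(random_state=5).fit(X, y)
# cost-complexity pruning: subtrees whose improvement per leaf is below
# ccp_alpha are collapsed
pruned = DecisionTreeClassifier(ccp_alpha=0.02, random_state=5).fit(X, y)

print(full.get_n_leaves(), pruned.get_n_leaves())
```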
AI vs Machine Learning vs Deep Learning vs Data Science in Bangla | Everything you need to know
This video on Artificial Intelligence vs Machine Learning vs Deep Learning will help you understand the differences between AI, ML and DL, and how they are related to each other. The tutorial will also cover what Artificial Intelligence, Machine Learning and Deep Learning mean, as well as how they work, with the help of examples. Below are the topics covered in this tutorial: 00:00 - Intro 00:46 - Artificial Intelligence (AI) 01:46 - Machine Learning (ML) 02:28 - Supervised Machine Learning 03:21 - Unsupervised Machine Learning 04:20 - Reinforcement Learning 05:02 - Deep Learning (DL) 07:18 - Data Science (DS) More from The Data Enthusiast: Facebook:https://www.facebook.com/The-Data-Enthusiast-100583471967861 Instagram:https://www.instagram.com/walidhossain20/ Twitter:https://twitter.com/walidho90107116 LinkedIn:https://www.linkedin.com/in/walid-hossain-55ab17200/ Comment, like, share, and subscribe! We will be happy to hear from you and will get back to you!
Wednesday, June 2, 2021
Ensemble Learning Part 6 | Sample Scenario | Decision Tree | Machine Learning Tutorial
Ensemble methods are a fantastic way to capitalize on the benefits of Decision Trees while reducing their tendency to overfit. In this video, you will work through a sample scenario to understand Decision Trees. It is the sixth part of the Ensemble Learning Playlist. All 14 videos combined teach Ensemble Learning in depth. ✅Subscribe to our Channel to learn more about AI, ML and Data Science. InsideAIML’s Artificial Intelligence Masters Program provides training in the skills required for a career in AI. You will master Data Science, Deep Learning, TensorFlow, Machine Learning and other AI concepts. The course is designed by IITians and includes projects on advanced algorithms and artificial neural networks. Learn more at: https://insideaiml.com/courses For more updates on courses and tips follow us on: - Telegram: https://t.me/insideaiml - Instagram: https://www.instagram.com/inside_aiml/ - Twitter: https://twitter.com/insideaiml - LinkedIn: https://www.linkedin.com/company/insideaiml - Facebook: https://www.facebook.com/insideaimledu - Youtube: https://www.youtube.com/channel/UCz5qPOuMdz3oXv-gPO3h9Iw #MachineLearning #DataScience #DeepLearning #Python #AI #ArtificialIntelligence #EnsembleLearning #DecisionTree #Sample
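A tiny "sample scenario" in the spirit of the video: a decision tree on a made-up weather dataset. The feature encoding and labels are invented for illustration, not taken from the video.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# features: [outlook (0 = sunny, 1 = rainy), windy (0 = no, 1 = yes)]
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [1, 1, 1, 0]  # play outside unless it is rainy AND windy

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
# export_text prints the learned if/else rules, which is what makes small
# decision trees so interpretable
print(export_text(tree, feature_names=["outlook", "windy"]))
```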
[ML News] Anthropic raises $124M, ML execs clueless, collusion rings, ELIZA source discovered & more
#mlnews #anthropic #eliza Anthropic raises $124M for steerable AI, peer review is threatened by collusion rings, and the original ELIZA source code was discovered. OUTLINE: 0:00 - Intro 0:40 - Anthropic raises $124M 3:25 - 65% of execs can't explain AI predictions 4:25 - DeepMind releases AndroidEnv 6:10 - Collusion rings in ML Conferences 7:30 - ELIZA's original source code discovered 10:45 - OpenAI raises $100M fund 11:25 - Outro References: https://ift.tt/2R55Qou https://ift.tt/3oWhheE https://ift.tt/2SAmUmL https://ift.tt/2S2wQp3 https://ift.tt/3i1LVBF https://ift.tt/2SBeBah https://ift.tt/2TjrbLC https://ift.tt/2TkPCs5 https://ift.tt/2RTzqxx https://ift.tt/3pbBuxb https://ift.tt/2RdtA9U https://ift.tt/34lxNv6 Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA BiliBili: https://ift.tt/3mfyjkW If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Tuesday, June 1, 2021
Can We Teach Physics To A Machine? ⚛
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "Learning mesh-based simulation with Graph Networks" is available here: https://ift.tt/3qe5hoM 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3 Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m