A resource of free, step-by-step video how-to guides to get you started with machine learning.
Wednesday, September 29, 2021
This AI Stuntman Just Keeps Getting Better! 🏃
❤️ Train a neural network and track your experiments with Weights & Biases here: https://ift.tt/3tmB8pH 📝 The paper "Learning a family of motor skills from a single motion clip" is available here: https://ift.tt/3mfLV1M 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3 Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m #gamedev
[ML News] Plagiarism Case w/ Plot Twist | CLIP for video surveillance | OpenAI summarizes books
#plagiarism #surveillance #schmidhuber Your Mondaily updates of what's going on in the world of Machine Learning. OUTLINE: 0:00 - Intro 0:20 - New plagiarism case has plot twist 7:25 - CLIP for video surveillance 9:40 - DARPA Subterranean Challenge 11:00 - Schmidhuber criticizing Turing Lecture 15:00 - OpenAI summarizes books 17:55 - UnBiasIt monitors employees' communications for bias 20:00 - iOS plans to detect depression 21:30 - UK 10-year plan to become AI superpower 23:30 - Helpful Libraries 29:00 - WIT: Wikipedia Image-Text dataset References: New plagiarism case with plot twist https://ift.tt/3o8e2Cr https://ift.tt/3kQFHpq https://ift.tt/3upmJtm CLIP used for video surveillance https://ift.tt/2XALVRF https://ift.tt/3CwyCQK DARPA Subterranean Challenge https://twitter.com/BotJunkie/status/1441225455856615424 https://twitter.com/BotJunkie https://ift.tt/39NHANj https://ift.tt/3CXXa5q https://twitter.com/dynamicrobots/status/1441481455830401028 Schmidhuber Blog: Turing Lecture Errors https://ift.tt/3lXBtLW OpenAI on Summarizing Books https://ift.tt/3AE5BC7 https://ift.tt/3CTdF2E UnBiasIt to monitor employee language https://ift.tt/3CT9IuU https://ift.tt/3umKAcU iPhone to detect depression https://ift.tt/3CwRn6N https://ift.tt/3ARWvlq UK 10-year plan to become AI superpower https://ift.tt/39we3HT https://ift.tt/3m9rlQn Helpful Libraries https://twitter.com/scikit_learn/status/1441443534184275969 https://ift.tt/3unukIy https://twitter.com/pcastr/status/1441125505588084737 https://ift.tt/2NnkkbZ https://ift.tt/39C7PWM https://ift.tt/2Y1Ih2L https://ift.tt/3ul9rxT https://ift.tt/2DMLnKS https://ift.tt/3iirksi Habitat and Matterport 3D Dataset https://ift.tt/3ifNOtO https://aihabitat.org/ https://ift.tt/3kSuoNv WIT: Wikipedia-Based Image-Text Dataset https://ift.tt/39rRlRe Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR
BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/3qcgOFy BiliBili: https://ift.tt/3mfyjkW If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Monday, September 27, 2021
Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment (Paper Explained)
#neurips #peerreview #nips The peer-review system at Machine Learning conferences has come under much criticism in recent years. One major driver was the infamous 2014 NeurIPS experiment, in which a subset of papers was given to two different sets of reviewers. This experiment showed that only about half of all accepted papers were consistently accepted by both committees, demonstrating a significant influence of subjectivity. This paper revisits the data from the 2014 experiment, traces the fate of accepted and rejected papers during the 7 years since, and analyzes how well reviewers can assess future impact, among other things. OUTLINE: 0:00 - Intro & Overview 1:20 - Recap: The 2014 NeurIPS Experiment 5:40 - How much of reviewing is subjective? 11:00 - Validation via simulation 15:45 - Can reviewers predict future impact? 23:10 - Discussion & Comments Paper: https://ift.tt/3EFUTgS Code: https://ift.tt/3iaCsYj Abstract: In this paper we revisit the 2014 NeurIPS experiment that examined inconsistency in conference peer review. We determine that 50% of the variation in reviewer quality scores was subjective in origin. Further, with seven years passing since the experiment we find that for accepted papers, there is no correlation between quality scores and impact of the paper as measured as a function of citation count. We trace the fate of rejected papers, recovering where these papers were eventually published. For these papers we find a correlation between quality scores and impact. We conclude that the reviewing process for the 2014 conference was good for identifying poor papers, but poor for identifying good papers. We give some suggestions for improving the reviewing process but also warn against removing the subjective element. Finally, we suggest that the real conclusion of the experiment is that the community should place less onus on the notion of top-tier conference publications when assessing the quality of individual researchers.
For NeurIPS 2021, the PCs are repeating the experiment, as well as conducting new ones. Authors: Corinna Cortes, Neil D. Lawrence
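The "validation via simulation" step in the outline above can be illustrated with a tiny Monte Carlo model (a sketch of my own, not the paper's code; all parameter names and values are illustrative): give every paper a latent quality, add independent committee-specific noise so that half of the score variance is subjective, and measure how much the two committees' accept lists overlap.

```python
import random

def simulate(n_papers=10_000, accept_rate=0.23, subjective_frac=0.5, seed=0):
    """Toy model: two committees score the same papers; `subjective_frac`
    of each score's variance is committee-specific noise."""
    rng = random.Random(seed)
    quality = [rng.gauss(0.0, 1.0) for _ in range(n_papers)]
    # choose the noise scale so that noise_var / (noise_var + 1) = subjective_frac
    noise_sd = (subjective_frac / (1.0 - subjective_frac)) ** 0.5
    k = int(n_papers * accept_rate)

    def accepted():
        scores = [(q + rng.gauss(0.0, noise_sd), i) for i, q in enumerate(quality)]
        return {i for _, i in sorted(scores, reverse=True)[:k]}

    a, b = accepted(), accepted()
    return len(a & b) / k  # fraction of accepts on which both committees agree

agreement = simulate()  # with these settings, roughly half of the accepts overlap
```

Even this crude model reproduces the qualitative finding: with a 50% subjective component and a NeurIPS-like accept rate, two independent committees agree on only about half of their accepted papers.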
Sunday, September 26, 2021
100K Subs AMA (Ask Me Anything)
Saturday, September 25, 2021
NVIDIA’s New Technique: Beautiful Models For Less! 🌲
❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/35NkCT7 📝 The paper "Appearance-Driven Automatic 3D Model Simplification" is available here: https://ift.tt/32azw5k 📝 The differentiable material synthesis paper is available here: https://ift.tt/3ff8Xnc 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Thumbnail background image credit: https://ift.tt/3kFk37u Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3 Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m #nvidia #gamedev
Friday, September 24, 2021
[ML News] New ImageNet SOTA | Uber's H3 hexagonal coordinate system | New text-image-pair dataset
#truthfulqa #efficientnet #laion400M Your regularly irregular updates on what's happening in the Machine Learning world. OUTLINE: 0:00 - Intro 0:20 - TruthfulQA benchmark shines new light on GPT-3 2:00 - LAION-400M image-text-pair dataset 4:10 - GoogleAI's EfficientNetV2 and CoAtNet 6:15 - Uber's H3: A hexagonal coordinate system 7:40 - AWS NeurIPS 2021 DeepRacer Challenge 8:15 - Helpful Libraries 9:20 - State of PyTorch in September 2021 10:05 - Physics-Based Deep Learning Book 10:35 - Music-conditioned 3D dance generation 11:40 - Stallman's take on legal issues with Codex 12:20 - Tensorflow DirectML on AMD GPUs 13:00 - Schmidhuber Blog: Turing Oversold References: TruthfulQA - A benchmark assessing truthfulness of language models https://ift.tt/3AkZouI LAION-400M image-text-pair dataset https://ift.tt/3lfBNW9 https://laion.ai/#top https://ift.tt/3EMsiql https://ift.tt/3COXqE1 GoogleAI releases EfficientNetV2 and CoAtNet https://ift.tt/2YUysUQ Uber's H3 hexagonal coordinate system https://ift.tt/2XRACVk NeurIPS 2021 DeepRacer Challenge https://ift.tt/3COmGu1 https://ift.tt/2FWkKqu https://ift.tt/39CQjSh Helpful Libraries https://ift.tt/3AfYW12 https://ift.tt/3lQMZZA https://ift.tt/2Xi7Dt7 https://ift.tt/2WS9W5P State of PyTorch in September 2021 https://ift.tt/3i4BSv6 Physics-Based Deep Learning Book https://ift.tt/3kCmeso https://ift.tt/3hFxWk7 Music-conditioned 3D dance generation https://ift.tt/2VDNayh Richard Stallman on Codex legal issues https://ift.tt/2Zc680s Tensorflow DirectML on AMD https://ift.tt/3Ch7h4Y Schmidhuber: Turing Oversold https://ift.tt/3Amm9yk
Tuesday, September 21, 2021
GPT-3 is a LIAR - Misinformation and fear-mongering around the TruthfulQA dataset
#gpt-3 #truth #conspiracy A new benchmark paper has created quite an uproar in the community. TruthfulQA is a dataset of 817 questions probing for imitative falsehoods, on which language models become less truthful the larger they get. This surprising, counter-intuitive finding validates many people's criticisms of large language models, but is it really the correct conclusion? OUTLINE: 0:00 - Intro 0:30 - Twitter Paper Announcement 4:10 - Large Language Models are to blame! 5:50 - How was the dataset constructed? 9:25 - The questions are adversarial 12:30 - Are you surprised?! Paper: https://ift.tt/3AukcAa
The Tale Of The Unscrewable Bolt! 🔩
❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2S5tXnb ❤️ Their mentioned post is available here: https://ift.tt/3rJDUDL 📝 The paper "Intersection-free Rigid Body Dynamics" is available here: https://ift.tt/33nRxOr Scene credits: - Bolt - YSoft be3D - Expanding Lock Box - Angus Deveson - Bike Chain and Sprocket - Okan (bike chain), Hampus Andersson (sprocket) 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3 Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m
Monday, September 20, 2021
Topographic VAEs learn Equivariant Capsules (Machine Learning Research Paper Explained)
#tvae #topographic #equivariant Variational Autoencoders model the latent space as a set of independent Gaussian random variables, which the decoder maps to a data distribution. However, this independence is not always desired: with video sequences, for example, we know that successive frames are heavily correlated. Thus, any latent space modeling such data should reflect this in its structure. Topographic VAEs are a framework for defining correlation structures among the latent variables and inducing equivariance within the resulting model. This paper shows how such correlation structures can be built by correctly arranging higher-level variables, which are themselves independent Gaussians. OUTLINE: 0:00 - Intro 1:40 - Architecture Overview 6:30 - Comparison to regular VAEs 8:35 - Generative Mechanism Formulation 11:45 - Non-Gaussian Latent Space 17:30 - Topographic Product of Student-t 21:15 - Introducing Temporal Coherence 24:50 - Topographic VAE 27:50 - Experimental Results 31:15 - Conclusion & Comments Paper: https://ift.tt/3tXsw93 Code: https://ift.tt/3Cv1XeC Abstract: In this work we seek to bridge the concepts of topographic organization and equivariance in neural networks. To accomplish this, we introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables. We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST. Furthermore, through topographic organization over time (i.e. temporal coherence), we demonstrate how predefined latent space transformation operators can be encouraged for observed transformed input sequences -- a primitive form of unsupervised learned equivariance. We demonstrate that this model successfully learns sets of approximately equivariant features (i.e.
"capsules") directly from sequences and achieves higher likelihood on correspondingly transforming test sequences. Equivariance is verified quantitatively by measuring the approximate commutativity of the inference network and the sequence transformations. Finally, we demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks. Authors: T. Anderson Keller, Max Welling Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/3qcgOFy BiliBili: https://ift.tt/3mfyjkW If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Saturday, September 18, 2021
This AI Makes Digital Copies of Humans! 👤
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd 📝 The paper "The Relightables: Volumetric Performance Capture of Humans with Realistic Relighting" is available here: https://ift.tt/2O0xBtU 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m #vr
Thursday, September 16, 2021
[ML News] Roomba Avoids Poop | Textless NLP | TikTok Algorithm Secrets | New Schmidhuber Blog
#schmidhuber #tiktok #roomba Your regularly irregular update on what's happening in the world of Machine Learning. OUTLINE: 0:00 - Intro 0:15 - Sponsor: Weights & Biases 1:55 - ML YouTuber reaches 100k subscribers 2:40 - Facebook AI pushes Textless NLP 5:30 - Schmidhuber blog post: I invented everything 7:55 - TikTok algorithm rabbitholes users 10:45 - Roomba learns to avoid poop 11:50 - AI can spot art forgeries 14:55 - Deepmind's plans to separate from Google 16:15 - Cohere raises 40M 16:55 - US Judge rejects AI inventor on patent 17:55 - Altman: GPT-4 not much bigger than GPT-3 18:45 - Salesforce CodeT5 19:45 - DeepMind Reinforcement Learning Lecture Series 20:15 - WikiGraphs Dataset 20:40 - LiveCell Dataset 21:00 - SpeechBrain 21:10 - AI-generated influencer gains 100 sponsorships 22:20 - AI News Questions 23:15 - AI hiring tools reject millions of valid applicants Sponsor: Weights & Biases https://wandb.me/start References: Facebook AI creates Textless NLP https://ift.tt/38SQk46 https://ift.tt/2XpXYkh Schmidhuber invented everything https://ift.tt/3AiPWZ0 How TikTok's algorithm works https://ift.tt/3BBupMj Roomba learns to avoid poop https://ift.tt/3BZcQVK Amateur develops fake art detector https://ift.tt/3kgbcsT https://ift.tt/387aOpA DeepMind's plan to break away from Google https://ift.tt/3zaThIc https://ift.tt/3keqMVG Cohere raises USD 40M https://ift.tt/3yRDKwL https://cohere.ai/ US judge refuses AI patent https://ift.tt/3zKbBc5 Sam Altman on GPT-4 https://ift.tt/3AhtrUn Salesforce releases CodeT5 https://ift.tt/3tccMyF DeepMind RL lecture series https://ift.tt/3C01wIP WikiGraphs Dataset https://ift.tt/3nDC3B3 LiveCell Dataset https://ift.tt/3Ez9QS6 https://ift.tt/2WDknu7 SpeechBrain Library https://ift.tt/305SzPL AI generated influencer lands 100 sponsorships https://ift.tt/3jYbuEm AI News Questions https://ift.tt/3k0ZC4L https://ift.tt/2XdjZ61 https://ift.tt/3haEkzP https://ift.tt/3jQevXy https://ift.tt/3k1qD85 https://ift.tt/3hl0Y8o
https://ift.tt/38LuAr9 https://ift.tt/2X3K0EZ AI hiring tools mistakenly reject millions of applicants https://ift.tt/3DQmlIm
Wednesday, September 15, 2021
Meet Your Virtual Level Designer! 🎮
❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2S5tXnb ❤️ Their mentioned post is available here: https://ift.tt/3l0yG3Q 📝 The paper "Adversarial Reinforcement Learning for Procedural Content Generation" is available here: https://ift.tt/3hxoZcM 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m #gamedev
Tuesday, September 14, 2021
Celebrating 100k Subscribers! (w/ Channel Statistics)
#yannickilcher #machinelearning #100k OUTLINE: 0:00 - 100k! 1:00 - Announcements & Thanks 3:55 - Channel Statistics
Saturday, September 11, 2021
OpenAI Codex: Just Say What You Want! 🤖
❤️ Check out Perceptilabs and sign up for a free demo here: https://ift.tt/2WIdXXn 📝 The paper "Evaluating Large Language Models Trained on Code" is available here: https://ift.tt/3fMFQas Codex tweet/application links: Explaining code: https://twitter.com/CristiVlad25/status/1432017112885833734 Pong game: https://twitter.com/slava__bobrov/status/1425904829013102602 Blender Scripting: https://www.youtube.com/watch?v=MvHbrVfEuyk GPT-3 tweet/application links: Website layout: https://twitter.com/sharifshameem/status/1283322990625607681 Plots: https://twitter.com/aquariusacquah/status/1285415144017797126?s=12 Typesetting math: https://twitter.com/sh_reya/status/1284746918959239168 Population data: https://twitter.com/pavtalk/status/1285410751092416513 Legalese: https://twitter.com/f_j_j_/status/1283848393832333313 Nutrition labels: https://twitter.com/lawderpaul/status/1284972517749338112 User interface design: https://twitter.com/jsngr/status/1284511080715362304 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3 Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m #openai #codex
Friday, September 10, 2021
[ML News] AI predicts race from X-Ray | Google kills HealthStreams | Boosting Search with MuZero
#mlnews #schmidhuber #muzero Your regular updates on what's happening in the ML world! OUTLINE: 0:00 - Intro 0:15 - Sponsor: Weights & Biases 1:45 - Google shuts down health streams 4:25 - AI predicts race from blurry X-Rays 7:35 - Facebook labels black men as primates 11:05 - Distill papers on Graph Neural Networks 11:50 - Jürgen Schmidhuber to lead KAUST AI Initiative 12:35 - GitHub brief on DMCA notices for source code 14:55 - Helpful Reddit Threads 19:40 - Simple Tricks to improve Transformers 20:40 - Apple's Unconstrained Scene Generation 21:40 - Common Objects in 3D dataset 22:20 - WarpDrive Multi-Agent RL framework 23:10 - My new paper: Boosting Search Agents & MuZero 25:15 - Can AI detect depression from speech? References: Google shuts down Health Streams https://ift.tt/3Bf5EEx AI predicts race from X-Rays https://ift.tt/3niDrJe https://ift.tt/2X9Uwd7 Facebook labels black men as primates https://ift.tt/3h17AIY https://ift.tt/3p146ZI Distill articles on GNNs https://ift.tt/3yLNYyQ https://ift.tt/3yIWNJo Jürgen Schmidhuber leads KAUST AI initiative https://ift.tt/3A1fz05 GitHub issues court brief on code DMCAs https://ift.tt/3gPB9x1 Useful Reddit Threads https://ift.tt/2Ynhqyp https://ift.tt/2Y7PaQm https://ift.tt/3kZKN1E https://ift.tt/3zvgLZn Tricks to improve Transformers https://ift.tt/3CeQ4cH Unconstrained Scene Generation https://ift.tt/3BZEotV Common Objects in 3D dataset https://ift.tt/3BUNbNG WarpDrive Multi-Agent RL framework https://ift.tt/2WQjA9g Boosting Search Engines / MuZero Code https://ift.tt/2Xbzmva https://ift.tt/3lfWiBR https://ift.tt/3A5PlJZ Can AI detect depression? 
https://ift.tt/3tyfEWB
Wednesday, September 8, 2021
Watch Tesla’s Self-Driving Car Learn In a Simulation! 🚘
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible. If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3 Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m #tesla
Monday, September 6, 2021
∞-former: Infinite Memory Transformer (aka Infty-Former / Infinity-Former, Research Paper Explained)
#inftyformer #infinityformer #transformer Vanilla Transformers are excellent sequence models, but suffer from very harsh constraints on the length of the sequences they can process. Several attempts have been made to extend the Transformer's sequence length, but few have successfully gone beyond a constant factor improvement. This paper presents a method, based on continuous attention mechanisms, to attend to an unbounded past sequence by representing the past as a continuous signal, rather than a sequence. This enables the Infty-Former to effectively enrich the current context with global information, which increases performance on long-range dependencies in sequence tasks. Further, the paper presents the concept of sticky memories, which highlights past events of particular importance and elevates their representation in the long-term memory. OUTLINE: 0:00 - Intro & Overview 1:10 - Sponsor Spot: Weights & Biases 3:35 - Problem Statement 8:00 - Continuous Attention Mechanism 16:25 - Unbounded Memory via concatenation & contraction 18:05 - Does this make sense? 20:25 - How the Long-Term Memory is used in an attention layer 27:40 - Entire Architecture Recap 29:30 - Sticky Memories by Importance Sampling 31:25 - Commentary: Pros and cons of using heuristics 32:30 - Experiments & Results Paper: https://ift.tt/3DTrv6E Sponsor: Weights & Biases https://wandb.me/start Abstract: Transformers struggle when attending to long contexts, since the amount of computation grows with the context length, and therefore they cannot model long-term memories effectively. Several variations have been proposed to alleviate this problem, but they all have a finite memory capacity, being forced to drop old information. In this paper, we propose the ∞-former, which extends the vanilla transformer with an unbounded long-term memory.
By making use of a continuous-space attention mechanism to attend over the long-term memory, the ∞-former's attention complexity becomes independent of the context length. Thus, it is able to model arbitrarily long contexts and maintain "sticky memories" while keeping a fixed computation budget. Experiments on a synthetic sorting task demonstrate the ability of the ∞-former to retain information from long sequences. We also perform experiments on language modeling, by training a model from scratch and by fine-tuning a pre-trained language model, which show benefits of unbounded long-term memories. Authors: Pedro Henrique Martins, Zita Marinho, André F. T. Martins Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA BiliBili: https://ift.tt/3mfyjkW If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
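The core idea above, replacing a growing discrete past with a fixed-size continuous signal, can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' code: the radial basis functions, their width, and the least-squares fit are assumptions chosen for clarity, and the real ∞-former attends over this signal with a continuous attention mechanism rather than reading it pointwise.

```python
import numpy as np

def rbf_basis(t, n_basis, width=0.1):
    # Radial basis functions evaluated at positions t in [0, 1].
    centers = np.linspace(0, 1, n_basis)
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def compress_to_signal(x, n_basis):
    # Fit coefficients B so that phi(t)^T B approximates the sequence x.
    # The past is stored as these coefficients: their size is fixed
    # no matter how long the original sequence was.
    seq_len = x.shape[0]
    t = np.linspace(0, 1, seq_len)
    phi = rbf_basis(t, n_basis)                   # (seq_len, n_basis)
    coeffs, *_ = np.linalg.lstsq(phi, x, rcond=None)  # (n_basis, d)
    return coeffs

def read_signal(coeffs, t_query, width=0.1):
    # Evaluate the continuous signal at arbitrary positions in [0, 1].
    phi = rbf_basis(np.asarray(t_query), coeffs.shape[0], width)
    return phi @ coeffs

# A 200-step sequence compressed into 30 coefficients, then read back.
x = np.sin(np.linspace(0, 3, 200))[:, None]
coeffs = compress_to_signal(x, n_basis=30)
recon = read_signal(coeffs, np.linspace(0, 1, 200))
```

"Concatenation & contraction" in the outline corresponds to appending new steps and re-fitting onto the same fixed number of basis functions, which is where old information gets smoothly compressed rather than dropped.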
Saturday, September 4, 2021
This AI Creates Virtual Fingers! 🤝
❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/35NkCT7 📝 The paper "ManipNet: Neural Manipulation Synthesis with a Hand-Object Spatial Representation" is available here: https://ift.tt/3DX4EXZ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://ift.tt/2icTBUb - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Thumbnail background image credit: https://ift.tt/3BFU1Xf Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m #vr
Friday, September 3, 2021
[ML News] Blind Chess AI Competition | Graph NNs for traffic | AI gift suggestions
#mlnews #chess #neurips OUTLINE: 0:00 - Intro 0:30 - Reconnaissance Blind Chess NeurIPS 2021 Competition 3:40 - Colab Pro no longer top priority for GPUs 4:45 - DeepMind uses Graph NNs to do traffic prediction 6:00 - Helpful Libraries: Isaac Gym, Differentiable Human, LVIS, BEHAVIOR 10:25 - Cerebras Wafer Scale Engine Cluster 12:15 - AI Voice Synthesis for Val Kilmer 14:20 - Can AI give thoughtful gifts? References: Reconnaissance Blind Chess NeurIPS 2021 Competition https://rbc.jhuapl.edu/ https://ift.tt/3h03il6 Colab Pro no longer top priority https://ift.tt/3BjuUte Google Maps ETA prediction using Graph Neural Networks https://ift.tt/38EuBNy Isaac Gym: RL simulator on GPU https://ift.tt/2WPSpLO https://ift.tt/3jelk4T https://ift.tt/2HRYIa6 Cerebras Cluster for massive AI models https://ift.tt/3BDiUmz Helpful Libraries / Datasets https://ift.tt/3BFLcNq https://ift.tt/2ySm9Ig https://ift.tt/3yCoX96 AI Voice Reconstruction https://ift.tt/37QrNfG Can AI make thoughtful gifts? https://ift.tt/3jpd0zd
Thursday, September 2, 2021
ALiBi - Train Short, Test Long: Attention with linear biases enables input length extrapolation
#alibi #transformers #attention Transformers are essentially set models that need additional inputs to make sense of sequence data. The most widespread additional inputs are position encodings or position embeddings, which add sequence index information in various forms. However, this has put a limit on the resulting model, which cannot run inference on sequences longer than it has been trained on, as it would encounter unfamiliar position encodings. ALiBi solves this by proposing simple linear fixed biases as position information, adding negligible overhead in time and memory, but surprisingly, the resulting model is able to handle inference on sequences many times as long as its training sequences. OUTLINE: 0:00 - Intro & Overview 1:40 - Position Encodings in Transformers 4:55 - Sinusoidal Position Encodings 11:50 - ALiBi Position Encodings 20:50 - How to choose the slope parameter 23:55 - Experimental Results 29:10 - Comments & Conclusion Paper: https://ift.tt/3kq3on3 Code: https://ift.tt/3mXPVFZ Abstract: Since the introduction of the transformer model by Vaswani et al. (2017), a fundamental question remains open: how to achieve extrapolation at inference time to longer sequences than seen during training? We first show that extrapolation can be improved by changing the position representation method, though we find that existing proposals do not allow efficient extrapolation. We introduce a simple and efficient method, Attention with Linear Biases (ALiBi), that allows for extrapolation. ALiBi does not add positional embeddings to the word embeddings; instead, it biases the query-key attention scores with a term that is proportional to their distance. We show that this method allows training a 1.3 billion parameter model on input sequences of length 1024 that extrapolates to input sequences of length 2048, achieving the same perplexity as a sinusoidal position embedding model trained on inputs of length 2048, 11% faster and using 11% less memory. 
ALiBi's inductive bias towards recency allows it to outperform multiple strong position methods on the WikiText-103 benchmark. Finally, we provide analysis of ALiBi to understand why it leads to better performance. Authors: Ofir Press, Noah A. Smith, Mike Lewis
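Because the bias is fully specified by the abstract above (a per-head slope times the query-key distance, with no learned position embeddings), it can be sketched directly. The geometric slope schedule below follows the paper's recipe for power-of-two head counts; the function and variable names are my own, and a real implementation would add this bias to the attention scores before the causal mask and softmax.

```python
import numpy as np

def alibi_slopes(n_heads):
    # Geometric sequence of per-head slopes: for 8 heads this gives
    # 1/2, 1/4, ..., 1/256, as described in the ALiBi paper for
    # power-of-two head counts.
    start = 2 ** (-8 / n_heads)
    return np.array([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads, seq_len):
    # Bias added to query-key attention scores, proportional to the
    # negative distance between query position i and key position j.
    slopes = alibi_slopes(n_heads)            # (heads,)
    pos = np.arange(seq_len)
    distance = pos[None, :] - pos[:, None]    # j - i: negative for past keys
    # Only past keys (j <= i) matter in the causal setting; future
    # positions get bias 0 here and are removed by the causal mask anyway.
    return slopes[:, None, None] * np.minimum(distance, 0)  # (heads, q, k)
```

Note that nothing in `alibi_bias` depends on a trained maximum length, which is exactly why the model can be evaluated on sequences longer than those seen during training.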
This AI Helps Making A Music Video! 💃
❤️ Train a neural network and track your experiments with Weights & Biases here: https://ift.tt/3tmB8pH 📝 The paper "Editable Free-Viewpoint Video using a Layered Neural Representation" is available here: https://ift.tt/2WMctye https://ift.tt/3kKUVew If you wish to appear here or pick up other perks, click here: https://ift.tt/2icTBUb Thumbnail background image credit: https://ift.tt/3kQJvFQ Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m