Wednesday, September 30, 2020

Real-time semantic segmentation in the browser - Made With TensorFlow.js


Our 2nd episode of Made With TensorFlow.js heads to Brazil to join Hugo Zanini, a Python developer who wanted to bring cutting-edge research from the TensorFlow community to the browser using JavaScript. Join us as Hugo walks through what he learned about using SavedModels efficiently in JavaScript, enabling you to get the reach and scale of the web for your new research. Hosted by Jason Mayes, Developer Advocate for TensorFlow.js. Real-time semantic segmentation in the browser → https://goo.gle/32CvIel Watch more episodes of Made With TensorFlow.js → http://goo.gle/made-with-tfjs Subscribe to the TensorFlow YouTube channel → https://goo.gle/TensorFlow #TensorFlow #TensorFlowJS #MadeWithTFJS #JavaScript #CreativeCoding #WebDev #SemanticSegmentation #Segmentation #ImageSegmentation
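
For readers who want to try this workflow themselves, here is a minimal sketch of the SavedModel-to-browser conversion step, assuming the tensorflowjs pip package is installed and ./saved_model is a hypothetical directory holding a TensorFlow SavedModel (the tensorflowjs_converter CLI performs the same conversion):

    import tensorflowjs as tfjs

    # Convert a research SavedModel into the sharded-weights + model.json
    # format that TensorFlow.js can load in the browser.
    # (./saved_model and ./web_model are hypothetical paths.)
    tfjs.converters.convert_tf_saved_model(
        "./saved_model",  # input SavedModel directory
        "./web_model",    # output directory, served as static files
    )
    # In the browser, the result can then be loaded with
    # tf.loadGraphModel('web_model/model.json') and run on webcam frames.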

Tuesday, September 29, 2020

AI-Based Style Transfer For Video…Now in Real Time!


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their mentioned post is available here: https://ift.tt/2Sfn9QW 📝 The paper "Interactive Video Stylization Using Few-Shot Patch-Based Training" is available here: https://ift.tt/2L1YknE 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Backpropagation - Xavier Giró - UPC ESEIAAT Terrassa 2020


The goal of the course is to develop deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most widely used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.
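
To make the lecture topic concrete, here is a minimal sketch of backpropagation, illustrative only and not course material: a one-hidden-layer network fit to XOR in numpy, with the chain rule written out by hand:

    import numpy as np

    # Toy problem: learn XOR with one hidden layer and a squared-error criterion.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)                # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)              # forward pass, output
        d_out = (out - y) * out * (1 - out)     # chain rule at the output
        d_h = (d_out @ W2.T) * h * (1 - h)      # gradient pushed back one layer
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

    print(out.round(2))  # approaches [[0], [1], [1], [0]]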

Sunday, September 27, 2020

The Perceptron (Part 2) - Xavier Giró - UPC ESEIAAT Terrassa 2020


The goal of the course is to develop deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most widely used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.

Saturday, September 26, 2020

Elon Musk’s Neuralink Puts An AI Into Your Brain! 🧠


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their mentioned post is available here: https://ift.tt/3cOAckP 📝 The paper "An integrated brain-machine interface platform with thousands of channels" is available here: https://ift.tt/2Y08DkZ Neuralink is hiring! Apply here: https://ift.tt/32F1TKF 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Friday, September 25, 2020

NNFS Update #2: Content done


More info: https://nnfs.io Channel membership: https://www.youtube.com/channel/UCfzlCWGWYyIQ0aLC5w48gBQ/join Discord: https://ift.tt/2AZiVqD Support the content: https://ift.tt/2qsKFOO Twitter: https://twitter.com/sentdex Instagram: https://ift.tt/2J4Oa4h Facebook: https://ift.tt/1OI3cwB Twitch: https://ift.tt/2pcWGaq

The Perceptron (Part 1) - UPC ESEIAAT Terrassa 2020


The goal of the course is to develop deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most widely used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.

Wednesday, September 23, 2020

Enjoying the show - Gant Laborde - Made With TensorFlow.js


Welcome to the 1st episode of Made With TensorFlow.js by Jason Mayes, Developer Advocate for TensorFlow.js. Today, we’re joined by Gant Laborde from the #MadeWithTFJS community who explains how he solved a common problem when presenting online: not being able to tell whether the audience is interested in the content being presented. Learn how Gant created an innovative, real-time, and scalable system to better understand his audience using machine learning in the browser with TensorFlow.js. Enjoying the Show → https://goo.gle/2G2aWMe Watch more episodes of Made With TensorFlow.js → http://goo.gle/made-with-tfjs Subscribe to the TensorFlow YouTube channel → https://goo.gle/TensorFlow #TensorFlow #TensorFlowJS #MadeWithTFJS #JavaScript #CreativeCoding #WebDev

Tuesday, September 22, 2020

This AI Creates Real Scenes From Your Photos! 📷


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their mentioned post is available here: https://ift.tt/2wthYVQ 📝 The paper "NeRF in the Wild - Neural Radiance Fields for Unconstrained Photo Collections" is available here: https://ift.tt/33y88QL 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Monday, September 21, 2020

Backpropagation - Xavier Giro - UPC TelecomBCN Barcelona 2020


https://ift.tt/3iQn41h Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.

Multi-layer Perceptrons MLP - Xavier Giro - UPC TelecomBCN Barcelona 2020


https://ift.tt/3iQn41h Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
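
As a quick illustration of the lecture topic, a multi-layer perceptron takes only a few lines in Keras. This sketch classifies 28x28 grayscale images; the layer sizes and hyperparameters are arbitrary choices, not taken from the course:

    import tensorflow as tf

    # A small MLP: flatten the image, one hidden layer, softmax over 10 classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()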

Softmax Regression - Xavier Giro - UPC TelecomBCN Barcelona 2020


https://ift.tt/3iQn41h Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
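
Softmax regression itself fits in a few lines of numpy. A toy sketch with synthetic data, illustrative only, exploiting the well-known fact that the cross-entropy gradient with respect to the logits is simply probabilities minus one-hot labels:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic data: three 2-D Gaussian clusters, one per class.
    X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in ([0, 0], [2, 0], [0, 2])])
    y = np.repeat([0, 1, 2], 50)
    W, b = np.zeros((2, 3)), np.zeros(3)

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    onehot = np.eye(3)[y]
    for _ in range(500):
        p = softmax(X @ W + b)
        grad = p - onehot                     # dL/dlogits for cross-entropy
        W -= 0.1 * X.T @ grad / len(X)
        b -= 0.1 * grad.mean(axis=0)

    print("train accuracy:", (softmax(X @ W + b).argmax(1) == y).mean())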

The Perceptron - Xavier Giro - UPC TelecomBCN - Barcelona 2020


https://ift.tt/3iQn41h Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
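
The perceptron's learning rule is famously compact: whenever a sample is misclassified, add the label-weighted input to the weights. A minimal sketch on a toy AND problem, illustrative only:

    import numpy as np

    # Rosenblatt perceptron on the linearly separable AND problem.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, -1, -1, 1])              # labels in {-1, +1}
    w, b = np.zeros(2), 0.0

    for _ in range(10):                        # a few epochs suffice here
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:         # misclassified (or on the boundary)
                w += yi * xi                   # the perceptron update rule
                b += yi

    print(w, b, np.sign(X @ w + b))            # predictions match y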

Sunday, September 20, 2020

Machine Learning Basics - Xavier Giró - UPC TelecomBCN Barcelona 2020


https://ift.tt/3iQn41h Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.

Basic Machine Learning - Xavier Giró - UPC ESEIAAT Terrassa 2020


https://ift.tt/32OODCw The goal of the course is to develop deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most widely used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.
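
The course description's one-sentence definition, estimating parameters from training data and an optimization criterion, can be shown in miniature. A sketch with synthetic data, illustrative only: fitting a line by gradient descent on mean squared error:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=100)
    y = 3.0 * x - 0.5 + rng.normal(0, 0.1, size=100)  # noisy line, w=3, b=-0.5

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(1000):
        err = (w * x + b) - y
        w -= lr * 2 * np.mean(err * x)  # d(MSE)/dw
        b -= lr * 2 * np.mean(err)      # d(MSE)/db

    print(w, b)  # recovers roughly 3.0 and -0.5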

Saturday, September 19, 2020

AI Makes Video Game After Watching Tennis Matches!


❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/35NkCT7 📝 The paper "Vid2Player: Controllable Video Sprites that Behave and Appear like Professional Tennis Players" is available here: https://ift.tt/2DLeGRx ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://ift.tt/2icTBUb - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Friday, September 18, 2020

The Hardware Lottery (Paper Explained)


#ai #research #hardware We like to think that ideas in research succeed because of their merit, but this story is likely incomplete. The term "hardware lottery" describes the fact that certain algorithmic ideas are successful because they happen to be well suited to the prevalent hardware, whereas other ideas, which would be equally viable, are left behind because no accelerators for them exist. This paper is part history, part opinion, and gives plenty to think about. OUTLINE: 0:00 - Intro & Overview 1:15 - The Hardware Lottery 8:30 - Sections Overview 11:30 - Why ML researchers are disconnected from hardware 16:50 - Historic Examples of Hardware Lotteries 29:05 - Are we in a Hardware Lottery right now? 39:55 - GPT-3 as an Example 43:40 - Comparing Scaling Neural Networks to Human Brains 46:00 - The Way Forward 49:25 - Conclusion & Comments Paper: https://ift.tt/3cagbW9 Website: https://ift.tt/32GSr8T Abstract: Hardware, systems and algorithms research communities have historically had different incentive structures and fluctuating motivation to engage with each other explicitly. This historical treatment is odd given that hardware and software have frequently determined which research ideas succeed (and fail). This essay introduces the term hardware lottery to describe when a research idea wins because it is suited to the available software and hardware and not because the idea is superior to alternative research directions. Examples from early computer science history illustrate how hardware lotteries can delay research progress by casting successful ideas as failures. These lessons are particularly salient given the advent of domain specialized hardware which makes it increasingly costly to stray off of the beaten path of research ideas. Authors: Sara Hooker Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

TensorFlow.js Community "Show & Tell"


8 new demos from the #MadeWithTFJS global community pushing the boundaries of on-device machine learning in JavaScript. Enjoying The Show by Gant Laborde → https://goo.gle/2RAZcmC Real-time Semantic Segmentation in the Browser by Hugo Zanini → https://goo.gle/32CvIel WashOS by Charlie Gerard → https://goo.gle/32HDHHb Splat by Charlie Gerard → https://goo.gle/2ZGc1QQ AIDEN physio assistant by Shivay Lamba Demo → https://goo.gle/32DepJW GitHub → https://goo.gle/3mvxy8y Touch - Less by Anders Jessen / Hello Monday → https://goo.gle/35JUVW0 Twitter Sentiment Analysis by Benson Ruan → https://goo.gle/2E7i9di yogAI by Cristina Maillo → https://goo.gle/3iEURKO Attomoto by James Seo → https://goo.gle/2FGgJaj TensorFlow.js Community Show & Tell → http://goo.gle/tf-show-and-tell Subscribe to the TensorFlow channel → https://goo.gle/TensorFlow

Tuesday, September 15, 2020

Can An AI Generate Original Art? 👨‍🎨


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their report on this paper is available here: https://ift.tt/3klMjJc 📝 The paper "Rewriting a Deep Generative Model" is available here: https://ift.tt/3hzblmn Read the instructions carefully and try it here: https://ift.tt/3ixLSeo 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Sunday, September 13, 2020

Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)


#ai #chess #alphazero Chess is a very old game and both its rules and theory have evolved over thousands of years in the collective effort of millions of humans. Therefore, it is almost impossible to predict the effect of even minor changes to the game rules, because this collective process cannot be easily replicated. This paper proposes to use AlphaZero's ability to achieve superhuman performance in board games within one day of training to assess the effect of a series of small, but consequential rule changes. It analyzes the resulting strategies and sets the stage for broader applications of reinforcement learning to study rule-based systems. OUTLINE: 0:00 - Intro & Overview 2:30 - Alternate Chess Rules 4:20 - Using AlphaZero to assess rule change outcomes 6:00 - How AlphaZero works 16:40 - Alternate Chess Rules continued 18:50 - Game outcome distributions 31:45 - e4 and Nf3 in classic vs no-castling chess 36:40 - Conclusions & comments Paper: https://ift.tt/32fTRY0 My Video on AI Economist: https://youtu.be/F5aaXrIMWyU Abstract: It is non-trivial to design engaging and balanced sets of game rules. Modern chess has evolved over centuries, but without a similar recourse to history, the consequences of rule changes to game dynamics are difficult to predict. AlphaZero provides an alternative in silico means of game balance assessment. It is a system that can learn near-optimal strategies for any rule set from scratch, without any human supervision, by continually learning from its own experience. In this study we use AlphaZero to creatively explore and design new chess variants. There is growing interest in chess variants like Fischer Random Chess, because of classical chess's voluminous opening theory, the high percentage of draws in professional play, and the non-negligible number of games that end while both players are still in their home preparation. We compare nine other variants that involve atomic changes to the rules of chess. The changes allow for novel strategic and tactical patterns to emerge, while keeping the games close to the original. By learning near-optimal strategies for each variant with AlphaZero, we determine what games between strong human players might look like if these variants were adopted. Qualitatively, several variants are very dynamic. An analytic comparison shows that pieces are valued differently between variants, and that some variants are more decisive than classical chess. Our findings demonstrate the rich possibilities that lie beyond the rules of modern chess. Authors: Nenad Tomašev, Ulrich Paquet, Demis Hassabis, Vladimir Kramnik Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
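
The paper's headline comparison boils down to game outcome distributions per variant. A trivial sketch with hypothetical win/draw/loss counts (not the paper's data) showing how draw rate translates into the "decisiveness" the authors discuss:

    # Hypothetical counts per variant -- NOT the paper's data, illustration only.
    outcomes = {
        "classical":   {"white": 20, "draw": 70, "black": 10},
        "no-castling": {"white": 35, "draw": 45, "black": 20},
    }
    for variant, c in outcomes.items():
        draw_rate = c["draw"] / sum(c.values())
        print(f"{variant:12s} draw rate {draw_rate:.0%}, decisiveness {1 - draw_rate:.0%}")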

Saturday, September 12, 2020

Simulating a Rocket Launch! 🚀


❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/35NkCT7 📝 The paper "Fast and Scalable Turbulent Flow Simulation with Two-Way Coupling" is available here: https://ift.tt/3k9Tggw Vishnu Menon’s wind tunnel test video: https://www.youtube.com/watch?v=_q6ozALzkF4 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://ift.tt/2icTBUb - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Tuesday, September 8, 2020

Freestyle Rap "3:20" by Baba Brinkman (Quantum Summer Symposium 2020)


Vancouver-born, New York-based rap artist and science communicator Baba Brinkman (https://ift.tt/33KfnSe) performs an improvised freestyle rap inspired by comments in the YouTube Live chat stream at the end of Day 2 of Google's Quantum Summer Symposium (July 23, 2020). Check out the playlist for more videos from QSS 2020. Google's Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Day 2 Rap Up by Baba Brinkman (Quantum Summer Symposium 2020)


Vancouver-born, New York-based rap artist and science communicator Baba Brinkman (https://ift.tt/33KfnSe) performs a Rap Up of Day 2 of Google's Quantum Summer Symposium (July 23, 2020). This rap was spontaneously composed from content presented during the conference. Check out the playlist for more videos from QSS 2020. Google's Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

This AI Creates Human Faces From Your Sketches!


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their instrumentation of a previous paper is available here: https://ift.tt/3g17wXR 📝 The paper "DeepFaceDrawing: Deep Generation of Face Images from Sketches" is available here: https://ift.tt/3cywdaS Alternative paper link if it is down: https://ift.tt/35grwCz Our earlier video on sketch tutorials is available here: https://www.youtube.com/watch?v=brs1qCDzRdk 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Monday, September 7, 2020

Learning to summarize from human feedback (Paper Explained)


#summarization #gpt3 #openai Text Summarization is a hard task, both in training and evaluation. Training is usually done maximizing the log-likelihood of a human-generated reference summary, while evaluation is performed using overlap-based metrics like ROUGE. Both significantly undervalue the breadth and intricacies of language and the nature of the information contained in text summaries. This paper by OpenAI includes direct human feedback both in evaluation and - via reward model proxies - in training. The final model even outperforms single humans when judged by other humans and is an interesting application of reinforcement learning with humans in the loop. OUTLINE: 0:00 - Intro & Overview 5:35 - Summarization as a Task 7:30 - Problems with the ROUGE Metric 10:10 - Training Supervised Models 12:30 - Main Results 16:40 - Including Human Feedback with Reward Models & RL 26:05 - The Unknown Effect of Better Data 28:30 - KL Constraint & Connection to Adversarial Examples 37:15 - More Results 39:30 - Understanding the Reward Model 41:50 - Limitations & Broader Impact Paper: https://ift.tt/2ZebyoU Blog: https://ift.tt/2QTPoUr Code: https://ift.tt/2Z3dK2k Samples: https://ift.tt/3jXHu91 My Video on GPT-3: https://youtu.be/SY5PvZrJhLE My Video on GPT-2: https://youtu.be/u1_qMdb0kYU Abstract: As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these metrics are rough proxies for what we really care about---summary quality. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning. We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone. Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning. We conduct extensive analyses to understand our human feedback dataset and fine-tuned models. We establish that our reward model generalizes to new datasets, and that optimizing our reward model results in better summaries than optimizing ROUGE according to humans. We hope the evidence from our paper motivates machine learning researchers to pay closer attention to how their training loss affects the model behavior they actually want. Authors: Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
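
The reward-model objective described above has a compact form. A sketch with hypothetical scores, illustrative only, of the pairwise comparison loss -log sigmoid(r_chosen - r_rejected) that pushes the model to score the human-preferred summary higher:

    import numpy as np

    def preference_loss(r_chosen, r_rejected):
        # -log(sigmoid(r_chosen - r_rejected)), written stably with log1p.
        return np.log1p(np.exp(-(r_chosen - r_rejected)))

    # Hypothetical reward-model scores for two summaries of the same post.
    print(preference_loss(1.3, 0.4))  # small loss: the ordering is already right
    print(preference_loss(0.2, 1.1))  # larger loss: the ordering is wrong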

Saturday, September 5, 2020

Can We Simulate Coalescing Bubbles? 🌊


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their mentioned post is available here: https://ift.tt/2ztA2QZ 📝 The paper "Constraint Bubbles and Affine Regions: Reduced Fluid Models for Efficient Immersed Bubbles and Flexible Spatial Coarsening" is available here: https://ift.tt/2F2i5vN Check out Blender here (free): https://ift.tt/1IscRzJ If you wish to play with some fluids, try the FLIP Fluids plugin (paid, with free demo): https://flipfluids.com/ Note that Blender also contains Mantaflow, its own fluid simulation program and that's also great! 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Friday, September 4, 2020

Second poem by Emi Mahmoud (Quantum Summer Symposium 2020)


Emtithal "Emi" Mahmoud (emi-mahmoud.com) performs the second of two poems she wrote for Google's Quantum Summer Symposium 2020. This presentation was recorded on Day 2 of the event (July 23, 2020). Emi Mahmoud – Poet, Activist, Founder | UNHCR Goodwill Ambassador → https://goo.gle/3lHnTLX Google’s Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Quantum Algorithms for Systems of Linear Equations (Quantum Summer Symposium 2020)


Rolando Somma from the Theoretical Division of the Los Alamos National Laboratory talks about quantum algorithms for systems of linear equations. This presentation was recorded on Day 2 of Google's Quantum Summer Symposium 2020 (July 23, 2020). Google’s Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Towards a Quantum LINPACK Benchmark (Quantum Summer Symposium 2020)


Lin Lin of the University of California, Berkeley presents on the quantum LINPACK benchmark. This presentation was recorded on Day 2 of the Quantum Summer Symposium 2020 (July 23, 2020). Google’s Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Quantum Money (Quantum Summer Symposium 2020)


Peter Shor of MIT presents a new quantum money protocol. This presentation was recorded on Day 2 of Google's Quantum Summer Symposium 2020 (July 23, 2020). Google’s Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Quantum Singular Value Transformation (Quantum Summer Symposium 2020)


András Gilyén of Caltech talks about Quantum Singular Value Transformation, a unified framework of quantum algorithms. This presentation was recorded on Day 2 of Google's Quantum Summer Symposium 2020 (July 23, 2020). Google’s Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Day 2: Opening keynote by Paul Dabbar (Quantum Summer Symposium 2020)


Paul Dabbar, U.S. Department of Energy Under Secretary for Science, delivers the opening address on the second day of Google's Quantum Summer Symposium 2020. This video was recorded on Day 2 of the event (July 23, 2020). Google’s Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

A Multi-tool for your Quantum Algorithmic Toolbox (Quantum Summer Symposium 2020)


Shelby Kimmel of Middlebury College presents a tool that can be used to design all kinds of quantum algorithms. This presentation was recorded on Day 2 of Google's Quantum Summer Symposium 2020 (July 23, 2020). Check out the playlist for more videos from QSS 2020. Google’s Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Thursday, September 3, 2020

Quantum gravity in the lab (Quantum Summer Symposium '20)


Stefan Leichenauer talks about quantum gravity applied to table-top experiments. This presentation was recorded on Day 1 of the Quantum Summer Symposium 2020. See more videos from the Quantum Summer Symposium '20 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Hartree-Fock on Sycamore (Quantum Summer Symposium 2020)


Nicholas Rubin presents on Hartree-Fock theory on the Sycamore processor. This presentation was recorded on Day 1 of the Quantum Summer Symposium 2020. Google's Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Google's quantum computing service (Quantum Summer Symposium '20)


Erik Lucero and Dave Bacon share details about Google's quantum computing service. This presentation was recorded on Day 1 of the Quantum Summer Symposium 2020. See more videos from the Quantum Summer Symposium '20 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Quantum circuit "kinetics" (Quantum Summer Symposium 2020)


Kostyantyn Kechedzhi presents an experiment on quantum circuit "kinetics" using the Sycamore processor. This presentation was recorded on Day 1 of the Quantum Summer Symposium 2020. Google's Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Quantum Approximate Optimization (Quantum Summer Symposium 2020)


Matthew Harrigan shares work running quantum approximate optimization on the Sycamore processor. This presentation was recorded on Day 1 of the Quantum Summer Symposium 2020. Google’s Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Quantum Chess (Quantum Summer Symposium '20)


Megan Potoski, Chris Cantwell, and Doug Strain introduce quantum chess as a fun tool for quantum education. This presentation was recorded on Day 1 of the Quantum Summer Symposium 2020. See more videos from the Quantum Summer Symposium '20 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Opening keynote by Hartmut Neven (Quantum Summer Symposium 2020)


Hartmut Neven gives the opening keynote with updates from Google AI Quantum. This presentation was recorded on Day 1 of the Quantum Summer Symposium 2020. Google's Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

TensorFlow Quantum (Quantum Summer Symposium 2020)


Murphy Niu introduces TensorFlow Quantum, an open source library for quantum machine learning (QML). This presentation was recorded on Day 1 of the Quantum Summer Symposium 2020. Google's Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow
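
For a flavor of what the library looks like in practice, a minimal sketch assuming tensorflow-quantum and cirq are installed; the single-qubit circuit and readout are arbitrary choices for illustration, not from the talk:

    import cirq, sympy
    import tensorflow as tf
    import tensorflow_quantum as tfq

    qubit = cirq.GridQubit(0, 0)
    theta = sympy.Symbol("theta")
    model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))  # parameterized circuit
    readout = cirq.Z(qubit)                              # observable to measure

    # A Keras model whose trainable weight is the circuit parameter theta;
    # input circuits arrive serialized as tf.string tensors.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(), dtype=tf.string),
        tfq.layers.PQC(model_circuit, readout),
    ])
    empty = tfq.convert_to_tensor([cirq.Circuit()])      # empty "data" circuit
    print(model(empty))                                  # expectation of Z in [-1, 1]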

Certifiable Random Number Generation (Quantum Summer Symposium '20)


A presentation on certifiable random number generation with quantum processors, recorded on Day 1 of the Quantum Summer Symposium 2020. See more videos from the Quantum Summer Symposium '20 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Emtithal Mahmoud live poetry reading at QSS '20 Day 1


Emtithal "Emi" Mahmoud performs the first of her two poems written for the Quantum Summer Symposium. This presentation was recorded on Day 1 of the Quantum Summer Symposium 2020. See more videos from the Quantum Summer Symposium '20 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Fermi-Hubbard on Sycamore (Quantum Summer Symposium '20)


Zhang Jiang presents on how to simulate the Fermi-Hubbard model on the Sycamore processor. This presentation was recorded on Day 1 of the Quantum Summer Symposium 2020. See more videos from the Quantum Summer Symposium '20 playlist → https://goo.gle/2Z149sN Subscribe to TensorFlow → https://goo.gle/TensorFlow

Wednesday, September 2, 2020

Self-classifying MNIST Digits (Paper Explained)


#ai #biology #machinelearning Neural Cellular Automata are models for how living creatures can use local message passing to reach global consensus without a central authority. This paper teaches pixels of an image to communicate with each other and figure out as a group which digit they represent. On the way, the authors have to deal with pesky side-effects that come from applying the Cross-Entropy Loss in combination with a Softmax layer, but ultimately achieve a self-sustaining, stable and continuous algorithm that models living systems. OUTLINE: 0:00 - Intro & Overview 3:10 - Neural Cellular Automata 7:30 - Global Agreement via Message-Passing 11:05 - Neural CAs as Recurrent Convolutions 14:30 - Training Continuously Alive Systems 17:30 - Problems with Cross-Entropy 26:10 - Out-of-Distribution Robustness 27:10 - Chimeric Digits 27:45 - Visualizing Latent State Dimensions 29:05 - Conclusion & Comments Paper: https://ift.tt/2EHCGFa My Video on Neural CAs: https://youtu.be/9Kec_7WFyp0 Abstract: Growing Neural Cellular Automata [1] demonstrated how simple cellular automata (CAs) can learn to self-organise into complex shapes while being resistant to perturbations. Such a computational model approximates a solution to an open question in biology, namely, how do cells cooperate to create a complex multicellular anatomy and work to regenerate it upon damage? The model parameterizing the cells’ rules is parameter-efficient, end-to-end differentiable, and illustrates a new approach to modeling the regulation of anatomical homeostasis. In this work, we use a version of this model to show how CAs can be applied to a common task in machine learning: classification. We pose the question: can CAs use local message passing to achieve global agreement on what digit they compose? Authors: Ettore Randazzo, Alexander Mordvintsev, Eyvind Niklasson, Michael Levin, Sam Greydanus Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
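
The "Neural CAs as Recurrent Convolutions" point from the outline can be sketched in a few lines. A toy single-channel version, with random weights standing in for the learned update network, illustrative only:

    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)
    state = rng.random((28, 28))               # one-channel toy cell grid

    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    identity = np.zeros((3, 3)); identity[1, 1] = 1.0
    kernels = (identity, sobel_x, sobel_x.T)   # fixed "perception" filters

    def ca_step(state, w):
        # Perceive: each cell gathers local information via depthwise convolution.
        feats = np.stack([convolve2d(state, k, mode="same", boundary="wrap")
                          for k in kernels], axis=-1)
        # Update: the same tiny network at every cell (a 1x1 convolution),
        # applied over and over -- hence "recurrent convolution".
        return state + 0.1 * np.tanh(feats @ w)

    w = rng.normal(size=3)                     # stand-in for learned weights
    for _ in range(10):
        state = ca_step(state, w)
    print(state.shape)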

Tuesday, September 1, 2020

OpenAI’s Image GPT Completes Your Images With Style!


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their mentioned post is available here: https://ift.tt/2YSedEE 📝 The paper "Generative Pretraining from Pixels (Image GPT)" is available here: https://ift.tt/2Yap1hh Tweets: Website layout: https://twitter.com/sharifshameem/status/1283322990625607681 Plots: https://twitter.com/aquariusacquah/status/1285415144017797126?s=12 Typesetting math: https://twitter.com/pavtalk/status/1285410751092416513 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

I Woke Up To $1,000 from an App I made 3 Years Ago!


So I released an app and it made no money, until one day, three years later, I woke up to $1,000 in my bank account. This is the story of how much money my app made. I made a bot that knows everything: https://bit.ly/2G9QusQ Samsays Website: https://bit.ly/2QIEz7m Empleh's Interview: https://bit.ly/31JuSMg Jacob Seeger's Social Media: https://bit.ly/2EC1IGp SUBSCRIBE FOR MORE: http://jabrils.com/yt WISHLIST MY VIDEO GAME: https://ift.tt/33NgHFz SUPPORT ON PATREON: https://ift.tt/2pZACkg JOIN DISCORD: https://ift.tt/2QkDa9O Please follow me on social networks: twitter: https://twitter.com/jabrils_ instagram: https://ift.tt/2QNVYvI REMEMBER TO ALWAYS FEED YOUR CURIOSITY