Thursday, April 28, 2022

Tutorial on Explanations in Interactive Machine Learning @ AAAI-22


Recording of the AAAI-22 Tutorial on Explanations in Interactive Machine Learning. Slides: https://sites.google.com/view/aaai22-ximl-tutorial/home

00:00 Motivation and Challenges - Öznur Alkan (Optum-United Health Group)
14:42 Interacting via Local Explanations - Stefano Teso (University of Trento)
36:47 Interacting via Rule-based Explanations - Elizabeth Daly (IBM Research Ireland)
58:34 Interacting via Concept-based Explanations - Wolfgang Stammer (TU Darmstadt)

Description: This tutorial is intended for Artificial Intelligence researchers and practitioners, as well as domain experts interested in human-in-the-loop machine learning, including interactive recommendation and active learning. The participants will gain an understanding of current developments in interactive machine learning from rich human feedback – with an emphasis on white-box interaction and explanation-guided learning – as well as a conceptual map of the variety of methods available and of the relationships between them. The main goal is to inform the audience about the state of the art in explanations for interactive machine learning, open issues and research directions, and how these developments relate to the broader context of machine learning and AI.

Tuesday, April 26, 2022

NVIDIA's Ray Tracing AI - This is The Next Level! 🤯


❤️ Check out Weights & Biases and say hi in their community forum here: https://ift.tt/QZfrc9a

📝 The paper "Neural Control Variates" is available here:
https://ift.tt/hg4jOpP
https://ift.tt/3SCc7TM

🔆 The free light transport course is available here. You'll love it! https://ift.tt/TE48eOl

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://ift.tt/a9eFJxn
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/a9eFJxn

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Instagram: https://ift.tt/QKFMxB3
Twitter: https://twitter.com/twominutepapers
Web: https://ift.tt/CVwpvMH

#nvidia #rtx

Monday, April 25, 2022

BBDS-Ramadan 22- 8 : Introduction to Machine Learning


Learn Data Science and AI for free, in exchange for a $150 donation to any Church/Masjid/NGO. Register to access the training materials: https://form.jotform.com/BigBDS/RDonation

Sunday, April 24, 2022

A. I. Learns to Play Starcraft 2 (Reinforcement Learning)


Tinkering with reinforcement learning via Stable Baselines 3 and Starcraft 2.

Code and model: https://github.com/Sentdex/SC2RL
Stable Baselines 3 tutorial: https://pythonprogramming.net/introduction-reinforcement-learning-stable-baselines-3-tutorial/
Neural Networks from Scratch book: https://nnfs.io
Channel membership: https://www.youtube.com/channel/UCfzlCWGWYyIQ0aLC5w48gBQ/join
Discord: https://discord.gg/sentdex
Reddit: https://www.reddit.com/r/sentdex/
Support the content: https://pythonprogramming.net/support-donate/
Twitter: https://twitter.com/sentdex
Instagram: https://instagram.com/sentdex
Facebook: https://www.facebook.com/pythonprogramming.net/
Twitch: https://www.twitch.tv/sentdex

#artificialintelligence #machinelearning #python
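Stable Baselines 3 ships the algorithms ready-made, but the reinforcement-learning loop underneath is simple enough to sketch without any library. Below is a minimal tabular Q-learning agent on a made-up five-state corridor environment — a generic illustration of the RL loop, not the SC2RL code; the environment, hyperparameters, and episode count are all invented for this sketch:

```python
import random

# Toy corridor: states 0..4, start at 0, reward 1 for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left / right

def step(state, action):
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: Q[state][action_index], initialized to zero.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2
random.seed(0)

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # Q-learning update toward the bootstrapped target.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

# The learned greedy policy for the non-terminal states (1 = move right).
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)]
print(policy)
```

Libraries like Stable Baselines 3 replace the hand-rolled table and update rule with neural-network policies (PPO, DQN, ...) behind a `model.learn()` interface, but the interaction loop — observe, act, receive reward, update — is the same.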

OpenAI’s New AI Writes The Story Of Your Life! ✍️


❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/1jcgYOm

📝 The post about GPT-3's Edit and Insert capabilities is available here: https://ift.tt/9TSyg4X

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://ift.tt/NvXDYRj
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/NvXDYRj

Thumbnail background image credit: https://ift.tt/ayjKplB
Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Instagram: https://ift.tt/XBT7aA0
Twitter: https://twitter.com/twominutepapers
Web: https://ift.tt/UBo2NZV

#OpenAI #GPT3


Friday, April 22, 2022

LAION-5B: 5 billion image-text-pairs dataset (with the authors)


#laion #clip #dalle

LAION-5B is an open, free dataset consisting of over 5 billion image-text pairs. Today's video is an interview with three of its creators. We dive into the mechanics and challenges of operating at such a large scale, how to keep costs low, what new possibilities are enabled by open datasets like this, and how best to handle safety and legal concerns.

OUTLINE:
0:00 - Intro
1:30 - Start of Interview
2:30 - What is LAION?
11:10 - What are the effects of CLIP filtering?
16:40 - How big is this dataset?
19:05 - Does the text always come from the alt-property?
22:45 - What does it take to work at scale?
25:50 - When will we replicate DALL-E?
31:30 - The surprisingly efficient pipeline
35:20 - How do you cover the S3 costs?
40:30 - Addressing safety & legal concerns
55:15 - Where can people get started?

References:
LAION website: https://laion.ai/
LAION Discord: https://ift.tt/yE6LTaM
LAION-5B: https://ift.tt/vpE4Cc0
img2dataset tool: https://ift.tt/AVxELkf
LAION-400M: https://ift.tt/AQpCnYk

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ift.tt/QndWaFZ
BitChute: https://ift.tt/iceDLjA
LinkedIn: https://ift.tt/mdLFEBo
BiliBili: https://ift.tt/42EvIDB

If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://ift.tt/vt8TxkY
Patreon: https://ift.tt/NSwXsWE
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Thursday, April 21, 2022

OpenAI DALL·E 2: Top 10 Insane Results! 🤖


❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/j9OdCmF

📝 The paper "Hierarchical Text-Conditional Image Generation with CLIP Latents" is available here:
https://ift.tt/gbBaiIs
https://ift.tt/tf8nVAO

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://ift.tt/nR0bP5M
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/nR0bP5M

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Chapters:
00:00 Intro
00:34 GPT-3 - OpenAI's Text Magic
01:18 Image-GPT Was Born
01:55 Dall-E
02:44 Dall-E 2!
03:30 1. Panda mad scientist
03:55 2. Teddy bear mad scientists
04:20 3. Teddy skating on Times Square
05:05 4. Nebula dunking
05:30 5. Cat Napoleon
05:57 6. Flamingos everywhere!
06:49 7. Don't forget the corgis!
07:43 8. It can do interior design!
08:50 9. Dall-E 2 vs Dall-E 3
09:28 10. Not perfect
09:57 Bonus: Hold on to your papers!
10:18 It draws itself
10:42 One more thing
11:07 Another legendary paper

Károly Zsolnai-Fehér's links:
Instagram: https://ift.tt/CDsuWQ2
Twitter: https://twitter.com/twominutepapers
Web: https://ift.tt/Ql6YDzS

#OpenAI #DallE #DallE2

Tuesday, April 19, 2022

Federated Reconstruction for Matrix Factorization (Building recommendation systems with TensorFlow)


Looking to train models for on-device inference without gathering any sensitive user data? Developer Advocate Wei Wei talks about Federated Reconstruction for matrix factorization, a novel technique for building recommendation systems using TensorFlow Federated (TFF). Follow along as he takes you through a cross-device federated learning example.

Resources:
Federated learning video → https://goo.gle/3qttKIM
TensorFlow Federated → https://goo.gle/3twlycG
Collaborative learning video → https://goo.gle/37Wd0DB
Federated Reconstruction for Matrix Factorization → https://goo.gle/3wwBRYP
A Scalable Approach for Partially Local Federated Learning → https://goo.gle/3wukl7o
Federated Reconstruction for Matrix Factorization tutorial → https://goo.gle/3wwBRYP
Federated Reconstruction: Partially Local Federated Learning paper → https://goo.gle/3isZNnx
TFF FedRecon libraries → https://goo.gle/3wwhLxG
Federated Learning Workshop - FLA Research Demos & TFF Tutorials → https://goo.gle/3D3i2cZ

Chapters:
0:00 - Introduction
0:55 - What is federated learning?
1:40 - Cross-device federated learning example
5:42 - Code walkthrough
7:15 - Wrap up

Watch more Building recommendation systems with TensorFlow → https://goo.gle/3Bi8NUS
Subscribe to TensorFlow → https://goo.gle/TensorFlow

product: TensorFlow - TensorFlow Recommenders; fullname: Wei Wei;
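The core idea of Federated Reconstruction — keep each user's embedding private on the device and reconstruct it from scratch every round, while only item-embedding updates travel back to the server — can be illustrated with a deliberately tiny, dependency-free sketch. Everything below (the 2-dimensional embeddings, learning rates, round counts, and the two-item rating data) is invented for illustration; TFF's actual FedRecon APIs look nothing like this:

```python
DIM = 2  # toy embedding size

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# Server-side global parameters: one embedding per item.
item_emb = {i: [0.5] * DIM for i in ["a", "b"]}

def reconstruct_user(ratings, item_emb, steps=50, lr=0.1):
    """On-device: rebuild the user's private embedding from scratch.
    It is never uploaded -- this is the 'partially local' part."""
    u = [0.0] * DIM
    for _ in range(steps):
        for item, r in ratings.items():
            err = dot(u, item_emb[item]) - r
            for k in range(DIM):
                u[k] -= lr * err * item_emb[item][k]
    return u

def item_gradients(ratings, item_emb, u):
    """On-device: squared-error gradients w.r.t. the global item embeddings only."""
    return {item: [(dot(u, item_emb[item]) - r) * u[k] for k in range(DIM)]
            for item, r in ratings.items()}

ratings = {"a": 5.0, "b": 1.0}  # private data, never leaves the device

for _ in range(300):  # federated rounds
    u = reconstruct_user(ratings, item_emb)       # local step, kept private
    grads = item_gradients(ratings, item_emb, u)  # only this is shared
    for item, g in grads.items():                 # server applies the update
        for k in range(DIM):
            item_emb[item][k] -= 0.1 * g[k]

# Predictions recover the private ratings without the user embedding
# ever having been sent to the server.
u = reconstruct_user(ratings, item_emb)
print(round(dot(u, item_emb["a"]), 2), round(dot(u, item_emb["b"]), 2))
```

A real deployment would aggregate gradients across many devices (with a single simulated device the split is invisible in the numbers, but the privacy boundary is the point), and TFF additionally partitions each client's data into reconstruction and update sets.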

Sunday, April 17, 2022

Author Interview - Transformer Memory as a Differentiable Search Index


#neuralsearch #interview #google

This is an interview with the authors Yi Tay and Don Metzler. Paper review video: https://youtu.be/qlB0TPBQ7YY

Search engines work by building an index and then looking up things in it. Usually, that index is a separate data structure. In keyword search, we build and store reverse indices. In neural search, we build nearest-neighbor indices. This paper does something different: it directly trains a Transformer to return the ID of the most relevant document. No similarity search over embeddings or anything like this is performed, and no external data structure is needed, as the entire index is essentially captured by the model's weights. The paper experiments with various ways of representing documents and training the system, which works surprisingly well!

OUTLINE:
0:00 - Intro
0:50 - Start of Interview
1:30 - How did this idea start?
4:30 - How does memorization play into this?
5:50 - Why did you not compare to cross-encoders?
7:50 - Instead of the ID, could one reproduce the document itself?
10:50 - Passages vs documents
12:00 - Where can this model be applied?
14:25 - Can we make this work on large collections?
19:20 - What's up with the NQ100K dataset?
23:55 - What is going on inside these models?
28:30 - What's the smallest scale to obtain meaningful results?
30:15 - Investigating the document identifiers
34:45 - What's the end goal?
38:40 - What are the hardest problems currently?
40:40 - Final comments & how to get started

Paper: https://ift.tt/zmBVToi

Abstract: In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup.

Authors: Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler

Links:
Merch: https://ift.tt/0P8Qzdk
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ift.tt/85Phjyo
BitChute: https://ift.tt/LxX05an
LinkedIn: https://ift.tt/A5I1swC
BiliBili: https://ift.tt/kF3XrQj

If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://ift.tt/mv6ZxV0
Patreon: https://ift.tt/sxv7Uj0
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Python Tutorial - Python for Beginners of machine learning | Data Science | AI


Python tutorial for beginners: go from zero to hero with Python (includes a machine learning project). The best way to learn #machinelearning #python #pandas #numpy. The full training list will come every week. Do SUBSCRIBE, LIKE & comment.

Saturday, April 16, 2022

Transformer Memory as a Differentiable Search Index (Machine Learning Research Paper Explained)


#dsi #search #google

Search engines work by building an index and then looking up things in it. Usually, that index is a separate data structure. In keyword search, we build and store reverse indices. In neural search, we build nearest-neighbor indices. This paper does something different: it directly trains a Transformer to return the ID of the most relevant document. No similarity search over embeddings or anything like this is performed, and no external data structure is needed, as the entire index is essentially captured by the model's weights. The paper experiments with various ways of representing documents and training the system, which works surprisingly well!

Sponsor: Diffgram https://ift.tt/TFJdmlo

OUTLINE:
0:00 - Intro
0:45 - Sponsor: Diffgram
1:35 - Paper overview
3:15 - The search problem, classic and neural
8:15 - Seq2seq for directly predicting document IDs
11:05 - Differentiable search index architecture
18:05 - Indexing
25:15 - Retrieval and document representation
33:25 - Training DSI
39:15 - Experimental results
49:25 - Comments & Conclusions

Paper: https://ift.tt/cZ7D32v

Abstract: In this paper, we demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. To this end, we introduce the Differentiable Search Index (DSI), a new paradigm that learns a text-to-text model that maps string queries directly to relevant docids; in other words, a DSI model answers queries directly using only its parameters, dramatically simplifying the whole retrieval process. We study variations in how documents and their identifiers are represented, variations in training procedures, and the interplay between models and corpus sizes. Experiments demonstrate that given appropriate design choices, DSI significantly outperforms strong baselines such as dual encoder models. Moreover, DSI demonstrates strong generalization capabilities, outperforming a BM25 baseline in a zero-shot setup.

Authors: Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, Donald Metzler

Links:
Merch: https://ift.tt/MT45pbH
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ift.tt/FvdtSz5
BitChute: https://ift.tt/nfsduED
LinkedIn: https://ift.tt/mvSaQ6t
BiliBili: https://ift.tt/NP8oCRE

If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://ift.tt/CFq8A13
Patreon: https://ift.tt/r1NmzXi
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
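For contrast, the classic first step that DSI replaces — building a separate reverse (inverted) index and then looking query terms up in it — fits in a few lines of Python. This is a generic textbook sketch with made-up toy documents, not code from the paper:

```python
from collections import defaultdict

docs = {
    "doc1": "neural networks learn representations",
    "doc2": "inverted indices power keyword search",
    "doc3": "transformers map queries to document ids",
}

# Indexing: map each term to the set of documents containing it.
index = defaultdict(set)
for docid, text in docs.items():
    for term in text.split():
        index[term].add(docid)

# Retrieval: rank documents by how many query terms they contain.
def search(query):
    scores = defaultdict(int)
    for term in query.split():
        for docid in index.get(term, set()):
            scores[docid] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("keyword search"))  # ['doc2']
```

DSI collapses both steps into the model itself: "indexing" becomes training the Transformer to emit a document's identifier given its text, and retrieval becomes a single forward pass that decodes a docid directly, with no external data structure to store or query.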

Friday, April 15, 2022

Supervised Learning Algorithms Quiz - Machine Learning


Video Contains:
1. 10 short quizzes
2. You can practice the Supervised Learning Algorithms Quiz - Machine Learning at the following URL: https://www.gopichandrakesan.com/machine-learning-supervised-learning/

Machine Learning - https://www.youtube.com/watch?v=L8nuQ...
Artificial Intelligence - https://www.youtube.com/watch?v=6VoVb...

🔗 Social Medias 🔗
🌎 Website: https://gopichandrakesan.com/
LinkedIn: https://www.linkedin.com/in/gopichand...
Facebook: https://www.facebook.com/RCGopiTechie
Twitter: https://twitter.com/RCGopiTechie
GitHub: https://github.com/rcgopi100

✪ Tags ✪ - Supervised Learning Algorithms - Machine Learning - Artificial Intelligence - Deep Learning
✪ Hashtags ✪ #Quiz #SupervisedLearning #MachineLearning #ArtificialIntelligence #DeepLearning #AI #ML

Thursday, April 14, 2022

How-To: Use AI Features for Quicker Photo Editing (Video Tutorial)


In this CreativePro Week Sneak Peek video, Jeff Carlson talks about AI, or machine learning. He demos Photoshop using AI to intelligently find and isolate individual objects in a photo. He then moves over to InDesign and crops an image using Content-Aware Fit. This is a sneak peek from Jeff's session at CreativePro Week 2022 in Washington, D.C., May 9-13. Details on this amazing how-to conference can be found at https://CreativeProWeek.com.

🔌 CONNECT WITH US
If you use InDesign, Photoshop, Illustrator, or Acrobat, CreativePro Network is your best resource to master the tools and raise your skillset to the next level.
🔔 Subscribe for more essential design tips - https://www.youtube.com/c/creativepro?sub_confirmation=1
💡 Sign up to receive our weekly roundup of essential HOW-TO resources - https://creativepro.com/become-a-member/
🚀 Increase your productivity by attending a CreativePro Event - https://creativepro.com/events
🤯 Learn mind-blowing tips, techniques, and best practices at CreativePro Week - https://CreativeProWeek.com
👉 Access essential HOW-TO articles, books, magazines, downloadables, and more - https://CreativePro.com

►This video is sponsored by CI HUB
The philosophy behind the CI HUB Connector for Adobe CC and Microsoft is to connect you with data domains throughout your marketing ecosystems. The CI HUB Connector is an in-app single source of access to your brand's digital assets: simple, fast, and without additional cost. Our portfolio of data-domain partners covers both on-premise and cloud-hosted solutions, with data models for DAM, MAM, PIM, MDM, or CMS. In addition to marketing data solutions, we also connect to stock providers and cloud storage services. Together with our system vendors, CI HUB creates the best possible connection, and we are always motivated by our customers to deliver seamless access to data domains in their marketing ecosystems – e.g. Adobe Photoshop, Adobe InDesign, Adobe Illustrator, or Adobe Premiere Pro, as well as Microsoft PowerPoint, Word, and Excel.
https://ci-hub.com/
https://www.facebook.com/CIHUBGMBH
https://twitter.com/ci_hub_gmbh
https://www.linkedin.com/company/20090498/admin/
https://www.youtube.com/channel/UCJP5Oa3bj31DsTjQ0Y540Og/videos
https://www.instagram.com/cihub.de/

Tutorial - Optimizing the Performance of Fog Computing Environments Using AI and Co-Simulation


Tutorial by Shreshth Tuli and Giuliano Casale: Optimizing the Performance of Fog Computing Environments Using AI and Co-Simulation.

Wednesday, April 13, 2022

FAIRmat Tutorials 3: Overview of the artificial-intelligence toolkit


Luca Ghiringhelli gives an overview of the NOMAD artificial-intelligence toolkit.

More info on the FAIRmat tutorial series: https://www.fair-di.eu/fairmat-tutorials-home
Tutorial 3 playlist: https://youtube.com/playlist?list=PLrRaxjvn6FDWZS2Gacn92Jyl8NA-FY1z2


Tuesday, April 12, 2022

Building an on-device recommendation model with TFLite


Developer Advocate Wei Wei shares how to use the adaptive on-device recommendation framework based on TensorFlow Lite to build an on-device recommendation model.

Resources:
TensorFlow Lite homepage → https://goo.gle/3NeEnc7
Adaptive Framework for On-device Recommendation blog → https://goo.gle/36nDiy3
TFLite recommendation overview → https://goo.gle/36f9cNn
TFLite recommendation code → https://goo.gle/3qtkILP
TFLite Model Maker → https://goo.gle/3iw3kS8

Chapters:
0:00 - Introduction
0:19 - Why use recommendations on device
3:39 - Code walkthrough
6:51 - Wrap up

Watch more Building recommendation systems with TensorFlow → https://goo.gle/3Bi8NUS
Subscribe to TensorFlow → https://goo.gle/TensorFlow

product: TensorFlow - TensorFlow Recommenders; fullname: Wei Wei;


Saturday, April 9, 2022

Sentiment Analysis Machine learning prototype | AI Assignment


MIT's AI: Reconstruct your portrait using only your voice!


@Massachusetts Institute of Technology (MIT) #MIT #Speech2face #ai_research

Research paper 👇 https://arxiv.org/abs/1905.09773
For Hindi Shorts and Videos of Axis, click here 👇 https://youtube.com/channel/UCnWVp_3nPqgMtuuRSBE3AyQ

Turning speech into text has become so common that it's a part of almost every smartphone. But have you ever thought about turning your speech into a portrait? Researchers have, and they've even made it possible. Artificial intelligence scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created AI that turns short snippets of audio speech recordings into a human face. As if this weren't both stunning and creepy enough, the results are actually fairly accurate, too!

The CSAIL researchers published a paper about their invention back in 2019. It's an algorithm called, not surprisingly, Speech2Face, and the name says it all. In the demo, you can take a peek at how it works and what the results look like. At the very top of the page, you'll hear the audio snippets of different people speaking. Their real photo is just for your reference, and Speech2Face recreated their portrait based only on a three-second recording of their voice. Interestingly enough, the AI seems to work better when the audio clips are longer. The researchers have shared some examples of faces recreated from three versus six seconds of speech. Of course, the results are still far from perfect, but they're still amazing and eerily accurate. Still, the AI sometimes completely misses the point and mixes up the gender, age, and ethnicity of the subject.

Even though the algorithm was created for scientific purposes only, the question of privacy has been raised. The team claims that their method "cannot recover the true identity of a person from their voice," i.e. recreate an exact image of their face. However, if the algorithm becomes so sophisticated that it could recreate super-realistic faces, what impact could it have? The first thought that comes to my mind is that technology like this could be of immense help to police officers and detectives... or I'm just watching too many crime TV shows. On the other hand, it could have a negative impact on YouTube and TikTok stars who are trying to keep their private life from followers, so they only do voiceovers and don't appear in front of the camera. But like every technology, I guess this one could be super-useful in good hands, and dangerous in bad ones.

The research was published on arXiv. Source: arXiv

OpenAI’s New AI Thinks That Birds Aren’t Real! 🕊️


❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/PhxcZfv

📝 The #OpenAI paper "Aligning Language Models to Follow Instructions" is available here: https://ift.tt/nKszUP2

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:
- https://ift.tt/W2whBCy
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/W2whBCy

Thumbnail background image credit:
- https://ift.tt/kFdaLZv
- https://ift.tt/oyFVExO
Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/qktYwnx

Chapters:
00:00 Intro
01:16 Moon landing
02:10 Round 1 - Smashing pumpkins
03:36 Round 2 - Code summarization
04:23 Round 3 - Frog poem!
05:06 Users love it
05:50 What? Birds aren't real?

Károly Zsolnai-Fehér's links:
Instagram: https://ift.tt/vHCkbyQ
Twitter: https://twitter.com/twominutepapers
Web: https://ift.tt/kLWcRs6

Friday, April 8, 2022

MIT's AI: Reconstruct your portrait using only your voice!


@Massachusetts Institute of Technology (MIT) #MIT #Speech2face #ai_research Research paper 👇 https://arxiv.org/abs/1905.09773 For Hindi Shorts and Video of Axis Click here 👇 https://youtube.com/channel/UCnWVp_3nPqgMtuuRSBE3AyQ Turning speech into text has become so common that it’s a part of almost every smartphone. But have you ever thought about turning your speech into a portrait? Researchers have, and they’ve even made it possible. Artificial intelligence scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created an AI that turns short snippets of recorded speech into a human face. As if this weren’t both stunning and creepy enough, the results are actually fairly accurate, too! The CSAIL researchers published a paper about their invention back in 2019. It’s an algorithm called, not surprisingly, Speech2Face, and the name says it all. In the demo, you can take a peek at how it works and what the results look like. At the very top of the page, you’ll hear the audio snippets of different people speaking. Their real photo is just for your reference; Speech2Face recreated their portrait based only on a three-second recording of their voice. Interestingly enough, the AI seems to work better when the audio clips are longer. The researchers have shared some examples of faces recreated from three versus six seconds of speech. Of course, the results are still far from perfect, but they’re amazing and eerily accurate. Still, the AI sometimes completely misses the point and mixes up the gender, age, and ethnicity of the subject. Even though the algorithm was created for scientific purposes only, the question of privacy has been raised. The team claims that their method “cannot recover the true identity of a person from their voice,” i.e. recreate an exact image of their face. However, if the algorithm becomes so sophisticated that it could recreate super-realistic faces, what impact could it have?
The first thought that comes to my mind is that technology like this could be of immense help to police officers and detectives… Or maybe I’ve just been watching too many crime TV shows. On the other hand, it could have a negative impact on YouTube and TikTok stars who are trying to keep their private lives hidden from followers, so they only do voiceovers and don’t appear in front of the camera. But like every technology, I guess this one could be super-useful in good hands, and dangerous in bad ones. The research was published on arXiv. Source: arXiv

Thursday, April 7, 2022

AI anno 2022


Rens ter Weijde, CEO of kimo.ai, speaks about Artificial Intelligence anno 2022 from a couple of different angles: market, technology, and geopolitics. Recorded at a KIMO webinar hosted on April 5, 2022.

FAIRmat Tutorials 3: Querying the Archive, AI analysis, and the AI-toolkit local app


Luigi Sbailò demonstrates querying the Archive, artificial-intelligence analysis, and (at 23:40) the AI-toolkit local app. More info on the FAIRmat tutorial series is on the website: https://www.fair-di.eu/fairmat-tutorials-home Tutorial 3: https://youtube.com/playlist?list=PLrRaxjvn6FDWZS2Gacn92Jyl8NA-FY1z2

Wednesday, April 6, 2022

Machine Learning Basic Concepts | Artificial Intelligence And Machine Learning Tutorial


AI/ML—short for artificial intelligence (AI) and machine learning (ML)—represents an important evolution in computer science and data processing that is quickly transforming a vast array of industries. As businesses and other organizations undergo digital transformation, they’re faced with a growing tsunami of data that is at once incredibly valuable and increasingly burdensome to collect, process and analyze. New tools and methodologies are needed to manage the vast quantity of data being collected, to mine it for insights and to act on those insights when they’re discovered.

What is AI? Artificial intelligence generally refers to processes and algorithms that are able to simulate human intelligence, including mimicking cognitive functions such as perception, learning and problem solving. Machine learning and deep learning (DL) are subsets of AI. Specific practical applications of AI include modern web search engines, personal assistant programs that understand spoken language, self-driving vehicles and recommendation engines, such as those used by Spotify and Netflix. There are four levels or types of AI—two of which we have achieved, and two which remain theoretical at this stage.

4 types of AI: In order from simplest to most advanced, the four types of AI are reactive machines, limited memory, theory of mind and self-awareness.

Reactive machines are able to perform basic operations based on some form of input. At this level of AI, no "learning" happens—the system is trained to do a particular task or set of tasks and never deviates from that. These are purely reactive machines that do not store inputs, cannot function outside of a particular context, and cannot evolve over time. Examples of reactive machines include most recommendation engines, IBM’s Deep Blue chess AI, and Google’s AlphaGo AI (arguably the best Go player in the world).

Limited memory AI systems are able to store incoming data, along with data about any actions or decisions they make, and then analyze that stored data in order to improve over time. This is where "machine learning" really begins, as limited memory is required for learning to happen. Since limited memory AIs are able to improve over time, these are the most advanced AIs we have developed to date. Examples include self-driving vehicles, virtual voice assistants and chatbots.

Theory of mind is the first of the two more advanced and (currently) theoretical types of AI that we haven’t yet achieved. At this level, AIs would begin to understand human thoughts and emotions, and start to interact with us in a meaningful way. Here, the relationship between human and AI becomes reciprocal, rather than the simple one-way relationship humans have with various less advanced AIs now. The "theory of mind" terminology comes from psychology, and in this case refers to an AI understanding that humans have thoughts and emotions which, in turn, affect the AI’s behavior.

Self-awareness is considered the ultimate goal for many AI developers: AIs with human-level consciousness, aware of themselves as beings in the world with desires and emotions similar to those of humans. As yet, self-aware AIs are purely the stuff of science fiction.

What is ML? In a nutshell, machine learning is a subset of AI that falls within the "limited memory" category, in which the AI (machine) is able to learn and develop over time. There are a variety of machine learning algorithms, with the three primary types being supervised learning, unsupervised learning and reinforcement learning.
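The three paradigms named above can be sketched in miniature. The snippet below is a toy illustration only; all function names, data, and constants are invented for this example and come from no library. It shows what each paradigm consumes: supervised learning fits from labeled examples, unsupervised learning finds structure in unlabeled data, and reinforcement learning improves an estimate from reward feedback.

```python
# Toy sketches of the three primary ML paradigms. Deliberately tiny
# stand-ins, not real algorithms.

def supervised_fit(examples):
    """Supervised: learn a 1-D decision boundary from labeled (x, label) pairs."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    # Place the boundary midway between the two class means.
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def unsupervised_groups(xs):
    """Unsupervised: split unlabeled points at their largest gap (no labels used)."""
    xs = sorted(xs)
    gap_sizes = [(xs[i + 1] - xs[i], i) for i in range(len(xs) - 1)]
    _, split = max(gap_sizes)
    return xs[:split + 1], xs[split + 1:]

def reinforcement_update(value, reward, lr=0.1):
    """Reinforcement: nudge a value estimate toward an observed reward."""
    return value + lr * (reward - value)

boundary = supervised_fit([(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)])  # 5.0
low, high = unsupervised_groups([8.0, 1.0, 9.0, 2.0])  # [1.0, 2.0], [8.0, 9.0]
value = 0.0
for _ in range(50):
    value = reinforcement_update(value, reward=1.0)  # converges toward 1.0
```

Note how only the supervised function ever sees labels, and only the reinforcement update ever sees a reward; that division of inputs is the essential difference between the three types.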

Adobe’s New AI: Next Level Cat Videos! 🐈


❤️ Check out Cohere and sign up for free today: https://ift.tt/qV0SCtH 📝 The paper "GANgealing GAN-Supervised Dense Visual Alignment" is available here: https://ift.tt/NVFKvpm Note that this work is a collaboration between Adobe Research, UC Berkeley, CMU and MIT CSAIL. Try it!: - https://ift.tt/khuo7dK - https://ift.tt/Mmn7kWB ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://ift.tt/r1ag6sR - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/r1ag6sR Thumbnail background image credit: https://ift.tt/WPtm2rp Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu The background is an illustration. Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/IrtVoXW Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/wEo24DI Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/BHwerYn #Adobe

Tuesday, April 5, 2022

Item-to-item recommendation and sequential recommendation


Learn about item-to-item recommendation and sequential recommendation, two popular retrieval models for TensorFlow Recommenders, with Developer Advocate Wei Wei. Resources: Item-to-item recommendation → https://goo.gle/3x6vGuU Recommending movies: retrieval using a sequential model → https://goo.gle/3DGhn1a Recurrent Neural Networks with Top-k Gains for Session-based Recommendations → https://goo.gle/36V48hi Chapters: 0:00 - Introduction 0:24 - Item-to-item recommendation 1:14 - Sequential recommendation 3:45 - Recap Watch more Building recommendation systems with TensorFlow → https://goo.gle/3Bi8NUS Subscribe to TensorFlow → https://goo.gle/TensorFlow product: TensorFlow - TensorFlow Recommenders; fullname: Wei Wei;
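The item-to-item idea from the video can be sketched without TensorFlow at all: items live in a shared embedding space, and "similar items" are the nearest neighbors of a query item by dot-product score. The item names and vectors below are made up for illustration; in TensorFlow Recommenders the embeddings would be learned from interaction data rather than hand-written.

```python
# Minimal item-to-item retrieval: score every candidate against a query item
# by embedding dot product and return the top-k. Hand-picked toy vectors
# stand in for learned embeddings.

ITEM_EMBEDDINGS = {
    "heat":      [0.9, 0.1, 0.0],  # action
    "die_hard":  [0.8, 0.2, 0.1],  # action
    "notebook":  [0.0, 0.9, 0.2],  # romance
    "toy_story": [0.1, 0.3, 0.9],  # animation
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def similar_items(query_item, k=2):
    """Rank all other items by dot-product similarity to the query item."""
    q = ITEM_EMBEDDINGS[query_item]
    scored = sorted(
        ((dot(q, v), name) for name, v in ITEM_EMBEDDINGS.items()
         if name != query_item),
        reverse=True,
    )
    return [name for _, name in scored[:k]]
```

For a viewer who just watched "heat", `similar_items("heat", k=1)` surfaces the other action title; a sequential model, the second technique in the video, would instead condition on the whole recent watch history rather than a single query item.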

Monday, April 4, 2022

The Weird and Wonderful World of AI Art (w/ Author Jack Morris)


#aiart #deeplearning #clip Since the release of CLIP, the world of AI art has seen an unprecedented level of acceleration in what's possible to do. Whereas image generation had previously been mostly in the domain of scientists, now a community of professional artists, researchers, and amateurs are sending around colab notebooks and sharing their creations via social media. How did this happen? What is going on? And where do we go from here? Jack Morris and I attempt to answer some of these questions, following his blog post "The Weird and Wonderful World of AI Art" (linked below). OUTLINE: 0:00 - Intro 2:30 - How does one get into AI art? 5:00 - Deep Dream & Style Transfer: the early days of art in deep learning 10:50 - The advent of GANs, ArtBreeder and TikTok 19:50 - Lacking control: Pre-CLIP art 22:40 - CLIP & DALL-E 30:20 - The shift to shared colabs 34:20 - Guided diffusion models 37:20 - Prompt engineering for art models 43:30 - GLIDE 47:00 - Video production & Disco Diffusion 48:40 - Economics, money, and NFTs 54:15 - What does the future hold for AI art? Blog post: https://ift.tt/RoNQjGJ Jack's Blog: https://jxmo.io/ Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/fw2MyVG BitChute: https://ift.tt/0MgSzQn LinkedIn: https://ift.tt/qsGiZP7 BiliBili: https://ift.tt/97VaGI0 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/PtHplKY Patreon: https://ift.tt/rth7xKj Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Sunday, April 3, 2022

Neural Network Tutorial: Day 2 Difference between Machine Learning and Neural Network


This 28-day tutorial series is created to make you familiar with the fundamental concepts of neural networks and deep learning in the simplest terms. In this tutorial, we cover the basic difference between machine learning and deep learning. Join our Discord community for more updates and a free masterclass on the latest tools and technologies: Discord server: https://discord.gg/ruWdHuScT4 LinkedIn: https://www.linkedin.com/company/letthedataconfess/ Join the waiting list for our upcoming live training program on Deep Learning to get an additional discount: https://www.letthedataconfess.com/training/ Want to join a free masterclass? Sign up here: https://www.letthedataconfess.com/free-masterclass/

Author Interview - Improving Intrinsic Exploration with Language Abstractions


#reinforcementlearning #ai #explained This is an interview with Jesse Mu, first author of the paper. Original Paper Review: https://youtu.be/NeGJAUSQEJI Exploration is one of the oldest challenges for Reinforcement Learning algorithms, with no clear solution to date. Especially in environments with sparse rewards, agents face significant challenges in deciding which parts of the environment to explore further. Providing intrinsic motivation in the form of a pseudo-reward is one common remedy, but it often relies on hand-crafted heuristics and can lead to deceptive dead-ends. This paper proposes using language descriptions of encountered states as a way of assessing novelty. In two procedurally generated environments, the authors demonstrate the usefulness of language: it is inherently concise and abstract, which lends itself well to this task. OUTLINE: 0:00 - Intro 0:55 - Paper Overview 4:30 - Aren't you just adding extra data? 9:35 - Why are you splitting up the AMIGo teacher? 13:10 - How do you train the grounding network? 16:05 - What about causally structured environments? 17:30 - Highlights of the experimental results 20:40 - Why is there so much variance? 22:55 - How much does it matter that we are testing in a video game? 27:00 - How does novelty interface with the goal specification? 30:20 - The fundamental problems of exploration 32:15 - Are these algorithms subject to catastrophic forgetting? 34:45 - What current models could bring language to other environments? 40:30 - What does it take in terms of hardware? 43:00 - What problems did you encounter during the project? 46:40 - Where do we go from here? Paper: https://ift.tt/4M0gHZn Abstract: Reinforcement learning (RL) agents are particularly hard to train when rewards are sparse. One common solution is to use intrinsic rewards to encourage agents to explore their environment.
However, recent intrinsic exploration methods often use state-based novelty measures which reward low-level exploration and may not scale to domains requiring more abstract skills. Instead, we explore natural language as a general medium for highlighting relevant abstractions in an environment. Unlike previous work, we evaluate whether language can improve over existing exploration methods by directly extending (and comparing to) competitive intrinsic exploration baselines: AMIGo (Campero et al., 2021) and NovelD (Zhang et al., 2021). These language-based variants outperform their non-linguistic forms by 45-85% across 13 challenging tasks from the MiniGrid and MiniHack environment suites. Authors: Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah Goodman, Tim Rocktäschel, Edward Grefenstette Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/r42ckGn BitChute: https://ift.tt/nFTmqpL LinkedIn: https://ift.tt/JiPFkHb BiliBili: https://ift.tt/zLrP8o3 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/qLFCcwH Patreon: https://ift.tt/mQX8pga Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
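The core intuition, rewarding states whose *language description* is novel rather than the raw state itself, can be illustrated with a simple count-based bonus. This is a deliberate simplification: the paper's actual variants (L-AMIGo, L-NovelD) plug language into the AMIGo and NovelD algorithms, not a bare count bonus like the hypothetical class below.

```python
import math
from collections import Counter

class LanguageNoveltyBonus:
    """Toy intrinsic reward: descriptions seen often earn a smaller bonus."""

    def __init__(self):
        self.counts = Counter()

    def intrinsic_reward(self, description):
        self.counts[description] += 1
        # Classic count-based decay: the bonus shrinks as a description recurs.
        return 1.0 / math.sqrt(self.counts[description])

bonus = LanguageNoveltyBonus()
r_first = bonus.intrinsic_reward("agent picks up the key")  # novel description
r_again = bonus.intrinsic_reward("agent picks up the key")  # seen before, smaller
r_new = bonus.intrinsic_reward("agent opens the door")      # novel again
```

Because many distinct low-level states share one description, and hence one count, the bonus abstracts over irrelevant pixel-level detail, which is exactly the property of language the paper exploits.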

Saturday, April 2, 2022

Waymo's AI Recreates San Francisco From 2.8 Million Photos! 🚘


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2x6OKa5 ❤️ Their mentioned post is available here (Thank you Soumik Rakshit!): https://ift.tt/jX3Scn4 📝 The paper "Block-NeRF Scalable Large Scene Neural View Synthesis" from #Waymo is available here: https://ift.tt/JWmPSXE ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://ift.tt/UAxMq3Y - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Paul F, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://ift.tt/UAxMq3Y Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/An6qCXG Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/OG514MB Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/njWkdrT #BlockNeRF

Explaining Machine Learning to Kids, with Dale Lane


Dale Lane, a developer for IBM, demonstrates free tools from IBM, MIT, Mozilla, and Google that can be used by children to learn about machine learning through hands-on creative activities. See also Dale's web site: https://machinelearningforkids.co.uk/ Host: Eyal Wirsansky

Improving Intrinsic Exploration with Language Abstractions (Machine Learning Paper Explained)


#reinforcementlearning #ai #explained Exploration is one of the oldest challenges for Reinforcement Learning algorithms, with no clear solution to date. Especially in environments with sparse rewards, agents face significant challenges in deciding which parts of the environment to explore further. Providing intrinsic motivation in the form of a pseudo-reward is one common remedy, but it often relies on hand-crafted heuristics and can lead to deceptive dead-ends. This paper proposes using language descriptions of encountered states as a way of assessing novelty. In two procedurally generated environments, the authors demonstrate the usefulness of language: it is inherently concise and abstract, which lends itself well to this task. OUTLINE: 0:00 - Intro 1:10 - Paper Overview: Language for exploration 5:40 - The MiniGrid & MiniHack environments 7:00 - Annotating states with language 9:05 - Baseline algorithm: AMIGo 12:20 - Adding language to AMIGo 22:55 - Baseline algorithm: NovelD and Random Network Distillation 29:45 - Adding language to NovelD 31:50 - Aren't we just using extra data? 34:55 - Investigating the experimental results 40:45 - Final comments Paper: https://ift.tt/hJB79dv Abstract: Reinforcement learning (RL) agents are particularly hard to train when rewards are sparse. One common solution is to use intrinsic rewards to encourage agents to explore their environment. However, recent intrinsic exploration methods often use state-based novelty measures which reward low-level exploration and may not scale to domains requiring more abstract skills. Instead, we explore natural language as a general medium for highlighting relevant abstractions in an environment. Unlike previous work, we evaluate whether language can improve over existing exploration methods by directly extending (and comparing to) competitive intrinsic exploration baselines: AMIGo (Campero et al., 2021) and NovelD (Zhang et al., 2021).
These language-based variants outperform their non-linguistic forms by 45-85% across 13 challenging tasks from the MiniGrid and MiniHack environment suites. Authors: Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah Goodman, Tim Rocktäschel, Edward Grefenstette Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/r42ckGn BitChute: https://ift.tt/nFTmqpL LinkedIn: https://ift.tt/JiPFkHb BiliBili: https://ift.tt/zLrP8o3 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/qLFCcwH Patreon: https://ift.tt/mQX8pga Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n