Saturday, October 31, 2020

Simulating Dragons Under Cloth Sheets! 🐲


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their mentioned post is available here: https://ift.tt/3oJkQEu 📝 The paper "Local Optimization for Robust Signed Distance Field Collision" is available here: https://ift.tt/3oLB5Ri 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Friday, October 30, 2020

T9. Convolutional Neural Networks (CNN, ConvNet) - Xavier Giró - UPC GDSA 2020


https://ift.tt/32OODCw The objective of the course is the development of deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most commonly used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.

Let's Build an Operating System! LIVE


Operating Systems are the foundation upon which we build software programs, but how do they work? In this episode of the "Let's Build an X" game show, we'll learn how Operating Systems work as I demo a few simple examples in assembly language. No previous assembly experience required! In under an hour, we'll build a tiny OS and discuss ways Machine Learning can optimize it. You'll have to answer 3 timed multiple-choice questions throughout the stream, and I'll declare a winner at the end. The winner receives a cash prize during the Victory Royale award ceremony (I'll also freestyle and sing for them). Subscribe for more educational videos!

Wednesday, October 28, 2020

Twitter sentiment analysis by Benson Ruan - Made With TensorFlow.js


In our 6th episode of Made With TensorFlow.js we head to Australia to join Benson Ruan, who has used Natural Language Processing to understand the sentiment of tweets and is able to visualize the results. Now we can monitor user sentiment in real time as people react to any given topic. Hosted by Jason Mayes, Developer Advocate for TensorFlow.js. Twitter Sentiment Analysis by Benson Ruan → https://goo.gle/2E7i9di Watch more of Made With TensorFlow.js → http://goo.gle/made-with-tfjs Subscribe to TensorFlow to stay up to date → https://goo.gle/TensorFlow #TensorFlow #TensorFlowJS #MadeWithTFJS #JavaScript #CreativeCoding #WebDev #NLP #NaturalLanguageProcessing #Sentiment #TextAnalysis #TwitterAP

Neural Networks from Scratch (NNFS) in Print!


Get the book: https://nnfs.io twitters: twitter.com/sentdex twitter.com/daniel_kukiela Channel membership: https://www.youtube.com/channel/UCfzlCWGWYyIQ0aLC5w48gBQ/join Discord: https://ift.tt/2AZiVqD Support the content: https://ift.tt/2qsKFOO Twitter: https://twitter.com/sentdex Instagram: https://ift.tt/2J4Oa4h Facebook: https://ift.tt/1OI3cwB Twitch: https://ift.tt/2pcWGaq

T7. Convolutional Layers - Xavier Giró - UPC ESEIAAT 2020


https://ift.tt/32OODCw The objective of the course is the development of deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most commonly used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.

Tuesday, October 27, 2020

Finally, Deformation Simulation... in Real Time! 🚗


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their report about a previous paper is available here: https://ift.tt/2EjYXZH 📝 The paper "Detailed Rigid Body Simulation with Extended Position Based Dynamics" is available here: https://ift.tt/3jG1SLm Wish to see and hear the sound synthesis paper? - Our video: https://www.youtube.com/watch?v=rskdLEl05KI - Paper: https://ift.tt/37NzlB4 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

GDSA Convolutional Layers II (online) - 2020-10-27 at 08:14 GMT-7


https://ift.tt/32OODCw The objective of the course is the development of deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most commonly used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.

Self-Training improves Pre-Training for Natural Language Understanding


This video explains a new paper that shows the benefits of Self-Training after Language Modeling to improve the performance of RoBERTa-Large. The paper goes on to show Self-Training gains in Knowledge Distillation and Few-Shot Learning as well. They also introduce an interesting unlabeled-data filtering algorithm, SentAugment, which improves performance and reduces the computational cost of this kind of self-training loop. Thanks for watching! Please Subscribe! Paper Links: Paper Link: https://ift.tt/2JcWhzt Distributed Representations of Words and Phrases: https://ift.tt/1PAG0Kt Rethinking Pre-training and Self-training: https://ift.tt/2ULTfFp Don't Stop Pretraining: https://ift.tt/2WEdjdt Universal Sentence Encoder: https://ift.tt/2uwxVZJ Common Crawl Corpus: https://ift.tt/1St4m0m Fairseq: https://ift.tt/2K3FbUs BERT: https://ift.tt/2pMXn84 Noisy Student: https://ift.tt/2Q8GfYV POET: https://ift.tt/2xUnFwp PET - Small Language Models are Also Few-Shot Learners: https://ift.tt/3mGNGV1
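For illustration, here is a minimal sketch of the self-training loop described above, assuming sentence embeddings are already computed; the `teacher`/`student` objects and their scikit-learn-style methods are hypothetical stand-ins, not the paper's code.

```python
# Minimal self-training sketch (hypothetical helper names; not the paper's code).
# A fine-tuned teacher pseudo-labels unlabeled sentences retrieved by embedding
# similarity (SentAugment-style filtering), and a student is trained on the result.
import numpy as np

def cosine_sim(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def self_train(teacher, student, task_embs, unlabeled_texts, unlabeled_embs,
               top_k=10000, conf_threshold=0.9):
    # 1) SentAugment-style retrieval: keep only unlabeled sentences whose
    #    embedding is close to the task's mean sentence embedding.
    scores = cosine_sim(task_embs.mean(axis=0, keepdims=True), unlabeled_embs)[0]
    keep = np.argsort(-scores)[:top_k]
    candidates = [unlabeled_texts[i] for i in keep]

    # 2) The teacher pseudo-labels the retrieved sentences; keep confident ones.
    probs = teacher.predict_proba(candidates)          # (n, num_classes)
    labels = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= conf_threshold
    pseudo = [(t, l) for t, l, c in zip(candidates, labels, confident) if c]

    # 3) The student (e.g. another RoBERTa) is fine-tuned on the pseudo-labels.
    student.fit([t for t, _ in pseudo], [l for _, l in pseudo])
    return student
```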

Monday, October 26, 2020

Rethinking Attention with Performers (Paper Explained)


#ai #research #attention Transformers have huge memory and compute requirements because they construct an Attention matrix, which grows quadratically in the size of the input. The Performer is a model that uses random positive orthogonal features to construct an unbiased estimator of the Attention matrix and obtains an arbitrarily good approximation in linear time! The method generalizes beyond attention and opens the door to the next generation of deep learning architectures. OUTLINE: 0:00 - Intro & Outline 6:15 - Quadratic Bottleneck in Attention Mechanisms 10:00 - Decomposing the Attention Matrix 15:30 - Approximating the Softmax Kernel 24:45 - Different Choices, Different Kernels 28:00 - Why the Naive Approach does not work! 31:30 - Better Approximation via Positive Features 36:55 - Positive Features are Infinitely Better 40:10 - Orthogonal Features are Even Better 43:25 - Experiments 49:20 - Broader Impact Statement 50:00 - Causal Attention via Prefix Sums 52:10 - Code 53:50 - Final Remarks & Conclusion Paper: https://ift.tt/2J91GYk Code: https://ift.tt/2HsNvgo Blog: https://ift.tt/2FRMNrY Abstract: We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can be also used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers. Authors: Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
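To make the core trick concrete, here is a rough NumPy sketch of the positive random-feature idea behind FAVOR+ (single head, i.i.d. rather than orthogonal projections); it illustrates the technique and is not the authors' implementation.

```python
# Rough sketch of FAVOR+ positive random features for softmax attention.
# Shapes: Q, K are (n, d), V is (n, d_v). Orthogonalizing the random
# projections, multiple heads and causal masking are omitted for brevity.
import numpy as np

def positive_features(x, W):
    # phi(x) = exp(W x - ||x||^2 / 2) / sqrt(m)  -- always positive
    m = W.shape[0]
    return np.exp(x @ W.T - 0.5 * np.sum(x**2, axis=-1, keepdims=True)) / np.sqrt(m)

def performer_attention(Q, K, V, m=256, seed=0):
    d = Q.shape[-1]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((m, d))           # random projections (iid here)
    Qp = positive_features(Q / d**0.25, W)    # (n, m)
    Kp = positive_features(K / d**0.25, W)    # (n, m)
    KV = Kp.T @ V                             # (m, d_v) -- linear in n
    normalizer = Qp @ Kp.sum(axis=0)          # (n,)
    return (Qp @ KV) / normalizer[:, None]    # approx. softmax(QK^T/sqrt(d)) V
```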

Inside TensorFlow: Building ML infra


In this episode of Inside TensorFlow, Software Engineer Mingsheng Hong presents how ML infrastructure is built. Mingsheng shares some of the research and engineering problems in building machine learning infrastructure. Add the Inside TensorFlow playlist → https://goo.gle/Inside-TensorFlow Subscribe to the TensorFlow channel → https://goo.gle/TensorFlow

Neural Encoders and Decoders for Multimedia - Xavier Giro - ICMR 2020 Tutorial


https://ift.tt/34vSrtA Deep neural networks have boosted the convergence of multimedia data analytics into a unified framework shared by practitioners in natural language, vision and speech. Image captioning, lip reading or video sonorization are some of the first applications of a new and exciting field of research exploiting the generalization properties of deep neural representations. This tutorial first reviews the basic neural architectures used to encode and decode vision, text and audio, and then reviews the models that have successfully translated information across modalities.
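As a toy illustration of the encode/decode pattern the tutorial covers, here is a minimal PyTorch sketch of an image-captioning model: a small CNN encodes the image into a context vector and a GRU decodes a caption. Dimensions and the vocabulary size are made up for illustration.

```python
# Toy encoder-decoder sketch: CNN image encoder -> GRU caption decoder.
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, vocab_size=1000, emb=128, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(                  # image -> feature vector
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden),
        )
        self.embed = nn.Embedding(vocab_size, emb)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images, captions):
        h0 = self.encoder(images).unsqueeze(0)         # (1, B, hidden)
        x = self.embed(captions)                       # (B, T, emb)
        y, _ = self.decoder(x, h0)                     # decode conditioned on the image
        return self.out(y)                             # (B, T, vocab_size)

model = TinyCaptioner()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 7)))
```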

Saturday, October 24, 2020

Beautiful Elastic Simulations, Now Much Faster!


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their mentioned post is available here: https://ift.tt/3jxEks0 📝 The paper "IQ-MPM: An Interface Quadrature Material Point Method for Non-sticky Strongly Two-Way Coupled Nonlinear Solids and Fluids" is available here: https://ift.tt/3kuK6vB 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3 Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Friday, October 23, 2020

Let's Build Game Bots! LIVE


Unity's ML Agents toolkit has provided beginners with an easy, visual introduction to Machine Learning for a few years now. In this 2nd episode in my weekly live game show series, I'm going to use Unity to implement 2 algorithms from 2 research papers out of Berkeley and DeepMind's AI labs this month (October 2020). As I code, I'll ask you a series of relevant questions and you'll be able to answer them live via a link that I'll share during the stream. At the end, I'll announce one winner and they'll receive a cryptocurrency reward + I'll perform a song for them. I want each of these streams to be increasingly more interactive, higher stakes, more reward, and more fun. So I hope to see you there! Bookmark this link, set a reminder, game on! :)

Wednesday, October 21, 2020

Touch-Less by Anders Jessen - Made With TensorFlow.js


In our 5th episode of Made With TensorFlow.js we head to Denmark to join Anders Jessen, who has been investigating powerful touchless interfaces powered by our TensorFlow.js hand pose model. Finally, our sci-fi-like interaction dreams can become reality! Hosted by Jason Mayes, Developer Advocate for TensorFlow.js. Touch-Less Interfaces Live Demo by Anders Jessen → https://goo.gle/35JUVW0 Watch more of Made With TensorFlow.js → http://goo.gle/made-with-tfjs Subscribe to TensorFlow → https://goo.gle/TensorFlow #TensorFlow #TensorFlowJS #MadeWithTFJS #JavaScript #CreativeCoding #WebDev #HandPose #HCI #Touchless #HandTracking #HumanComputerInteraction #FutureTechnolog

Tuesday, October 20, 2020

This AI Creates An Adorable Baby DiCaprio Image!


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their report for this paper is available here: https://ift.tt/2FKcpqK 📝 The paper "In-Domain GAN Inversion for Real Image Editing" is available here: https://ift.tt/3m5tKdj Check out the research group's other works, there is lots of cool stuff there: https://ift.tt/2FKchYi 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Vokenization Explained!


This video explains a new approach to visually supervise language models that achieves performance gains on language-only tasks like the GLUE benchmark and SQuAD question answering. This is done by constructing a token-image matching (vokens) and classifying the corresponding tokens with a weakly supervised loss function. Thanks for watching! Please Subscribe! Paper Links: Vokenization: https://ift.tt/3lYmiAy ImageBERT: https://ift.tt/398zzAe VilBERT: https://ift.tt/3dKAWbD LXMERT: https://ift.tt/31lEAE0 UNITER: https://ift.tt/31r8Du6 Visual Genome: https://ift.tt/1lVDTtg 12-in-1: Multi-task Vision and Language Representation Learning: https://ift.tt/2H7VZcD How Context Affects Language Models' Factual Predictions: https://ift.tt/3o2SrZA Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines: https://ift.tt/3dBurrA ConVIRT: https://ift.tt/2IRJsKV Climbing towards NLU: https://ift.tt/2IRJsKV Weak Supervision: A New Programming Paradigm for Machine Learning: https://ift.tt/2Tt0Bim Thanks for watching! Chapters 0:00 Introduction 1:16 Idea of Vision-Language Models 2:40 Overview of Vokenization 3:38 Voken Examples 4:45 Weak Supervision 6:00 Image Retrieval for Supervision 7:47 What is Grounded Language? 8:25 Issues with Existing Datasets 10:28 Exciting Results for Vision-Language! 13:07 Multi-Modal Learning 14:45 On Meaning, Form, and Understanding 16:04 Information Retrieval in NLP
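To show what the voken-classification objective roughly looks like in code, here is a small PyTorch sketch: each token's hidden state predicts the ID of its matched image (voken) as an auxiliary loss next to masked language modeling. Sizes and variable names are illustrative, not the paper's.

```python
# Sketch of a voken-classification auxiliary loss (illustrative sizes).
import torch
import torch.nn as nn

hidden_size, num_vokens = 768, 50000
voken_head = nn.Linear(hidden_size, num_vokens)

def voken_loss(token_hidden, voken_ids, ignore_index=-100):
    # token_hidden: (B, T, hidden) hidden states from the language model
    # voken_ids:    (B, T) image IDs from the token-image matching (-100 = no voken)
    logits = voken_head(token_hidden)                      # (B, T, num_vokens)
    return nn.functional.cross_entropy(
        logits.reshape(-1, num_vokens), voken_ids.reshape(-1),
        ignore_index=ignore_index)

# total_loss = mlm_loss + voken_loss(hidden_states, voken_ids)
```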

Sunday, October 18, 2020

Who Wants to Be a Code Millionaire? ft. Kamil Debowski


Competitive Programming is a mind sport that involves solving timed math and programming puzzles. Kamil Debowski is one of the world's top competitive programmers. In this spinoff of "Who Wants to Be a Millionaire" I invited Kamil to solve 5 programming questions in order to win a million points (not dollars), which can be traded in for a signed copy of my book Decentralized Applications and an unreleased song. The programming questions from CodeChef normally require code input, but I've modified them to be multiple choice by creating several possible code snippets. The math and machine learning questions are my own. Follow along with Kamil and see if you can make it to the end without getting an incorrect answer, enjoy! Please Subscribe! It means a lot to me. Twitter: https://twitter.com/sirajraval Instagram: https://ift.tt/2GjSKOL Facebook: https://ift.tt/2hCqHdY Linkedin: https://ift.tt/2NCjBnW Website: www.sirajraval.com Email: hello@sirajraval.com Kamil's Channel: @Errichto Learn Machine Learning in 3 Months: https://www.youtube.com/watch/Cr6VqTRO1v0 Learn Data Science in 3 Months: https://www.youtube.com/watch?v=9rDhY1P3YLA Check out any of the free playlists on my channels if you want to learn machine learning & other computer science topics in a really easy and fast way.

Linear and Logistic Regression with PyTorch - Xavier Giró - UPC ESEIAAT 2020


https://ift.tt/32OODCw The objective of the course is the development of deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most commonly used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.
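In the spirit of this lecture's topic, a minimal logistic-regression example in PyTorch (synthetic data, a single linear layer, binary cross-entropy); it is a sketch, not the course's notebook.

```python
# Minimal logistic regression in PyTorch on toy, linearly separable data.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).float().unsqueeze(1)   # toy binary labels

model = nn.Linear(2, 1)                            # logits = Xw + b
loss_fn = nn.BCEWithLogitsLoss()                   # sigmoid + cross-entropy in one op
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(loss.item())  # should approach zero on this toy problem
```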

Saturday, October 17, 2020

LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained)


#ai #research #attention Transformers, having already captured NLP, have recently started to take over the field of Computer Vision. So far, the size of images as input has been challenging, as the Transformers' Attention Mechanism's memory requirements grow quadratically with the input size. LambdaNetworks offer a way around this requirement and capture long-range interactions without the need to build expensive attention maps. They reach a new state of the art on ImageNet and compare favorably to both Transformers and CNNs in terms of efficiency. OUTLINE: 0:00 - Introduction & Overview 6:25 - Attention Mechanism Memory Requirements 9:30 - Lambda Layers vs Attention Layers 17:10 - How Lambda Layers Work 31:50 - Attention Re-Appears in Lambda Layers 40:20 - Positional Encodings 51:30 - Extensions and Experimental Comparisons 58:00 - Code Paper: https://ift.tt/3liippU Lucidrains' Code: https://ift.tt/3dyrzMl Abstract: We present a general framework for capturing long-range interactions between an input and structured contextual information (e.g. a pixel surrounded by other pixels). Our method, called the lambda layer, captures such interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. Lambda layers are versatile and may be implemented to model content and position-based interactions in global, local or masked contexts. As they bypass the need for expensive attention maps, lambda layers can routinely be applied to inputs of length in the thousands, enabling their applications to long sequences or high-resolution images. The resulting neural network architectures, LambdaNetworks, are computationally efficient and simple to implement using direct calls to operations available in modern neural network libraries. Experiments on ImageNet classification and COCO object detection and instance segmentation demonstrate that LambdaNetworks significantly outperform their convolutional and attentional counterparts while being more computationally efficient. Finally, we introduce LambdaResNets, a family of LambdaNetworks, that considerably improve the speed-accuracy tradeoff of image classification models. LambdaResNets reach state-of-the-art accuracies on ImageNet while being ∼4.5x faster than the popular EfficientNets on modern machine learning accelerators. Authors: Anonymous Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
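A minimal PyTorch sketch of the idea, assuming the "content lambda" only: the context is summarized into a single linear map that is applied to every query, so no N×M attention map is ever materialized. Position lambdas and multi-query heads from the paper are omitted; shapes are illustrative.

```python
# Content-only lambda layer sketch: linear in the context length.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentLambda(nn.Module):
    def __init__(self, dim, dim_k=16, dim_v=64):
        super().__init__()
        self.to_q = nn.Linear(dim, dim_k, bias=False)
        self.to_k = nn.Linear(dim, dim_k, bias=False)
        self.to_v = nn.Linear(dim, dim_v, bias=False)

    def forward(self, x, context):
        # x: (B, N, dim) query inputs, context: (B, M, dim)
        q = self.to_q(x)                          # (B, N, k)
        k = F.softmax(self.to_k(context), dim=1)  # softmax over context positions
        v = self.to_v(context)                    # (B, M, v)
        lam = torch.einsum('bmk,bmv->bkv', k, v)  # (B, k, v): the content lambda
        return torch.einsum('bnk,bkv->bnv', q, lam)

layer = ContentLambda(dim=32)
out = layer(torch.randn(2, 100, 32), torch.randn(2, 100, 32))  # (2, 100, 64)
```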

Minecraft, but on a Quantum Computer


Support this channel & check out Qiskit: http://qisk.it/jabrils Quantum computers are right around the corner, so the question must be asked: what could a game like Minecraft look like on a quantum computer? We teamed up with @Qiskit, whose public quantum computers we used to give this a shot. Qiskit Textbook: https://bit.ly/31cYS2E Qiskit Medium Article: https://bit.ly/3kaj6kT SUBSCRIBE FOR MORE: http://jabrils.com/yt WISHLIST MY VIDEO GAME: https://ift.tt/33NgHFz SUPPORT ON PATREON: https://ift.tt/2pZACkg JOIN DISCORD: https://ift.tt/2QkDa9O Please follow me on social networks: twitter: https://twitter.com/jabrils_ instagram: https://ift.tt/2QNVYvI REMEMBER TO ALWAYS FEED YOUR CURIOSITY
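For flavor, a tiny Qiskit sketch (2020-era API) of the kind of primitive such a project can build on: a Hadamard "quantum coin flip" whose measured bits could drive in-game randomness such as block placement. This is an assumption-laden illustration, not the project's actual code.

```python
# Quantum coin flip in Qiskit (2020-era API); bits could seed in-game randomness.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(1, 1)
qc.h(0)                      # put the qubit into an equal superposition of 0 and 1
qc.measure(0, 0)             # collapse it to a classical bit

backend = Aer.get_backend('qasm_simulator')   # a real IBM Q backend could be used instead
counts = execute(qc, backend, shots=100).result().get_counts()
print(counts)                # roughly {'0': ~50, '1': ~50}
```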

Friday, October 16, 2020

This Is What Simulating a 100 Million Particles Looks Like!


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their mentioned instrumentation is available here: https://ift.tt/3dqc0G9 Our Instagram page with the slow-motion footage is available here: https://ift.tt/2KBCNkT 📝 The paper "A Massively Parallel and Scalable Multi-GPU Material Point Method " is available here: https://ift.tt/3hwFWl2 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://ift.tt/2TnVBd3 Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Thursday, October 15, 2020

Loss Functions - Xavier Giró - UPC ESEIAAT Terrassa 2020


The objective of the course is the development of deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most commonly used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.

Tuesday, October 13, 2020

Remove This! ✂️ AI-Based Video Completion is Amazing!


❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/35NkCT7 📝 The paper "Flow-edge Guided Video Completion" is available here: https://ift.tt/2QU8TfB 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

A World of Tensors with PyTorch - Luis Salgueiro - UPC TelecomBCN Barcelona 2020


https://ift.tt/3iQn41h Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
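A few of the tensor basics the session title points to, as a quick illustrative sketch (not the session's notebook): creating tensors, broadcasting, and moving work to a GPU when one is present.

```python
# PyTorch tensor basics: creation, broadcasting, matmul, device placement.
import torch

a = torch.arange(6, dtype=torch.float32).reshape(2, 3)
b = torch.ones(3)
c = a + b                    # broadcasting: (2, 3) + (3,) -> (2, 3)
d = a @ a.T                  # matrix multiplication -> (2, 2)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(c.to(device), d.to(device), sep='\n')
```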

Sunday, October 11, 2020

Descending through a Crowded Valley -- Benchmarking Deep Learning Optimizers (Paper Explained)


#ai #research #optimization Deep Learning famously gives rise to very complex, non-linear optimization problems that cannot be solved analytically. Therefore, the choice of a suitable optimization algorithm can often make or break the training of a Deep Neural Network. Yet, the literature is full of hundreds of different algorithms, each claiming to be superior, and selecting one of them is mostly done based on popular opinion or anecdotes. This paper investigates 14 of the most popular optimizers in a standardized benchmark and, even though there is no clear winner, it can give some recommendations as a result. OUTLINE: 0:00 - Introduction & Overview 2:15 - The Overwhelming Amount of Optimizers 5:50 - Compared Optimizers 6:50 - Default Parameters & Tuning Distribution 13:10 - Deep Learning Problems Considered 16:45 - Tuning on Single Seeds 23:15 - Results & Interpretation 34:00 - Learning Rate Schedules & Noise 36:10 - Conclusions & Comments Paper: https://ift.tt/33J4Jy6 Raw Results: https://ift.tt/3nAthS5 Abstract: Choosing the optimizer is considered to be among the most crucial design decisions in deep learning, and it is not an easy one. The growing literature now lists hundreds of optimization methods. In the absence of clear theoretical guidance and conclusive empirical evidence, the decision is often made based on anecdotes. In this work, we aim to replace these anecdotes, if not with a conclusive ranking, then at least with evidence-backed heuristics. To do so, we perform an extensive, standardized benchmark of more than a dozen particularly popular deep learning optimizers while giving a concise overview of the wide range of possible choices. Analyzing almost 35,000 individual runs, we contribute the following three points: (i) Optimizer performance varies greatly across tasks. (ii) We observe that evaluating multiple optimizers with default parameters works approximately as well as tuning the hyperparameters of a single, fixed optimizer. (iii) While we can not discern an optimization method clearly dominating across all tested tasks, we identify a significantly reduced subset of specific algorithms and parameter choices that generally lead to competitive results in our experiments. This subset includes popular favorites and some lesser-known contenders. We have open-sourced all our experimental results, making them directly available as challenging and well-tuned baselines. This allows for more meaningful comparisons when evaluating novel optimization methods without requiring any further computational efforts. Authors: Robin M. Schmidt, Frank Schneider, Philipp Hennig Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
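A toy version of the benchmark's "defaults" protocol, for intuition only: run several torch.optim optimizers with their default hyperparameters on one small problem and compare final losses. The real benchmark spans many tasks, seeds and tuning budgets; this sketch is not the paper's code.

```python
# Compare a few optimizers with default hyperparameters on a tiny regression task.
import functools
import torch
import torch.nn as nn

def run(optim_cls, steps=300, seed=0):
    torch.manual_seed(seed)
    X, y = torch.randn(256, 10), torch.randn(256, 1)
    model = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = optim_cls(model.parameters())     # default hyperparameters on purpose
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

optimizers = {
    'SGD': functools.partial(torch.optim.SGD, lr=0.01),  # SGD has no default lr
    'Adam': torch.optim.Adam,
    'RMSprop': torch.optim.RMSprop,
    'Adagrad': torch.optim.Adagrad,
}
for name, opt_cls in optimizers.items():
    print(name, run(opt_cls))
```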

Saturday, October 10, 2020

Enhance! Neural Supersampling is Here! 🔎


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their mentioned post is available here: https://ift.tt/34LpKrg 📝 The paper "Neural Supersampling for Real-time Rendering" is available here: https://ift.tt/2YQahVm https://ift.tt/3iHt7o8 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Automatic Differentiation with PyTorch - Xavier Giró - UPC ESEIAAT 2020


The objective of the course is the development of deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most commonly used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.
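A minimal autograd example matching this session's topic, offered as a sketch rather than the course's material: PyTorch records the operations on tensors with requires_grad=True, and backward() fills in .grad.

```python
# Reverse-mode automatic differentiation in PyTorch.
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
w = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)

y = (w * x).sum() ** 2     # y = (w . x)^2
y.backward()               # backpropagate through the recorded graph

print(x.grad)              # dy/dx = 2 (w . x) * w
print(w.grad)              # dy/dw = 2 (w . x) * x
```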

Multilayer Perceptrons - Xavier Giró - UPC ESEIAAT Terrassa 2020


The objective of the course is the development of deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most commonly used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.

Softmax Regression - Xavier Giró - UPC ESEIAAT Terrassa 2020


The objective of the course is the development of deep neural networks that can solve artificial intelligence problems. These machine learning tools estimate their parameters from training data and an optimization criterion. The course presents the layer types most commonly used in these networks, as well as the most popular optimization algorithms and methodologies. Students will be able to implement them in software, as well as monitor their training and diagnose which actions can improve their performance. The course focuses on applications of deep neural networks related to the management and distribution of audiovisual signals.

Wednesday, October 7, 2020

WashOS and Splat with Charlie Gerard - Made With TensorFlow.js


Our 3rd episode of Made With TensorFlow.js heads to Amsterdam to join Charlie Gerard, a Senior Front End Developer at Netlify, to talk about her latest creations. Join us as Charlie walks us through WashOS - a web-based system that can detect how long you have been washing your hands for - and “splat”, a Fruit Ninja-styled game powered by TensorFlow.js that enables you to use your hands and arms to chop fruit from anywhere you wish! Hosted by Jason Mayes, Developer Advocate for TensorFlow.js. WashOS → https://goo.gle/32HDHHb Splat → https://goo.gle/2ZGc1QQ Watch more episodes of Made With TensorFlow.js → http://goo.gle/made-with-tfjs Subscribe to TensorFlow Youtube Channel → https://goo.gle/TensorFlow #TensorFlow #TensorFlowJS #MadeWithTFJS #JavaScript #CreativeCoding #WebDev #SoundDetection #Gaming #HCI

Retrieval-Augmented Generation (RAG)


This video explains the Retrieval-Augmented Generation (RAG) model! This approach combines Dense Passage Retrieval with a Seq2Seq BART generator. This is tested out on knowledge-intensive tasks like open-domain QA, Jeopardy question generation, and FEVER fact verification. This looks like a really interesting paradigm for building language models that produce factually accurate generations! Thanks for watching! Please Subscribe! Paper Links: Original Paper: https://ift.tt/33bHTyO FB Blog Post (Animation used in Intro): https://ift.tt/3kVI4og HuggingFace RAG description: https://ift.tt/3iIO3eh Billion-scale similarity search with GPUs: https://ift.tt/2m8YPRc Language Models as Knowledge Bases? https://ift.tt/2MTGupS REALM: Retrieval-Augmented Language Models: https://ift.tt/2PeBxr4 Dense Passage Retrieval: https://ift.tt/3npz7FC FEVER: https://ift.tt/2F9QadF Natural Questions: https://ift.tt/3izTD2B TriviaQA: https://ift.tt/2SAI2WE MS MARCO: https://ift.tt/2GrcNuR Thanks for watching! Time Stamps 0:00 Introduction 2:05 Limitations of Language Models 4:10 Algorithm Walkthrough 5:48 Dense Passage Retrieval 7:44 RAG-Token vs. RAG-Sequence 10:47 Off-the-Shelf Models 11:54 Experiment Datasets 15:03 Results vs. T5 16:16 BART vs. RAG - Jeopardy Questions 17:20 Impact of Retrieved Documents z_i 18:53 Ablation Study 20:25 Retrieval Collapse 21:10 Knowledge Graphs as Non-Parametric Memory 21:45 Can we learn better representations for the Document Index? 22:12 How will Efficient Transformers impact this?
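A conceptual NumPy sketch of the RAG-Sequence idea described above: retrieve top-k passages by inner product (the DPR step), then marginalize the generator's sequence probability over those passages. The retriever embeddings and the `gen_prob` function are stand-ins, not the paper's or HuggingFace's API.

```python
# Conceptual RAG-Sequence marginalization: p(y|x) = sum_z p(z|x) p(y|x,z).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rag_sequence_prob(question_emb, passage_embs, gen_prob, answer, k=5):
    # 1) Dense retrieval: inner-product scores, keep the top-k passages.
    scores = passage_embs @ question_emb
    topk = np.argsort(-scores)[:k]
    p_z = softmax(scores[topk])                 # p(z | x) over retrieved passages

    # 2) Stand-in generator scores the answer conditioned on each passage index.
    p_y_given_z = np.array([gen_prob(answer, z) for z in topk])

    # 3) Marginalize over the retrieved passages.
    return float(p_z @ p_y_given_z)
```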

Tuesday, October 6, 2020

This AI Can Deal With Body Shape Variation!


❤️ Check out Weights & Biases and sign up for a free demo here: https://ift.tt/2YuG7Yf ❤️ Their mentioned instrumentation is available here: https://ift.tt/39jebIZ 📝 The paper "Learning Body Shape Variation in Physics-based Characters" is available here: https://ift.tt/3jDZV2v 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Sunday, October 4, 2020

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained)


#ai #research #transformers Transformers are Ruining Convolutions. This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks in image recognition tasks, which are classically tasks where CNNs excel. In this video, I explain the architecture of the Vision Transformer (ViT), the reason why it works better and rant about why double-blind peer review is broken. OUTLINE: 0:00 - Introduction 0:30 - Double-Blind Review is Broken 5:20 - Overview 6:55 - Transformers for Images 10:40 - Vision Transformer Architecture 16:30 - Experimental Results 18:45 - What does the Model Learn? 21:00 - Why Transformers are Ruining Everything 27:45 - Inductive Biases in Transformers 29:05 - Conclusion & Comments Paper (Under Review): https://ift.tt/3d5ytIR BiT Paper: https://ift.tt/3cO0aop ImageNet-ReaL Paper: https://ift.tt/31jTcoo My Video on BiT (Big Transfer): https://youtu.be/k1GOF2jmX7c My Video on Transformers: https://youtu.be/iDulhoQ2pro My Video on BERT: https://youtu.be/-9evrZnBorM My Video on ResNets: https://youtu.be/GWt6Fu05voI Abstract: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer can perform very well on image classification tasks when applied directly to sequences of image patches. When pre-trained on large amounts of data and transferred to multiple recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc), Vision Transformer attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train. Authors: Anonymous / Under Review Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
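Here is a small PyTorch sketch of the "16x16 words" step only: split an image into patches, linearly embed them, prepend a class token and add positional embeddings. The Transformer encoder itself is omitted, and the sizes follow typical ViT-Base-like choices rather than the paper's exact configuration.

```python
# Patch embedding sketch for a Vision-Transformer-style model.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, img=224, patch=16, dim=768):
        super().__init__()
        self.num_patches = (img // patch) ** 2
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # one linear map per patch
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches + 1, dim))

    def forward(self, x):                                  # x: (B, 3, 224, 224)
        p = self.proj(x).flatten(2).transpose(1, 2)        # (B, 196, dim)
        cls = self.cls.expand(x.shape[0], -1, -1)
        return torch.cat([cls, p], dim=1) + self.pos       # (B, 197, dim)

tokens = PatchEmbed()(torch.randn(2, 3, 224, 224))  # feed this to a Transformer encoder
```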

Saturday, October 3, 2020

Training more effective learned optimizers, and using them to train themselves (Paper Explained)


#ai #research #optimization Optimization is still the domain of hand-crafted, simple algorithms. An ML engineer not only has to pick a suitable one for their problem but also often has to do a grid search over various hyper-parameters. This paper proposes to learn a single, unified optimization algorithm, given not by an equation, but by an LSTM-based neural network, to act as an optimizer for any deep learning problem, and ultimately to optimize itself. OUTLINE: 0:00 - Intro & Outline 2:20 - From Hand-Crafted to Learned Features 4:25 - Current Optimization Algorithm 9:40 - Learned Optimization 15:50 - Optimizer Architecture 22:50 - Optimizing the Optimizer using Evolution Strategies 30:30 - Task Dataset 34:00 - Main Results 36:50 - Implicit Regularization in the Learned Optimizer 41:05 - Generalization across Tasks 41:40 - Scaling Up 45:30 - The Learned Optimizer Trains Itself 47:20 - Pseudocode 49:45 - Broader Impact Statement 52:55 - Conclusion & Comments Paper: https://ift.tt/3cpeVP4 Abstract: Much as replacing hand-designed features with learned functions has revolutionized how we solve perceptual tasks, we believe learned algorithms will transform how we train models. In this work we focus on general-purpose learned optimizers capable of training a wide variety of problems with no user-specified hyperparameters. We introduce a new, neural network parameterized, hierarchical optimizer with access to additional features such as validation loss to enable automatic regularization. Most learned optimizers have been trained on only a single task, or a small number of tasks. We train our optimizers on thousands of tasks, making use of orders of magnitude more compute, resulting in optimizers that generalize better to unseen tasks. The learned optimizers not only perform well, but learn behaviors that are distinct from existing first order optimizers. For instance, they generate update steps that have implicit regularization and adapt as the problem hyperparameters (e.g. batch size) or architecture (e.g. neural network width) change. Finally, these learned optimizers show evidence of being useful for out of distribution tasks such as training themselves from scratch. Authors: Luke Metz, Niru Maheswaranathan, C. Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ift.tt/3dJpBrR BitChute: https://ift.tt/38iX6OV Minds: https://ift.tt/37igBpB Parler: https://ift.tt/38tQU7C LinkedIn: https://ift.tt/2Zo6XRA If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://ift.tt/2DuKOZ3 Patreon: https://ift.tt/390ewRH Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
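A very small sketch in the spirit of learned optimizers: an LSTM is applied coordinate-wise to each parameter's gradient and emits the update, instead of a hand-designed rule like SGD or Adam. The paper's hierarchical architecture, extra input features and evolution-strategies training are all omitted; this is an illustration, not the paper's method.

```python
# Coordinate-wise LSTM optimizer sketch (illustrative only).
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)   # input: one gradient coordinate
        self.head = nn.Linear(hidden, 1)     # output: one update coordinate

    def step(self, params, grads, state):
        g = torch.cat([g.reshape(-1, 1) for g in grads])      # (P, 1) all coordinates
        h, c = self.cell(g, state)                             # state=None starts at zeros
        update = self.head(h)                                  # (P, 1) learned update
        new_params, i = [], 0
        for p in params:
            n = p.numel()
            new_params.append(p + update[i:i + n].reshape(p.shape))
            i += n
        return new_params, (h, c)

# one illustrative meta-step on a single parameter tensor
opt_net = LSTMOptimizer()
w = torch.randn(5, requires_grad=True)
loss = (w ** 2).sum()
loss.backward()
(w,), state = opt_net.step([w], [w.grad], None)   # w updated by the learned rule
```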

Beautiful Results From 30 Years Of Light Transport Simulation ☀️


❤️ Check out Lambda here and sign up for their GPU Cloud: https://ift.tt/35NkCT7 📝 The paper "Specular Manifold Sampling for Rendering High-Frequency Caustics and Glints" is available here: https://ift.tt/2Gy4mNW My rendering course is available here, and is free for everyone: https://ift.tt/2rdtvDu The PostDoc call is available here - https://ift.tt/2EX5BFP Mitsuba Renderer: https://ift.tt/1Jo76pN Also check out Blender and Cycles! - https://ift.tt/1IscRzJ Credits: The test scenes use textures from CC0 Textures and cgbookcase, and are lit by environment maps courtesy of HDRI Haven and Paul Debevec. Kettle: Blend Swap user PrinterKiller. 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://ift.tt/2icTBUb Károly Zsolnai-Fehér's links: Instagram: https://ift.tt/2KBCNkT Twitter: https://twitter.com/twominutepapers Web: https://ift.tt/1NwkG9m

Thursday, October 1, 2020

How I Gamify Learning


In the past few months, I've significantly improved my game development skills, athletic capability, & friendships. These are not easy goals, but I've codified my daily routine into 3 games that make achieving them not only possible, but really fun. In this video tutorial, I'm going to explain how each of my games works so that you too can learn HOW to learn joyfully and efficiently, as I do. Subscribe for more educational videos! It means so much to me. TWITTER: https://twitter.com/sirajraval INSTAGRAM: https://ift.tt/2GjSKOL Katie's Unreal Playlist: https://www.youtube.com/watch?v=iTwxuahe5B4&list=PLHSMxXn4v-aGhuRxxSBVPqykMjDiRyGrJ Piotr's Symbol Recognizer Plugin: https://ift.tt/30pqj94 My Learning Plan (also below): https://ift.tt/3njcfHY Game 1: Avoid the Hooks Goal - Dedicate 60 hours/week to learning a subject Feedback System - Pomodoro Score Rules - Find or Build the idea gateway - Dedicated time-boxed discovery time - No watching ads - Let go of resentment - No overstimulated salt, sugar, fat, oil - No excess sex or drugs (I am not perfect on this one yet lol) Tools: - Sticky Notes (Life Planning, 60 hours a week of doing a project) - Simplify.so (ELI5 explanations) - AdBlockers - App Usage Chart, add to homescreen - Grayscale mode - Pen and Notebook - Pomodoro Timer Game 2: Contact Free Circuit Goal - Achieve mastery of Echo or any other Contact Free Sport (eBike, app for competitive cardio) Feedback System - Win Rate Rules - 3x 20 bicep pullups, 3x 20 arm curls, 3x 20 lower back raises, 3x 30 upper back raises, 3x 20 leg ups - 3x 20 tricep pullups, 3x 20 chest press, 3x 20 dips, 3x 20 pushups, 3x 20 leg ups - 3x 20 squats, 3x 20 squats, 3x 20 goblin squat, 3x 20 quad flexes, 3x 20 leg ups - 3x 20 bicep pullups, 3x 20 tricep pullups, 3x 20 dips, 3x 30 pushups, 3x 20 leg ups, farmer walks - 15 minute HIIT 2x - 1 hour echo daily - Avocados, beets, carrots, apples, kale, protein, peanut butter, cinnamon smoothie - Grow Vegetables - 4 eggs - Drink glass of water every morning - Meat + salsa + wheat tortilla taco Tools - Any Computing Device - 2 water weights - Pullup set Game 3: Explore the Space Goal - Strengthen a social bond Feedback System - Enjoyment Rules: - Fortnite - Pokemon Go - Geocaching Tools - Any Computing Device Song Used: I'm Not Okay by My Chemical Romance & Holiday by Green Day

I Made An Actual Infinite Stairs, It's not what you think..


The other day I ran into a sketch of an "impossible staircase", but nothing is impossible, not even making infinite staircases 😉 Play with the Infinite Staircase Web Tool: https://bit.ly/3jqB7LC Download Infinite Staircase Model for 3D Printing: https://bit.ly/36mNzrL SUBSCRIBE FOR MORE: http://jabrils.com/yt WISHLIST MY VIDEO GAME: https://ift.tt/33NgHFz SUPPORT ON PATREON: https://ift.tt/2pZACkg JOIN DISCORD: https://ift.tt/2QkDa9O Please follow me on social networks: twitter: https://twitter.com/jabrils_ instagram: https://ift.tt/2QNVYvI REMEMBER TO ALWAYS FEED YOUR CURIOSITY