A resource of free, step-by-step video how-to guides to get you started with machine learning.
Sunday, June 21, 2020
SIREN: Implicit Neural Representations with Periodic Activation Functions (Paper Explained)
Implicit neural representations are created when a neural network is used to represent a signal as a function. SIRENs are a particular type of implicit neural representation that can be applied to a variety of signals, such as images, sound, or 3D shapes. This is an interesting departure from regular machine learning and required me to think differently.

OUTLINE:
0:00 - Intro & Overview
2:15 - Implicit Neural Representations
9:40 - Representing Images
14:30 - SIRENs
18:05 - Initialization
20:15 - Derivatives of SIRENs
23:05 - Poisson Image Reconstruction
28:20 - Poisson Image Editing
31:35 - Shapes with Signed Distance Functions
45:55 - Paper Website
48:55 - Other Applications
50:45 - Hypernetworks over SIRENs
54:30 - Broader Impact

Paper: https://ift.tt/2NcBNpo
Website: https://ift.tt/2CeIu7T

Abstract: Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives. We analyze Siren activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine Sirens with hypernetworks to learn priors over the space of Siren functions.

Authors: Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ift.tt/3dJpBrR
BitChute: https://ift.tt/38iX6OV
Minds: https://ift.tt/37igBpB
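For readers who want to see what this looks like concretely, below is a minimal SIREN sketch in PyTorch. This is my own illustration, not the authors' reference code: it uses sin(omega_0 * (Wx + b)) layers with omega_0 = 30 and the initialization bounds proposed in the paper, and the `coords`/`pixels` tensors are hypothetical random stand-ins for real image data.

```python
# Minimal SIREN sketch (assumption: omega_0 = 30 and the paper's init bounds;
# training data below is a random stand-in for a real image).
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(omega_0 * (Wx + b))."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features                       # U(-1/n, 1/n)
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0  # hidden layers
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class Siren(nn.Module):
    """Maps coordinates (here: 2D pixel locations) to signal values (RGB)."""
    def __init__(self, in_features=2, hidden=256, depth=3, out_features=3):
        super().__init__()
        layers = [SineLayer(in_features, hidden, is_first=True)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
        layers.append(nn.Linear(hidden, out_features))  # final layer is linear
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

# "Representing" a signal = fitting the network to (coordinate, value) pairs.
coords = torch.rand(4096, 2) * 2 - 1   # hypothetical coords in [-1, 1]^2
pixels = torch.rand(4096, 3)           # hypothetical RGB targets
model = Siren()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(500):
    loss = ((model(coords) - pixels) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Because sine is smooth, derivatives w.r.t. the input coordinates are
# well-behaved; the Poisson and SDF experiments supervise these derivatives
# instead of (or in addition to) the raw signal values.
coords.requires_grad_(True)
grads = torch.autograd.grad(model(coords).sum(), coords, create_graph=True)[0]
```

A ReLU network dropped into the same training loop would capture the low frequencies of the signal but miss fine detail, and its derivatives are piecewise constant; that is exactly the failure mode the periodic activations and the careful initialization are meant to address.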
- Unlike traditional programs, which follow concrete rules written in languages such as Java or C++, machine learning is a system that infers the rules themselves from data. But what does machine learning code actually consist of? Part one of Machine Learning: Zero to Hero answers that question, with guide Cha...
- Using GPUs in TensorFlow, TensorBoard in notebooks, finding new datasets, & more! (#AskTensorFlow) [Collection] In a special live ep...
- #minecraft #neuralnetwork #backpropagation I built an analog neural network in vanilla Minecraft without any mods or command blocks. The n...
- Using More Data - Deep Learning with Neural Networks and TensorFlow part 8 [Collection] Welcome to part eight of the Deep Learning with ...
- Linear Algebra Tutorial on the Determinant of a Matrix 🤖 Welcome to our Linear Algebra for AI tutorial! This tutorial is designed for both...
- STUMPY is a robust and scalable Python library for computing a matrix profile, which can create valuable insights about our time series. STU...
- ❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "Alias-Free GAN" is available here: h...
- Why are humans so good at video games? Maybe it's because a lot of games are designed with humans in mind. What happens if we change t...
- Visual scenes are often composed of sets of independent objects. Yet, current vision models make no assumptions about the nature of the p...
- #ai #attention #transformer #deeplearning Transformers are famous for two things: their superior performance and their insane requirements...