A resource of free, step-by-step video how-to guides to get you started with machine learning.
Wednesday, March 23, 2022
Tutorial 4: HBM System and Architecture for AI applications
Speakers: Manish Jain and Nikhil Raghavendra Rao (Rambus)

Tutorial Abstract: Artificial intelligence/machine learning (AI/ML) is impacting every industry and touching the lives of everyone. AI/ML is evolving at a lightning pace: training capability is growing at a rate of 10X per year, driving rapid improvements in every aspect of computing hardware and software. Memory bandwidth is one critical area of focus enabling the continued growth of AI/ML.

HBM memory is the ideal solution for the high bandwidth requirements of AI/ML training, and its benefits make it the superior choice. Performance is outstanding, and the higher implementation and manufacturing costs can be traded off against savings in board space and power. In data center environments, its lower power translates to reduced heat loads in an environment where cooling is often one of the top operating costs.

This tutorial starts with a discussion of how AI/ML drives high memory bandwidth requirements and creates the need for innovative memory solutions (2.5D). It then describes the HBM system components and their roles, and reviews HBM performance evolution and how data rates have increased with each generation. It covers HBM memory features and the key points of HBM PHY design and architecture, with an emphasis on HBM3 designs, and addresses system design, signal integrity, and power integrity considerations for an HBM memory subsystem.

About the Speakers: Manish Jain is Senior Director of Engineering at Rambus, Bangalore, where he is responsible for SerDes and memory PHY development. He has spent 26 years in the semiconductor industry, more than 17 of them at Rambus, and has designed non-volatile memories, analog circuits, and high-speed analog mixed-signal circuits. His research interests include high-speed mixed-signal CMOS circuit design, transmitter and receiver design, equalization, PLL/DLL design, and signal integrity analysis. He received his B.E. degree in Electronics and Communication Engineering from Maulana Azad National Institute of Technology, Bhopal, in 1995.

Nikhil Raghavendra Rao is a Principal Engineer, Architecture at Rambus Bangalore, responsible for memory PHY architecture. He has 17 years of experience in the semiconductor industry and has worked primarily on digital design and verification, SoC design, FPGA prototyping, graphics memory (GDDR) PHY, and HBM PHY architecture. His research interests include memory subsystem architecture, CPU architectures, and computer networking. He holds a Bachelor's degree in Electrical Engineering from NIT Surat and a Master's degree from Manipal University.
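The generation-over-generation data-rate growth mentioned in the abstract can be illustrated with a quick back-of-the-envelope calculation. The sketch below uses the headline JEDEC per-pin data rates for each generation (shipping parts vary, and the exact figures the tutorial covers may differ); every HBM generation keeps the same 1024-bit-wide interface, so peak per-stack bandwidth scales directly with the per-pin rate:

```python
# Peak per-stack bandwidth for each HBM generation:
#   bandwidth (GB/s) = interface width (bits) * per-pin data rate (Gb/s) / 8
HBM_WIDTH_BITS = 1024  # every HBM generation uses a 1024-bit interface

# Headline JEDEC per-pin data rates in Gb/s (illustrative; products vary)
data_rates_gbps = {
    "HBM":   1.0,
    "HBM2":  2.4,
    "HBM2E": 3.6,
    "HBM3":  6.4,
}

def peak_bandwidth_gb_s(pin_rate_gbps, width_bits=HBM_WIDTH_BITS):
    """Peak bandwidth of one HBM stack in GB/s."""
    return width_bits * pin_rate_gbps / 8

for gen, rate in data_rates_gbps.items():
    print(f"{gen:6s}: {peak_bandwidth_gb_s(rate):7.1f} GB/s per stack")
```

Running this shows the roughly 6X jump from first-generation HBM (128 GB/s per stack) to HBM3 (819.2 GB/s per stack) that motivates the PHY and signal-integrity discussion in the tutorial.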