CEDA VDL Series: Abu Sebastian, "In-Memory Computing for a More Efficient and General AI"
Title: In-Memory Computing for a More Efficient and General AI
Abstract: In this lecture, I will talk about two overarching research goals we have been pursuing for several years. The first goal is to explore the limits of energy per operation when running AI algorithms such as deep learning (DL). In-memory computing (IMC) is a non-von Neumann compute paradigm that keeps alive the promise of 1 fJ per operation for DL. Attributes such as synaptic efficacy and plasticity can be implemented in place by exploiting the physical attributes of memory devices such as phase-change memory. I will provide an overview of the most advanced IMC chips based on phase-change memory integrated in a 14 nm CMOS technology node. The second goal is to develop algorithmic and architectural building blocks for a more efficient and general AI. I will introduce the paradigm of neuro-vector-symbolic architectures (NVSA), which could address problems such as continual learning and visual abstract reasoning. I will also showcase the role of IMC in realizing some of the critical compute blocks for NVSA.

Jan 13, 2023 10:00 AM in Eastern Time (US and Canada)

This webinar is over; registration is closed. If you have any questions, please contact the webinar host.