Mamba, Mamba-2 and Post-Transformer Architectures for Generative AI with Albert Gu - #693

Today, we're joined by Albert Gu, assistant professor at Carnegie Mellon University, to discuss his research on post-transformer architectures for multi-modal foundation models, with a focus on state-space models in general and Albert’s recent Mamba and Mamba-2 papers in particular. We dig into the efficiency of the attention mechanism and its limitations in handling high-resolution perceptual modalities, and the strengths and weaknesses of transformer architectures relative to alternatives for various tasks. We also examine the role of tokenization and patching in transformer pipelines, emphasizing how abstraction and semantic relationships between tokens underpin the model's effectiveness, and explore how this relates to the debate between handcrafted pipelines and end-to-end architectures in machine learning. Additionally, we touch on the evolving landscape of hybrid models that incorporate elements of attention and state, the significance of state update mechanisms in model adaptability and learning efficiency, and the contribution and adoption of state-space models like Mamba and Mamba-2 in academia and industry. Lastly, Albert shares his vision for advancing foundation models across diverse modalities and applications. The complete show notes for this episode can be found at https://twimlai.com/go/693.

Episodes (779)

Stable Diffusion and LLMs at the Edge with Jilei Hou - #633

Today we’re joined by Jilei Hou, a VP of Engineering at Qualcomm Technologies. In our conversation with Jilei, we focus on the emergence of generative AI, and how they've worked towards providing thes...

12 Jun 2023 · 40 min

Modeling Human Behavior with Generative Agents with Joon Sung Park - #632

Today we’re joined by Joon Sung Park, a PhD Student at Stanford University. Joon shares his passion for creating AI systems that can solve human problems and his work on the recent paper Generative Ag...

5 Jun 2023 · 46 min

Towards Improved Transfer Learning with Hugo Larochelle - #631

Today we’re joined by Hugo Larochelle, a research scientist at Google DeepMind. In our conversation with Hugo, we discuss his work on transfer learning, understanding the capabilities of deep learning...

29 May 2023 · 38 min

Language Modeling With State Space Models with Dan Fu - #630

Today we’re joined by Dan Fu, a PhD student at Stanford University. In our conversation with Dan, we discuss the limitations of state space models in language modeling and the search for alternative b...

22 May 2023 · 28 min

Building Maps and Spatial Awareness in Blind AI Agents with Dhruv Batra - #629

Today we continue our coverage of ICLR 2023 joined by Dhruv Batra, an associate professor at Georgia Tech and research director of the Fundamental AI Research (FAIR) team at Meta. In our conversation,...

15 May 2023 · 43 min

AI Agents and Data Integration with GPT and LLaMa with Jerry Liu - #628

Today we’re joined by Jerry Liu, co-founder and CEO of LlamaIndex. In our conversation with Jerry, we explore the creation of LlamaIndex, a centralized interface to connect your external data with t...

8 May 2023 · 41 min

Hyperparameter Optimization through Neural Network Partitioning with Christos Louizos - #627

Today we kick off our coverage of the 2023 ICLR conference joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper Hyperparameter...

1 May 2023 · 33 min

Are LLMs Overhyped or Underappreciated? with Marti Hearst - #626

Today we’re joined by Marti Hearst, Professor at UC Berkeley. In our conversation with Marti, we explore the intricacies of AI language models and their usefulness in improving efficiency, but also the...

24 Apr 2023 · 37 min
