Grokking, Generalization Collapse, and the Dynamics of Training Deep Neural Networks with Charles Martin - #734

Today, we're joined by Charles Martin, founder of Calculation Consulting, to discuss WeightWatcher, an open-source tool for analyzing and improving deep neural networks (DNNs) based on principles from theoretical physics. We explore the foundations of the Heavy-Tailed Self-Regularization (HTSR) theory that underpins it, which combines random matrix theory and renormalization group ideas to uncover deep insights about model training dynamics. Charles walks us through WeightWatcher's ability to detect three distinct learning phases (underfitting, grokking, and generalization collapse) and how its signature "layer quality" metric reveals whether individual layers are underfit, overfit, or optimally tuned. We also dig into the complexities involved in fine-tuning models, the surprising correlation between model optimality and hallucination, the often-underestimated challenges of search relevance, and what those challenges imply for RAG. Finally, Charles shares his insights into real-world applications of generative AI and lessons learned from working in the field. The complete show notes for this episode can be found at https://twimlai.com/go/734.
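
For listeners who want to try the tool, below is a minimal sketch of a typical WeightWatcher run, following the usage pattern documented in the project's README. The pretrained VGG model is an illustrative stand-in for any trained network, and the alpha thresholds in the comments are the HTSR rules of thumb rather than anything specific to this episode.

    import weightwatcher as ww
    import torchvision.models as models

    # Any trained PyTorch (or Keras) model works; a pretrained VGG is used
    # here purely as an example.
    model = models.vgg19_bn(weights="IMAGENET1K_V1")

    # WeightWatcher fits a power law to each layer's empirical spectral
    # density, without needing any training or test data.
    watcher = ww.WeightWatcher(model=model)
    details = watcher.analyze()  # per-layer DataFrame, including the exponent alpha

    # HTSR rule of thumb: alpha roughly between 2 and 6 suggests a
    # well-trained layer; smaller values hint at overfitting, larger
    # ones at underfitting.
    print(details[["layer_id", "alpha"]])
    print(watcher.get_summary(details))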

Episodes (782)

Dynamic Token Merging for Efficient Byte-level Language Models with Julie Kallini - #724

Today, we're joined by Julie Kallini, PhD student at Stanford University, to discuss her recent papers, “MrT5: Dynamic Token Merging for Efficient Byte-level Language Models” and “Mission: Impossible Language Models.”

24 Mar 2025 · 50min

Scaling Up Test-Time Compute with Latent Reasoning with Jonas Geiping - #723

Today, we're joined by Jonas Geiping, research group leader at the ELLIS Institute and the Max Planck Institute for Intelligent Systems, to discuss his recent paper, “Scaling up Test-Time Compute with Latent Reasoning.”

17 Mar 2025 · 58min

Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li - #722

Today, we're joined by Chengzu Li, PhD student at the University of Cambridge, to discuss his recent paper, “Imagine while Reasoning in Space: Multimodal Visualization-of-Thought.” We explore the motivations...

10 Mar 2025 · 42min

Inside s1: An o1-Style Reasoning Model That Cost Under $50 to Train with Niklas Muennighoff - #721

Today, we're joined by Niklas Muennighoff, a PhD student at Stanford University, to discuss his paper, “s1: Simple Test-Time Scaling.” We explore the motivations behind s1, as well as how it compares ...

3 Mar 2025 · 49min

Accelerating AI Training and Inference with AWS Trainium2 with Ron Diamant - #720

Today, we're joined by Ron Diamant, chief architect for Trainium at Amazon Web Services, to discuss hardware acceleration for generative AI and the design and role of the recently released Trainium2 c...

24 Feb 2025 · 1h 7min

π0: A Foundation Model for Robotics with Sergey Levine - #719

Today, we're joined by Sergey Levine, associate professor at UC Berkeley and co-founder of Physical Intelligence, to discuss π0 (pi-zero), a general-purpose robotic foundation model. We dig into the m...

18 Feb 2025 · 52min

AI Trends 2025: AI Agents and Multi-Agent Systems with Victor Dibia - #718

Today, we're joined by Victor Dibia, principal research software engineer at Microsoft Research, to explore the key trends and advancements in AI agents and multi-agent systems shaping 2025 and beyond...

10 Feb 2025 · 1h 44min

Speculative Decoding and Efficient LLM Inference with Chris Lott - #717

Today, we're joined by Chris Lott, senior director of engineering at Qualcomm AI Research, to discuss accelerating large language model inference. We explore the challenges presented by the LLM encoding...

4 Feb 2025 · 1h 16min
