AI Agents: Substance or Snake Oil with Arvind Narayanan - #704

Today, we're joined by Arvind Narayanan, professor of Computer Science at Princeton University, to discuss his recent works, AI Agents That Matter and AI Snake Oil. In "AI Agents That Matter", we explore the range of agentic behaviors, the challenges in benchmarking agents, and the "capability and reliability gap", which creates risks when deploying AI agents in real-world applications. We also discuss the importance of verifiers as a technique for safeguarding agent behavior. We then dig into the AI Snake Oil book, which uncovers examples of problematic and overhyped claims in AI. Arvind shares various use cases of failed applications of AI, outlines a taxonomy of AI risks, and shares his insights on AI's catastrophic risks. We also touch on different approaches to LLM-based reasoning, his views on tech policy and regulation, and his work on CORE-Bench, a benchmark designed to measure AI agents' accuracy in computational reproducibility tasks. The complete show notes for this episode can be found at https://twimlai.com/go/704.

Episodes (779)

GraphRAG: Knowledge Graphs for AI Applications with Kirk Marple - #681

Today we're joined by Kirk Marple, CEO and founder of Graphlit, to explore the emerging paradigm of "GraphRAG," or Graph Retrieval Augmented Generation. In our conversation, Kirk digs into the GraphRA...

22 April 2024 · 47 min

Teaching Large Language Models to Reason with Reinforcement Learning with Alex Havrilla - #680

Today we're joined by Alex Havrilla, a PhD student at Georgia Tech, to discuss "Teaching Large Language Models to Reason with Reinforcement Learning." Alex discusses the role of creativity and explora...

16 April 2024 · 46 min

Localizing and Editing Knowledge in LLMs with Peter Hase - #679

Today we're joined by Peter Hase, a fifth-year PhD student at the University of North Carolina NLP lab. We discuss "scalable oversight", and the importance of developing a deeper understanding of how ...

8 April 2024 · 49 min

Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping - #678

Today we're joined by Jonas Geiping, a research group leader at the ELLIS Institute, to explore his paper: "Coercing LLMs to Do and Reveal (Almost) Anything". Jonas explains how neural networks can be...

1 April 2024 · 48 min

V-JEPA, AI Reasoning from a Non-Generative Architecture with Mido Assran - #677

Today we’re joined by Mido Assran, a research scientist at Meta’s Fundamental AI Research (FAIR). In this conversation, we discuss V-JEPA, a new model being billed as “the next step in Yann LeCun's vi...

25 March 2024 · 47 min

Video as a Universal Interface for AI Reasoning with Sherry Yang - #676

Today we’re joined by Sherry Yang, senior research scientist at Google DeepMind and a PhD student at UC Berkeley. In this interview, we discuss her new paper, "Video as the New Language for Real-World...

18 March 2024 · 49 min

Assessing the Risks of Open AI Models with Sayash Kapoor - #675

Today we’re joined by Sayash Kapoor, a Ph.D. student in the Department of Computer Science at Princeton University. Sayash walks us through his paper: "On the Societal Impact of Open Foundation Models...

11 March 2024 · 40 min

OLMo: Everything You Need to Train an Open Source LLM with Akshita Bhagia - #674

Today we’re joined by Akshita Bhagia, a senior research engineer at the Allen Institute for AI. Akshita joins us to discuss OLMo, a new open source language model with 7 billion and 1 billion variants...

4 March 2024 · 32 min
