Advancing NLP with Project Debater w/ Noam Slonim - #495

Today we’re joined by Noam Slonim, the principal investigator of Project Debater at IBM Research. In our conversation with Noam, we explore the history of Project Debater, the first AI system that can “debate” humans on complex topics. We also dig into the evolution of the project, the culmination of 7 years and over 50 research papers, which eventually became a Nature cover paper, “An Autonomous Debating System,” detailing the system in its entirety. Finally, Noam details many of the underlying capabilities of Debater, including the relationship between system preparation and training, evidence detection, detecting the quality of arguments, narrative generation, the use of conventional NLP methods like entity linking, and much more. The complete show notes for this episode can be found at twimlai.com/go/495.

Episodes (775)

Chronos: Learning the Language of Time Series with Abdul Fatir Ansari - #685

Today we're joined by Abdul Fatir Ansari, a machine learning scientist at AWS AI Labs in Berlin, to discuss his paper, "Chronos: Learning the Language of Time Series." Fatir explains the challenges of leveraging pre-trained language models for time series forecasting. We explore the advantages of Chronos over statistical models, as well as its promising results in zero-shot forecasting benchmarks. Finally, we address critiques of Chronos, the ongoing research to improve synthetic data quality, and the potential for integrating Chronos into production systems. The complete show notes for this episode can be found at twimlai.com/go/685.

20 May 2024 · 43 min

Powering AI with the World's Largest Computer Chip with Joel Hestness - #684

Today we're joined by Joel Hestness, principal research scientist and lead of the core machine learning team at Cerebras. We discuss Cerebras’ custom silicon for machine learning, Wafer Scale Engine 3, and how the latest version of the company’s single-chip platform for ML has evolved to support large language models. Joel shares how WSE3 differs from other AI hardware solutions, such as GPUs, TPUs, and AWS’ Inferentia, and talks through the homogeneous design of the WSE chip and its memory architecture. We discuss software support for the platform, including support by open source ML frameworks like PyTorch, and support for different types of transformer-based models. Finally, Joel shares some of the research his team is pursuing to take advantage of the hardware's unique characteristics, including weight-sparse training, optimizers that leverage higher-order statistics, and more. The complete show notes for this episode can be found at twimlai.com/go/684.

13 May 2024 · 55 min

AI for Power & Energy with Laurent Boinot - #683

Today we're joined by Laurent Boinot, power and utilities lead for the Americas at Microsoft, to discuss the intersection of AI and energy infrastructure. We discuss the many challenges faced by current power systems in North America and the role AI is beginning to play in driving efficiencies in areas like demand forecasting and grid optimization. Laurent shares a variety of examples along the way, including some of the ways utility companies are using AI to ensure secure systems, interact with customers, navigate internal knowledge bases, and design electrical transmission systems. We also discuss the future of nuclear power, and why electric vehicles might play a critical role in American energy management. The complete show notes for this episode can be found at twimlai.com/go/683.

7 May 2024 · 49 min

Controlling Fusion Reactor Instability with Deep Reinforcement Learning with Aza Jalalvand - #682

Today we're joined by Azarakhsh (Aza) Jalalvand, a research scholar at Princeton University, to discuss his work using deep reinforcement learning to control plasma instabilities in nuclear fusion reactors. Aza explains how his team developed a model to detect and avoid a fatal plasma instability called ‘tearing mode’. Aza walks us through the process of collecting and pre-processing the complex diagnostic data from fusion experiments, training the models, and deploying the controller algorithm on the DIII-D fusion research reactor. He shares insights from developing the controller and discusses the future challenges and opportunities for AI in enabling stable and efficient fusion energy production. The complete show notes for this episode can be found at twimlai.com/go/682.

29 Apr 2024 · 42 min

GraphRAG: Knowledge Graphs for AI Applications with Kirk Marple - #681

Today we're joined by Kirk Marple, CEO and founder of Graphlit, to explore the emerging paradigm of "GraphRAG," or Graph Retrieval Augmented Generation. In our conversation, Kirk digs into the GraphRAG architecture and how Graphlit uses it to offer a multi-stage workflow for ingesting, processing, retrieving, and generating content using LLMs (like GPT-4) and other Generative AI tech. He shares how the system performs entity extraction to build a knowledge graph and how graph, vector, and object storage are integrated in the system. We dive into how the system uses “prompt compilation” to improve the results it gets from Large Language Models during generation. We conclude by discussing several use cases the approach supports, as well as future agent-based applications it enables. The complete show notes for this episode can be found at twimlai.com/go/681.

22 Apr 2024 · 47 min

Teaching Large Language Models to Reason with Reinforcement Learning with Alex Havrilla - #680

Today we're joined by Alex Havrilla, a PhD student at Georgia Tech, to discuss "Teaching Large Language Models to Reason with Reinforcement Learning." Alex discusses the role of creativity and exploration in problem solving and explores the opportunities presented by applying reinforcement learning algorithms to the challenge of improving reasoning in large language models. Alex also shares his research on the effect of noise on language model training, highlighting the robustness of LLM architecture. Finally, we delve into the future of RL, and the potential of combining language models with traditional methods to achieve more robust AI reasoning. The complete show notes for this episode can be found at twimlai.com/go/680.

16 Apr 2024 · 46 min

Localizing and Editing Knowledge in LLMs with Peter Hase - #679

Today we're joined by Peter Hase, a fifth-year PhD student in the University of North Carolina NLP lab. We discuss "scalable oversight" and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how interpretability researchers probe a model's weight matrices, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing open-source foundation models. The complete show notes for this episode can be found at twimlai.com/go/679.

8 Apr 2024 · 49 min

Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping - #678

Today we're joined by Jonas Geiping, a research group leader at the ELLIS Institute, to explore his paper: "Coercing LLMs to Do and Reveal (Almost) Anything". Jonas explains how neural networks can be exploited, highlighting the risk of deploying LLM agents that interact with the real world. We discuss the role of open models in enabling security research, the challenges of optimizing over certain constraints, and the ongoing difficulties in achieving robustness in neural networks. Finally, we delve into the future of AI security, and the need for a better approach to mitigate the risks posed by optimized adversarial attacks. The complete show notes for this episode can be found at twimlai.com/go/678.

1 Apr 2024 · 48 min
