Building the Internet of Agents with Vijoy Pandey - #737

Today we’re joined by Vijoy Pandey, SVP and general manager at Outshift by Cisco, to discuss a foundational challenge for the enterprise: how do we make specialized agents from different vendors collaborate effectively? As companies like Salesforce, Workday, and Microsoft all develop their own agentic systems, integrating them creates a complex, probabilistic, and noisy environment, a stark contrast to the deterministic APIs of the past. Vijoy introduces Cisco's vision for an "Internet of Agents," a platform to manage this new reality, and its open-source implementation, AGNTCY. We explore the four phases of agent collaboration (discovery, composition, deployment, and evaluation) and dive deep into the communication stack, from syntactic protocols like A2A, ACP, and MCP to the deeper semantic challenge of creating a shared understanding between agents. Vijoy also unveils SLIM (Secure Low-Latency Interactive Messaging), a novel transport layer designed to make agent-to-agent communication quantum-safe, real-time, and efficient for multi-modal workloads. The complete show notes for this episode can be found at https://twimlai.com/go/737.
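
To make the layering concrete, here is a minimal, purely hypothetical sketch of what a syntactic agent-to-agent envelope might look like. The field names, agent identifiers, and `send` helper below are illustrative assumptions, not the actual A2A, ACP, MCP, or SLIM wire formats; the point is that the envelope is the easy, syntactic part, while agreeing on what `payload` means is the semantic challenge Vijoy describes.

```python
# Hypothetical sketch of a syntactic agent-to-agent envelope.
# Field names and identifiers are illustrative, not a real protocol.
import json
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class AgentMessage:
    sender: str        # e.g. a vendor-specific agent identifier
    recipient: str
    intent: str        # what the sender wants done ("summarize", "lookup", ...)
    payload: dict      # semantic content; shared meaning is the hard part
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def send(message: AgentMessage) -> str:
    """Serialize to JSON; a real transport (e.g. SLIM) would also handle
    encryption, streaming, and delivery guarantees."""
    return json.dumps(asdict(message))

msg = AgentMessage(
    sender="crm-agent.example.com",
    recipient="hr-agent.example.com",
    intent="lookup_employee",
    payload={"query": "Who owns the Acme renewal?"},
)
print(send(msg))
```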

Episodes (776)

What’s Next in LLM Reasoning? with Roland Memisevic - #646

Today we’re joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the significance of improving grounding in AI—including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a fitness coach trained on a visually grounded large language model, which has served as a platform for Roland’s research into neural reasoning, as well as recent research that explores topics like visual grounding for large language models and state-augmented architectures for AI agents. The complete show notes for this episode can be found at twimlai.com/go/646.

11 Sep 2023 · 59min

Is ChatGPT Getting Worse? with James Zou - #645

Today we’re joined by James Zou, an assistant professor at Stanford University. In our conversation with James, we explore how ChatGPT’s behavior has changed over the last few months. We discuss the issues that can arise from inconsistencies in generative AI models, how he tested the performance of the March 2023 and June 2023 versions of both GPT-3.5 and GPT-4 across a variety of tasks, and the possible reasons behind the declining performance of these models. James also shares his thoughts on how surgical AI editing akin to CRISPR could transform LLMs and AI systems, and how monitoring tools can help track behavioral changes in these models. Finally, we discuss James’ recent paper on pathology image analysis using Twitter data, in which he explores the challenges of collecting large medical datasets and details the model’s architecture, training, and evaluation process. The complete show notes for this episode can be found at twimlai.com/go/645.

4 Sep 2023 · 42min

Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the universality of representations across biological and artificial neural networks, and how shared principles of efficiency yield consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which builds on the Fourier transform and its relation to group theory, and the use of the bispectrum to achieve invariance in deep neural networks. Along the way, we cover how geometric deep learning extends the ideas behind CNNs to other domains, the similarities in the fundamental structure of artificial and biological neural networks, and how applying similar constraints leads their solutions to converge. The complete show notes for this episode can be found at twimlai.com/go/644.
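
For intuition, the classical bispectrum that the paper generalizes is easy to verify numerically. The following standalone sketch (our illustration, not code from the paper) checks that the bispectrum of a 1-D signal is unchanged by circular translation, the kind of group invariance Bispectral Neural Networks build into deep networks.

```python
# Numerical check: the bispectrum B(f1, f2) = X(f1) X(f2) conj(X(f1 + f2)),
# with X the DFT of x, is invariant to circular translation of x.
import numpy as np

def bispectrum(x: np.ndarray) -> np.ndarray:
    X = np.fft.fft(x)
    n = len(X)
    f = np.arange(n)
    return X[:, None] * X[None, :] * np.conj(X[(f[:, None] + f[None, :]) % n])

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
shifted = np.roll(x, 17)  # act on x by circular translation

# The translation phases cancel exactly, so the two bispectra agree.
print(np.allclose(bispectrum(x), bispectrum(shifted)))  # True
```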

28 Aug 2023 · 45min

Inverse Reinforcement Learning Without RL with Gokul Swamy - #643

Today we’re joined by Gokul Swamy, a Ph.D. student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sat down with Gokul to discuss his accepted papers at the event, leading off with “Inverse Reinforcement Learning without Reinforcement Learning.” In this paper, Gokul explores the challenges and benefits of inverse reinforcement learning and the advantages it holds for various applications. Next up, we explore the “Complementing a Policy with a Different Observation Space” paper, which applies causal inference techniques to accurately estimate sampling balance and make decisions based on limited observed features. Finally, we touch on “Learning Shared Safety Constraints from Multi-task Demonstrations,” which centers on learning safety constraints from demonstrations using an inverse reinforcement learning approach. The complete show notes for this episode can be found at twimlai.com/go/643.

21 Aug 2023 · 33min

Explainable AI for Biology and Medicine with Su-In Lee - #642

Today we’re joined by Su-In Lee, a professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. In our conversation, Su-In details her talk from the ICML 2023 Workshop on Computational Biology, which focuses on developing explainable AI techniques for computational biology and clinical medicine. Su-In discusses the importance of explainable AI for feature attribution, the robustness of different explainability approaches, and the need for interdisciplinary collaboration between the computer science, biology, and medical fields. We also explore her recent paper on drug combination therapy, the challenges of handling biomedical data, and how her team aims to make meaningful contributions to healthcare by aiding in cause identification and treatment for cancer and Alzheimer’s disease. The complete show notes for this episode can be found at twimlai.com/go/642.

14 Aug 2023 · 38min

Transformers On Large-Scale Graphs with Bayan Bruss - #641

Today we’re joined by Bayan Bruss, Vice President of Applied ML Research at Capital One. In our conversation with Bayan, we cover a pair of papers his team presented at this year’s ICML conference. We begin with “Interpretable Subspaces in Image Representations,” where Bayan gives us a deep dive into the interpretability framework, embedding dimensions, contrastive approaches, and how their model can accelerate the interpretation of image representations in deep learning. We also explore “GOAT: A Global Transformer on Large-scale Graphs,” a scalable global graph transformer. We talk through the computational challenges, homophilic and heterophilic principles, model sparsity, and how their research proposes methodologies to get around the computational barrier when scaling to large graphs. The complete show notes for this episode can be found at twimlai.com/go/641.

7 Aug 2023 · 38min

The Enterprise LLM Landscape with Atul Deo - #640

Today we’re joined by Atul Deo, General Manager of Amazon Bedrock. In our conversation with Atul, we discuss the process of training large language models in the enterprise, including the pain points of creating and training machine learning models and the power of pre-trained models. We explore different approaches companies can take to leverage large language models, dealing with hallucination, and the transformative potential of retrieval augmented generation (RAG). Finally, Atul gives us an inside look at Bedrock, a fully managed service that simplifies the deployment of generative AI-based apps at scale. The complete show notes for this episode can be found at twimlai.com/go/640.
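
As a rough illustration of the RAG pattern Atul describes (a toy sketch, not Amazon Bedrock's API; the embedding is a stand-in and the final LLM call is stubbed out):

```python
# Minimal RAG illustration: retrieve the most relevant documents, then
# ground the prompt in them before calling a generation model.
import numpy as np

documents = [
    "Bedrock is a managed service for building generative AI applications.",
    "RAG grounds a model's answer in retrieved documents to reduce hallucination.",
    "Pre-trained models can be adapted to enterprise data without full retraining.",
]

def embed(text: str) -> np.ndarray:
    # Stand-in embedding (bag of characters); a real system would call an
    # embedding model here.
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1
    return v / (np.linalg.norm(v) + 1e-9)

def retrieve(query: str, k: int = 1) -> list:
    # Rank documents by cosine similarity to the query and keep the top k.
    scores = [float(embed(query) @ embed(d)) for d in documents]
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

query = "How does RAG reduce hallucination?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this grounded prompt would then be sent to the LLM
```

The design point: retrieval grounds the generation step in enterprise data, which is how RAG helps rein in hallucination without retraining the underlying model.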

31 Jul 2023 · 37min

BloombergGPT - an LLM for Finance with David Rosenberg - #639

Today we’re joined by David Rosenberg, head of the machine learning strategy team in the Office of the CTO at Bloomberg. In our conversation with David, we discuss the creation of BloombergGPT, a custom-built LLM focused on financial applications. We explore the model’s architecture, validation process, benchmarks, and its distinction from other language models. David also discusses the evaluation process, performance comparisons, progress, and future directions of the model. Finally, we discuss the ethical considerations that come with building these types of models, and how the team has approached dealing with these issues. The complete show notes for this episode can be found at twimlai.com/go/639.

24 Jul 2023 · 36min
