Angie Hugeback - Generating Training Data for Your ML Models - TWiML Talk #6

My guest this time is Angie Hugeback, who is principal data scientist at Spare5. Spare5 helps customers generate the high-quality labeled training datasets that are so crucial to developing accurate machine learning models. In this show, Angie and I cover a ton of the real-world practicalities of generating training datasets. We talk through the challenges faced by folks who need to label training data, and how to develop a cohesive system for performing the various labeling tasks you’re likely to encounter. We discuss some of the ways that bias can creep into your training data and how to avoid it. And we explore some of the popular third-party options that companies look at for scaling training data production, and how they differ. Spare5 has graciously sponsored this episode; you can learn more about them at spare5.com. The notes for this show can be found at twimlai.com/talk/6.

Episodes (764)

Scaling Multi-Modal Generative AI with Luke Zettlemoyer - #650

Today we’re joined by Luke Zettlemoyer, professor at the University of Washington and a research manager at Meta. In our conversation with Luke, we cover multimodal generative AI, the effect of data on models, and the significance of open source and open science. We explore the grounding problem, the need for visual grounding and embodiment in text-based models, the advantages of discrete tokenization in image generation, and his paper Scaling Laws for Generative Mixed-Modal Language Models, which focuses on simultaneously training LLMs on various modalities. Additionally, we cover his papers on Self-Alignment with Instruction Backtranslation and LIMA: Less Is More for Alignment. The complete show notes for this episode can be found at twimlai.com/go/650.

9 October 2023 · 38 min

Pushing Back on AI Hype with Alex Hanna - #649

Today we’re joined by Alex Hanna, the Director of Research at the Distributed AI Research Institute (DAIR). In our conversation with Alex, we discuss the topic of AI hype and the importance of tackling the issues and impacts it has on society. Alex highlights how the hype cycle started, concerning use cases, the incentives driving people toward the rapid commercialization of AI tools, and the need for robust evaluation tools and frameworks to assess and mitigate the risks of these technologies. We also talk about DAIR and how they’ve crafted their research agenda. We discuss current research projects like DAIR Fellow Asmelash Teka Hadgu’s research supporting machine translation and speech recognition tools for the low-resource Amharic and Tigrinya languages of Ethiopia and Eritrea, in partnership with his startup Lesan.AI. We also explore the “Do Data Sets Have Politics” paper, which codes various variables and conducts a qualitative analysis of computer vision data sets to uncover the politics inherent in them and the challenges of data set creation. The complete show notes for this episode can be found at twimlai.com/go/649.

2 October 2023 · 49 min

Personalization for Text-to-Image Generative AI with Nataniel Ruiz - #648

Today we’re joined by Nataniel Ruiz, a research scientist at Google. In our conversation with Nataniel, we discuss his recent work on personalization for text-to-image AI models. Specifically, we dig into DreamBooth, an algorithm that enables “subject-driven generation,” that is, the creation of personalized generative models using a small set of user-provided images of a subject. The personalized models can then be used to generate the subject in various contexts using a text prompt. Nataniel gives us a deep dive into the fine-tuning approach used in DreamBooth, the potential reasons behind the algorithm’s effectiveness, the challenges of fine-tuning diffusion models in this way, such as language drift, and how the prior preservation loss technique avoids this setback, as well as the evaluation challenges and metrics used in DreamBooth. We also touch on his other recent papers, including SuTI, StyleDrop, HyperDreamBooth, and lastly, Platypus. The complete show notes for this episode can be found at twimlai.com/go/648.
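
To make the prior preservation idea concrete, here is a minimal sketch of the training objective as it is usually described: a standard diffusion reconstruction loss on the subject images is combined with a second reconstruction term on class images sampled from the frozen base model, which keeps the model from forgetting what the plain class prompt means (language drift). The model interface, tensor shapes, and noise schedule below are illustrative assumptions, not DreamBooth’s actual implementation.

    import torch
    import torch.nn.functional as F

    def dreambooth_step(model, subject_images, subject_prompt_emb,
                        class_images, class_prompt_emb, alphas_cumprod,
                        prior_weight=1.0):
        """One fine-tuning step: subject loss plus the class-prior
        preservation loss that counteracts language drift."""
        def diffusion_mse(images, prompt_emb):
            noise = torch.randn_like(images)
            t = torch.randint(0, alphas_cumprod.shape[0], (images.shape[0],),
                              device=images.device)
            a = alphas_cumprod[t].view(-1, 1, 1, 1)
            noisy = a.sqrt() * images + (1 - a).sqrt() * noise  # forward diffusion
            pred = model(noisy, t, prompt_emb)                  # predict the added noise
            return F.mse_loss(pred, noise)

        subject_loss = diffusion_mse(subject_images, subject_prompt_emb)  # e.g. "a [V] dog"
        # class_images are sampled in advance from the frozen, pre-trained model
        prior_loss = diffusion_mse(class_images, class_prompt_emb)        # e.g. "a dog"
        return subject_loss + prior_weight * prior_loss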

25 September 2023 · 44 min

Ensuring LLM Safety for Production Applications with Shreya Rajpal - #647

Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes. We also talk about the susceptibility of the popular retrieval-augmented generation (RAG) technique to closed-domain hallucination, and how this challenge can be addressed. We then cover the need for robust evaluation metrics and tooling for building with large language models. Lastly, we explore Guardrails, an open-source project that provides a catalog of validators that run on top of language models to efficiently enforce correctness and reliability. The complete show notes for this episode can be found at twimlai.com/go/647.
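
As a rough illustration of the validator pattern discussed here (this is not the Guardrails API; the Validator, ValidJSON, and validated_call names below are hypothetical), a validator is simply a check applied to model output, with a re-prompt on failure:

    import json
    from typing import Callable, List, Tuple

    class Validator:
        """A check applied to LLM output; returns (passed, message)."""
        def check(self, output: str) -> Tuple[bool, str]:
            raise NotImplementedError

    class ValidJSON(Validator):
        """Example validator: the output must parse as JSON."""
        def check(self, output: str) -> Tuple[bool, str]:
            try:
                json.loads(output)
                return True, "ok"
            except ValueError as e:
                return False, f"not valid JSON: {e}"

    def validated_call(llm: Callable[[str], str], prompt: str,
                       validators: List[Validator], max_retries: int = 2) -> str:
        """Call the model, run every validator, and re-prompt with the
        failure messages until the output passes or retries run out."""
        for _ in range(max_retries + 1):
            output = llm(prompt)
            failures = [msg for ok, msg in (v.check(output) for v in validators) if not ok]
            if not failures:
                return output
            prompt = f"{prompt}\n\nThe previous answer failed these checks: {failures}. Please fix it."
        raise RuntimeError("LLM output failed validation after retries")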

18 September 2023 · 40 min

What’s Next in LLM Reasoning? with Roland Memisevic - #646

Today we’re joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the significance of improving grounding in AI—including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a fitness coach trained on a visually grounded large language model, which has served as a platform for Roland’s research into neural reasoning, as well as recent research that explores topics like visual grounding for large language models and state-augmented architectures for AI agents. The complete show notes for this episode can be found at twimlai.com/go/646.

11 September 2023 · 59 min

Is ChatGPT Getting Worse? with James Zou - #645

Today we’re joined by James Zou, an assistant professor at Stanford University. In our conversation with James, we explore the differences in ChatGPT’s behavior over the last few months. We discuss the issues that can arise from inconsistencies in generative AI models, how he tested ChatGPT’s performance on various tasks, drawing comparisons between March 2023 and June 2023 for both the GPT-3.5 and GPT-4 versions, and the possible reasons behind the declining performance of these models. James also shares his thoughts on how surgical AI editing akin to CRISPR could potentially revolutionize LLMs and AI systems, and how adding monitoring tools can help in tracking behavioral changes in these models. Finally, we discuss James’ recent paper on pathology image analysis using Twitter data, in which he explores the challenges of data collection and of obtaining large medical datasets, and details the model’s architecture, training, and evaluation process. The complete show notes for this episode can be found at twimlai.com/go/645.

4 September 2023 · 42 min

Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644

Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how principles of efficiency make it possible to find consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which builds on the Fourier transform and its relation to group theory, and the use of the bispectrum to achieve invariance in deep neural networks. Finally, we cover how geometric deep learning extends the ideas behind CNNs to other domains, the similarities in the fundamental structure of artificial and biological neural networks, and how applying similar constraints leads to the convergence of their solutions. The complete show notes for this episode can be found at twimlai.com/go/644.

28 August 2023 · 45 min

Inverse Reinforcement Learning Without RL with Gokul Swamy - #643

Today we’re joined by Gokul Swamy, a Ph.D. student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sat down with Gokul to discuss his accepted papers at the event, leading off with “Inverse Reinforcement Learning without Reinforcement Learning.” In this paper, Gokul explores the challenges and benefits of inverse reinforcement learning, and the potential and advantages it holds for various applications. Next up, we explore the “Complementing a Policy with a Different Observation Space” paper, which applies causal inference techniques to accurately estimate sampling balance and make decisions based on limited observed features. Finally, we touch on “Learning Shared Safety Constraints from Multi-task Demonstrations,” which centers on learning safety constraints from demonstrations using an inverse reinforcement learning approach. The complete show notes for this episode can be found at twimlai.com/go/643.

21 August 2023 · 33 min
