High-Dimensional Robust Statistics with Ilias Diakonikolas - #351

Today we’re joined by Ilias Diakonikolas, a faculty member in the Computer Science department at the University of Wisconsin-Madison and an author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, which received a NeurIPS 2019 Outstanding Paper award. The paper is regarded as the first progress on distribution-independent learning with noise since the 1980s. In our conversation, we explore robustness in ML, the problems corrupt data causes in high-dimensional settings, and, of course, the paper itself.

Episodes (775)

ML Lifecycle Management at Algorithmia with Diego Oppenheimer - #470

In this episode, we’re joined by Diego Oppenheimer, Founder and CEO of Algorithmia. In our conversation, we discuss Algorithmia’s involvement with TWIMLcon and explore the results of their recently conducted survey on the state of the AI market. The complete show notes for this episode can be found at twimlai.com/go/470.

1 Apr 2021 · 26 min

End to End ML at Cloudera with Santiago Giraldo - #469 [TWIMLcon Sponsor Series]

In this episode, we’re joined by Santiago Giraldo, Director of Product Marketing for Data Engineering & Machine Learning at Cloudera. In our conversation, we discuss Cloudera’s talks at TWIMLcon, as well as the various research efforts of their Fast Forward Labs arm. The complete show notes for this episode can be found at twimlai.com/sponsorseries.

29 Mar 2021 · 22 min

ML Platforms for Global Scale at Prosus with Paul van der Boor - #468 [TWIMLcon Sponsor Series]

In this episode, we’re joined by Paul van der Boor, Senior Director of Data Science at Prosus, to discuss his TWIMLcon experience and how Prosus uses ML platforms to manage machine learning at a global scale. The complete show notes for this episode can be found at twimlai.com/sponsorseries.

29 Mar 2021 · 22 min

Can Language Models Be Too Big? 🦜 with Emily Bender and Margaret Mitchell - #467

Today we’re joined by Emily M. Bender, Professor at the University of Washington, and AI researcher Margaret Mitchell. Emily and Meg, along with Timnit Gebru and Angelina McMillan-Major, are co-authors of the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. As most of you undoubtedly know by now, there has been much controversy surrounding, and fallout from, this paper. In this conversation, our main priority was to focus on the message of the paper itself. We spend some time discussing the historical context for the paper, then turn to its goals, discussing the many reasons why ever-growing datasets and models are not necessarily the direction we should be going. We explore the costs of these training datasets, both literal and environmental, the bias implications of these models, and of course the perpetual debate about responsibility when building and deploying ML systems. Finally, we discuss the thin line between AI hype and useful AI systems, the importance of doing pre-mortems to flesh out any issues you could come across prior to building models, and much, much more. The complete show notes for this episode can be found at twimlai.com/go/467.

24 Mar 2021 · 54 min

Applying RL to Real-World Robotics with Abhishek Gupta - #466

Today we’re joined by Abhishek Gupta, a PhD student at UC Berkeley. Abhishek, a member of the BAIR Lab, joined us to talk about his recent robotics and reinforcement learning research, which focuses on applying RL to real-world robotics applications. We explore the concept of reward supervision, how to get robots to learn reward functions from videos, and the rationale behind the supervised experts in these experiments. We also discuss the use of simulation for experiments, data collection, and the path to scalable robotic learning. Finally, we discuss gradient surgery vs. gradient sledgehammering, as well as his ecological RL paper, which focuses on the “phenomena that exist in the real world” and how humans and robotic systems interface in those situations. The complete show notes for this episode can be found at https://twimlai.com/go/466.

22 Mar 2021 · 36 min

Accelerating Innovation with AI at Scale with David Carmona - #465

Today we’re joined by David Carmona, General Manager of Artificial Intelligence & Innovation at Microsoft. In our conversation with David, we focus on his work on AI at Scale, an initiative centered on the change in the way people are developing AI, driven in large part by the emergence of massive models. We explore David’s thoughts on the progression toward larger models, the focus on parameter counts and how it ties to the architecture of these models, and how we should assess the way attention works in these models. We also discuss the different families of models (generation and representation), the transition from CV to NLP tasks, and the interesting idea of models “becoming a platform” via transfer learning. The complete show notes for this episode can be found at twimlai.com/go/465.

18 Mar 2021 · 48 min

Complexity and Intelligence with Melanie Mitchell - #464

Today we’re joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans. While Melanie has had a long career spanning myriad research interests, we focus on a few: complex systems, the understanding of intelligence and complexity, and her recent work on getting AI systems to make analogies. We explore examples of social learning, how it applies to AI contextually, and what it means to define intelligence. We discuss potential frameworks that would help machines understand analogies, established benchmarks for analogy, and whether there is a social learning solution to help machines figure out analogy. Finally, we talk through the overall state of AI systems, the progress we’ve made amid the limited concept of social learning, whether we can achieve intelligence with current approaches to AI, and much more! The complete show notes for this episode can be found at twimlai.com/go/464.

15 Mar 2021 · 32 min

Robust Visual Reasoning with Adriana Kovashka - #463

Today we’re joined by Adriana Kovashka, an Assistant Professor at the University of Pittsburgh. In our conversation with Adriana, we explore her visual commonsense research and how it intersects with her background in media studies. We discuss the idea of shortcuts, or faults in visual question answering datasets that appear in many SOTA results, as well as the concept of masking, a technique developed to assist in context prediction. Adriana then describes how these techniques fit into her broader goal of trying to understand the rhetoric of visual advertisements. Finally, Adriana shares a bit about her work on robust visual reasoning, the parallels between this research and other work happening around explainability, and the vision for her work going forward. The complete show notes for this episode can be found at twimlai.com/go/463.

11 Mar 2021 · 41 min
