High-Dimensional Robust Statistics with Ilias Diakonikolas - #351

Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, which received a NeurIPS 2019 Outstanding Paper award. The paper is regarded as the first progress on distribution-independent learning with noise since the 1980s. In our conversation, we explore robustness in ML, problems with corrupt data in high-dimensional settings, and of course, the paper.

Episodes (775)

Evolving AI Systems Gracefully with Stefano Soatto - #502

Today we’re joined by Stefano Soatto, VP of AI applications science at AWS and a professor of computer science at UCLA. Our conversation with Stefano centers on his recent research, Graceful AI, which focuses on how to make trained systems evolve gracefully. We discuss the broader motivation for this research and the potential dangers or negative effects of constantly retraining ML models in production. We also talk about research into error rate clustering, the importance of model architecture when dealing with problems of model compression, how they’ve solved problems of regression and reprocessing by utilizing existing models, and much more. The complete show notes for this episode can be found at twimlai.com/go/502.

19 Jul 2021 · 49min

ML Innovation in Healthcare with Suchi Saria - #501

Today we’re joined by Suchi Saria, the founder and CEO of Bayesian Health, the John C. Malone Associate Professor of computer science, statistics, and health policy, and the director of the machine learning and healthcare lab at Johns Hopkins University. Suchi shares a bit about her journey to working at the intersection of machine learning and healthcare, and how her research has spanned both medical policy and discovery. We discuss why it has taken so long for machine learning to be accepted and adopted by the healthcare infrastructure, where exactly we stand in the adoption process, and where there have been “pockets” of tangible success. Finally, we explore the state of healthcare data, and of course, we talk about Suchi’s recently announced startup Bayesian Health and its goals in the healthcare space, along with an accompanying study that looks at real-time ML inference in an EMR setting. The complete show notes for this episode can be found at twimlai.com/go/501.

15 Jul 2021 · 45min

Cross-Device AI Acceleration, Compilation & Execution with Jeff Gehlhaar - #500

Today we’re joined by a friend of the show, Jeff Gehlhaar, VP of technology and the head of AI software platforms at Qualcomm. In our conversation with Jeff, we cover a ton of ground, starting with a bit of exploration around ML compilers, what they are, and their role in solving issues of parallelism. We also dig into the latest additions to the Snapdragon platform, AI Engine Direct, and how it works as a bridge to bring more capabilities across their platform. We then discuss how benchmarking works in the context of the platform, how the work of other researchers we’ve spoken to on compression and quantization finds its way from research to product, and much more! After you check out this interview, you can look below for some of the other conversations with the researchers mentioned. The complete show notes for this episode can be found at twimlai.com/go/500.

12 Jul 2021 · 41min

The Future of Human-Machine Interaction with Dan Bohus and Siddhartha Sen - #499

Today we continue our AI in Innovation series joined by Dan Bohus, senior principal researcher at Microsoft Research, and Siddhartha Sen, a principal researcher at Microsoft Research. In this conversation, we use a pair of research projects, Maia Chess and Situated Interaction, to springboard into a conversation about the evolution of human-AI interaction. We discuss both of these projects individually, as well as the commonalities they share, how themes like understanding the human experience appear in their work, the types of models being used, the various types of data, and the complexity of each of their setups. We explore some of the challenges associated with getting computers to better understand human behavior and interact in ways that are more fluid. Finally, we touch on what excites both Dan and Sid about their respective projects, and what they’re excited about for the future. The complete show notes for this episode can be found at twimlai.com/go/499.

8 Jul 2021 · 48min

Vector Quantization for NN Compression with Julieta Martinez - #498

Today we’re joined by Julieta Martinez, a senior research scientist at the recently announced startup Waabi. Julieta was a keynote speaker at the recent LatinX in AI workshop at CVPR, and our conversation focuses on her talk “What do Large-Scale Visual Search and Neural Network Compression have in Common,” which shows that multiple ideas from large-scale visual search can be used to achieve state-of-the-art neural network compression. We explore the commonality between searching large databases and dealing with high-dimensional, many-parameter neural networks, the advantages of using product quantization, and how that plays out when using it to compress a neural network. We also dig into another paper Julieta presented at the conference, Deep Multi-Task Learning for Joint Localization, Perception, and Prediction, which details an architecture that is able to reuse computation between the three tasks and thus correct localization errors efficiently. The complete show notes for this episode can be found at twimlai.com/go/498.

5 Jul 2021 · 41min

Deep Unsupervised Learning for Climate Informatics with Claire Monteleoni - #497

Today we continue our CVPR 2021 coverage joined by Claire Monteleoni, an associate professor at the University of Colorado Boulder. We cover quite a bit of ground in our conversation with Claire, including her journey down the path from environmental activist to one of the leading climate informatics researchers in the world. We explore her current research interests and the available opportunities in applying machine learning to climate informatics, including the interesting position of doing ML in a data-rich environment. Finally, we dig into the evolution of climate science-focused events and conferences, as well as the keynote Claire gave at the EarthVision workshop at CVPR, “Deep Unsupervised Learning for Climate Informatics,” which focused on semi- and unsupervised deep learning approaches to studying rare and extreme climate events. The complete show notes for this episode can be found at twimlai.com/go/497.

1 Jul 2021 · 42min

Skip-Convolutions for Efficient Video Processing with Amir Habibian - #496

Today we kick off our CVPR coverage joined by Amir Habibian, a senior staff engineering manager at Qualcomm Technologies. In our conversation with Amir, whose research primarily focuses on video perception, we discuss a few papers they presented at the event. We explore the paper Skip-Convolutions for Efficient Video Processing, which looks at training discrete variables end-to-end in visual neural networks. We also discuss his FrameExit paper, which proposes a conditional early-exiting framework for efficient video recognition. The complete show notes for this episode can be found at twimlai.com/go/496.

28 Jun 2021 · 47min

Advancing NLP with Project Debater w/ Noam Slonim - #495

Today we’re joined by Noam Slonim, the principal investigator of Project Debater at IBM Research. In our conversation with Noam, we explore the history of Project Debater, the first AI system that can “debate” humans on complex topics. We also dig into the evolution of the project, the culmination of 7 years of work and over 50 research papers that eventually became a Nature cover paper, “An Autonomous Debating System,” which details the system in its entirety. Finally, Noam details many of the underlying capabilities of Debater, including the relationship between system preparation and training, evidence detection, detecting the quality of arguments, narrative generation, the use of conventional NLP methods like entity linking, and much more. The complete show notes for this episode can be found at twimlai.com/go/495.

24 Jun 2021 · 51min
