Trusting Machine Learning Models with LIME
Data Skeptic · 19 Aug 2016


Machine learning models are often criticized for being black boxes. If a human cannot determine why a model arrives at the decision it made, there's good cause for skepticism. Classic inspection approaches to interpretability work only for simple models, which tend to address only simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it exploits local fidelity: for a given example, a separate, interpretable model trained on perturbed neighbors of that example can reveal which features in the local input space drive the black-box model's conclusion.
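The local-surrogate idea above can be sketched in a few lines. This is a minimal illustration of the core technique, not the lime library's actual API; the `black_box` scoring function, kernel width, perturbation scale, and sample count are all illustrative assumptions.

```python
import numpy as np

def black_box(X):
    # Stand-in for any opaque model: a nonlinear scoring function (assumption).
    return 1.0 / (1.0 + np.exp(-(X[:, 0] ** 2 - X[:, 1])))

def explain_locally(instance, model, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate around `instance`; its coefficients
    approximate each feature's local influence on the model's output."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance to sample its neighborhood.
    neighbors = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    # 2. Query the black-box model on the perturbed points.
    predictions = model(neighbors)
    # 3. Weight neighbors by proximity (exponential kernel on distance),
    #    so the surrogate is faithful locally rather than globally.
    dists = np.linalg.norm(neighbors - instance, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a weighted least-squares linear model (the interpretable surrogate).
    X = np.hstack([np.ones((n_samples, 1)), neighbors])  # intercept + features
    sw = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], predictions * sw, rcond=None)
    return coef[1:]  # per-feature local importance (intercept dropped)

instance = np.array([1.0, 0.0])
importances = explain_locally(instance, black_box)
```

For this toy model, the surrogate's coefficients recover the local behavior around the instance: the score rises with the first feature and falls with the second, which is exactly the kind of per-prediction explanation LIME surfaces.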

In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.


Episodes (601)

Animal Decision Making

On today's episode, we are joined by Aimee Dunlap. Aimee is an assistant professor at the University of Missouri–St. Louis and the interim director at the Whitney R. Harris World Ecology Center. Aimee...

12 Mar 2024 · 37min

Octopus Cognition

We are joined by Tamar Gutnick, a visiting professor at the University of Naples Federico II, Napoli, Italy. She studies the octopus nervous system and their behavior, focusing on cognition and learni...

8 Mar 2024 · 38min

Optimal Foraging

Claire Hemmingway, an assistant professor in the Department of Psychology and Ecology and Evolutionary Biology at the University of Tennessee in Knoxville, is our guest today. Her research is on decis...

28 Feb 2024 · 38min

Memory in Chess

On today's show, we are joined by our co-host, Becky Hansis-O'Neil. Becky is a Ph.D. student at the University of Missouri, St Louis, where she studies bumblebees and tarantulas to understand their le...

12 Feb 2024 · 48min

OpenWorm

On this episode, we are joined by Stephen Larson, the CEO of MetaCell and an affiliate of the OpenWorm foundation. Stephen discussed what the OpenWorm project is about. They hope to use a digital C. e...

5 Feb 2024 · 34min

What the Antlion Knows

Our guest is Becky Hansis-O'Neil, a Ph.D. student at the University of Missouri, St Louis, and our co-host for the new "Animal Intelligence" season. Becky shares her background on how she got into the...

30 Jan 2024 · 41min

AI Roundtable

Kyle is joined by friends and former guests Pramit Choudhary and Frank Bell to have an open discussion of the impacts LLMs and machine learning have had in the past year on industry, and where things ...

17 Jan 2024 · 50min

Uncontrollable AI Risks

We are joined by Darren McKee, a Policy Advisor and the host of Reality Check — a critical thinking podcast. Darren gave a background about himself and how he got into the AI space. Darren shared his ...

27 Dec 2023 · 38min
