Trusting Machine Learning Models with LIME
Data Skeptic, 19 August 2016


Machine learning models are often criticized for being black boxes. If a human cannot determine why a model arrives at the decision it made, there is good cause for skepticism. Classic inspection approaches to interpretability are useful only for simple models, and simple models tend to be adequate only for simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity: for a given example, a simpler surrogate model trained on perturbed neighbors of that example can reveal which features matter in the local input space, and therefore why the black-box model arrived at its conclusion.
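To make that idea concrete, here is a minimal sketch of a local surrogate explanation in Python. It is not the LIME implementation itself: the Gaussian perturbation scheme (which assumes roughly standardized features), the exponential proximity kernel, and the ridge surrogate are illustrative choices, and black_box stands for any prediction function.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box, x, num_samples=5000, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate around the instance x.

    black_box maps an (n, d) array of inputs to a vector of prediction
    scores. The returned coefficients are local feature importances:
    larger magnitude means the feature mattered more near x.
    """
    rng = np.random.default_rng(seed)
    # Sample neighbors of x by adding Gaussian noise (one simple
    # perturbation scheme; LIME proper adapts sampling to the data type).
    neighbors = x + rng.normal(scale=0.1, size=(num_samples, x.size))
    scores = black_box(neighbors)
    # Down-weight distant neighbors so the surrogate stays locally faithful.
    distances = np.linalg.norm(neighbors - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(neighbors, scores, sample_weight=weights)
    return surrogate.coef_
```

For a probabilistic classifier, black_box could be as simple as lambda X: model.predict_proba(X)[:, 1], i.e. the score of the class being explained.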

In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
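The authors also released an open-source Python package, lime. Assuming the tabular explainer's published interface, an end-to-end example looks roughly like this (the iris data and random forest are stand-ins for any dataset and black-box classifier):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    mode="classification",
    feature_names=data.feature_names,
    class_names=data.target_names,
)

# Explain a single prediction; predict_proba supplies class probabilities.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature description, local weight) pairs
```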

