Trusting Machine Learning Models with LIME
Data Skeptic, 19 Aug 2016

Machine learning models are often criticized for being black boxes. If a human cannot determine why a model arrives at the decision it made, there's good cause for skepticism. Classic inspection approaches to model interpretability are useful only for simple models, which in turn tend to be adequate only for simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity: for a given example, a separate, simpler model trained on perturbed neighbors of that example is likely to reveal which features in the local input space drive the black-box model's conclusion.
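
To make that procedure concrete, here is a minimal sketch of a local surrogate explanation in Python. It illustrates the general idea rather than the LIME library's own implementation: the random-forest black box, the Gaussian perturbation scale, the exponential proximity kernel, and the function name explain_locally are all assumptions chosen for this example.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque "black box" classifier on synthetic data (stand-in for any model).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(x, model, n_samples=1000, noise_scale=0.5, kernel_width=0.75):
    # Perturb the example with Gaussian noise to sample its neighborhood.
    rng = np.random.default_rng(0)
    neighbors = x + rng.normal(scale=noise_scale, size=(n_samples, x.shape[0]))
    # Ask the black box for its predicted probability on each perturbed point.
    preds = model.predict_proba(neighbors)[:, 1]
    # Weight each neighbor by its proximity to x (exponential kernel on distance).
    distances = np.linalg.norm(neighbors - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # Fit an interpretable linear surrogate that mimics the black box locally.
    surrogate = Ridge(alpha=1.0).fit(neighbors, preds, sample_weight=weights)
    return surrogate.coef_  # per-feature local importance

print("Local feature weights:", np.round(explain_locally(X[0], black_box), 3))

The sign and magnitude of each coefficient indicate how that feature pushes the black box's prediction in the neighborhood of x; the actual LIME library adds refinements such as interpretable feature representations and feature selection.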

In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
