Trusting Machine Learning Models with LIME
Data Skeptic, 19 Aug 2016

Machine learning models are often criticized for being black boxes. If a human cannot determine why a model arrived at its decision, there is good cause for skepticism. Classic inspection approaches to interpretability work only for simple models, which are likely to cover only simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity: for a given example, a separate, simpler model trained on perturbed neighbors of that example can reveal which features matter in the local input space, and thus why the model arrived at its conclusion.
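
To make the local-fidelity idea concrete, the following minimal Python sketch fits a weighted linear surrogate around a single example. This is an illustration of the technique rather than the lime package's actual implementation (which, among other things, discretizes tabular features into an interpretable representation); the black_box model, the Gaussian perturbation scheme, and the kernel_width value are assumptions chosen for this example.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A black-box classifier we want to explain (stand-in for any opaque model).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(x, predict_proba, n_samples=1000, kernel_width=0.75):
    # 1. Perturb: sample neighbors of x with Gaussian noise scaled per feature.
    rng = np.random.default_rng(0)
    neighbors = x + rng.normal(scale=X.std(axis=0), size=(n_samples, x.size))
    # 2. Query the black box for its prediction on each neighbor.
    targets = predict_proba(neighbors)[:, 1]
    # 3. Weight neighbors by proximity to x (exponential kernel).
    distances = np.linalg.norm(neighbors - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable linear model on the weighted neighborhood.
    surrogate = Ridge(alpha=1.0).fit(neighbors, targets, sample_weight=weights)
    return surrogate.coef_  # per-feature importances, valid only near x

print(explain_locally(X[0], black_box.predict_proba))

The surrogate's coefficients approximate the black box only in the neighborhood of x; that local fidelity, rather than global accuracy, is what makes the explanation trustworthy.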

In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.

Episodes (601)

Student Spotlight: Aaron Payne, Data Analyst

Aaron Payne, an MBA student at Georgia Tech studying business analytics and a Senior Insights Analyst at Chick-fil-A, joins Kyle Polich to talk about turning analytics into decisions that matter. They...

1 May 25min

The Future is Agentic in Recommender Systems

Kyle Polich sits down with Yashar Deldjoo, research scientist and Associate Professor at the Polytechnic University of Bari, to explore how recommender systems have evolved and why trustworthiness mat...

25 Apr 49min

Book Ratings and Recommendations

Goodreads star ratings can be misleading as measures of "book quality," and research from Hannes Rosenbusch suggests that for many professionally published books, differences between readers often mat...

27 Mar 39min

Disentanglement and Interpretability in Recommender Systems

Ervin Dervishaj, a PhD student at the University of Copenhagen, discusses his research on disentangled representation learning in recommender systems, finding that while disentanglement strongly corre...

10 Mar 30min

Collective Altruism in Recommender Systems

Ekaterina (Kat) Fedorova from MIT EECS joins us to discuss strategic learning in recommender systems—what happens when users collectively coordinate to game recommendation algorithms. Kat's research r...

27 Feb 54min

Niche vs Mainstream

Anas Buhayh discusses multi-stakeholder fairness in recommender systems and the S'mores framework—a simulation allowing users to choose between mainstream and niche algorithms. His research shows spec...

18 Feb 34min

Healthy Friction in Job Recommender Systems

In this episode, host Kyle Polich speaks with Roan Schellingerhout, a fourth-year PhD student at Maastricht University, about explainable multi-stakeholder recommender systems for job recruitment. Roa...

2 Feb 26min

Fairness in PCA-Based Recommenders

In this episode, we explore the fascinating world of recommender systems and algorithmic fairness with David Liu, Assistant Research Professor at Cornell University's Center for Data Science for Enter...

26 Jan 49min
