Trusting Machine Learning Models with LIME
Data Skeptic · 19 Aug 2016

Machine learning models are often criticized for being black boxes. If a human cannot determine why a model arrives at its decision, there is good cause for skepticism. Classic inspection approaches to interpretability work only for simple models, which typically address only simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it exploits local fidelity: for a given example, a simple surrogate model trained on perturbed neighbors of that example can reveal which features in the local input space drive the model's conclusion.
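The local-fidelity idea can be sketched from scratch: perturb the instance, query the black box, weight samples by proximity, and fit a weighted linear model. This is a minimal illustration only, not the actual LIME implementation; `black_box`, the kernel width, and the sample count are all hypothetical choices.

```python
import numpy as np

def black_box(X):
    # Stand-in for an opaque model's scores (hypothetical example function).
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def explain_locally(predict, x0, n_samples=5000, width=0.1, seed=0):
    """Fit a weighted linear surrogate around x0 (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # Perturb the instance to probe its neighborhood.
    Z = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = predict(Z)
    # Proximity weights: nearby perturbations matter more.
    w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([Z - x0, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None],
                               y * np.sqrt(w), rcond=None)
    return coef[:-1]  # per-feature local slopes

x0 = np.array([1.0, 0.0])
weights = explain_locally(black_box, x0)
# Near (1, 0) the model behaves roughly like 2*x0 + 3*x1,
# so the surrogate's slopes approximate that local gradient.
```

The surrogate is faithful only near `x0`; at a different instance the same nonlinear model would yield different local weights, which is exactly the point of local fidelity.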

In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.

This episode was added to Podme via an open RSS feed and is not Podme's own production. It may therefore contain advertising.

Episodes (601)

Computable AGI

On today's show, we are joined by Michael Timothy Bennett, a Ph.D. student at the Australian National University. Michael's research is centered around Artificial General Intelligence (AGI), specifica...

3 Jul 2023 · 36 min

AGI Can Be Safe

We are joined by Koen Holtman, an independent AI researcher focusing on AI safety. Koen is the Founder of Holtman Systems Research, a research company based in the Netherlands. Koen started the conver...

26 Jun 2023 · 45 min

AI Fails on Theory of Mind Tasks

An assistant professor of Psychology at Harvard University, Tomer Ullman, joins us. Tomer discussed the theory of mind and whether machines can indeed pass it. Using variations of the Sally-Anne test ...

19 Jun 2023 · 52 min

AI for Mathematics Education

The application of LLMs cuts across various industries. Today, we are joined by Steven Van Vaerenbergh, who discussed the application of AI in mathematics education. He discussed how AI tools have cha...

12 Jun 2023 · 35 min

Evaluating Jokes with LLMs

Fabricio Goes, a Lecturer in Creative Computing at the University of Leicester, joins us today. Fabricio discussed what creativity entails and how to evaluate jokes with LLMs. He specifically shared t...

6 Jun 2023 · 43 min

Why Machines Will Never Rule the World

Barry Smith and Jobst Landgrebe, authors of the book "Why Machines Will Never Rule the World," join us today. They discussed the limitations of AI systems in today's world. They also shared elaborate ...

29 May 2023 · 55 min

A Psychopathological Approach to Safety in AGI

While the possibilities with AGI emergence seem great, it also calls for safety concerns. On the show, Vahid Behzadan, an Assistant Professor of Computer Science and Data Science, joins us to discuss ...

23 May 2023 · 49 min

The NLP Community Metasurvey

Julian Michael, a postdoc at the Center for Data Science, New York University, joins us today. Julian's conversation with Kyle was centered on the NLP community metasurvey: a survey aimed at understan...

15 May 2023 · 49 min
