Trusting Machine Learning Models with LIME
Data Skeptic, 19 Aug 2016


Machine learning models are often criticized for being black boxes. If a human cannot determine why the model arrives at the decision it made, there's good cause for skepticism. Classic inspection approaches to model interpretability are only useful for simple models, which are likely to only cover simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity: for a given example, a simple surrogate model trained on perturbed neighbors of that example can reveal which features matter most in the local input space, and thus why the model arrives at its conclusion.

In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
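The local-fidelity idea described above can be sketched in a few lines: perturb the instance to sample its neighborhood, query the black box on those neighbors, weight them by proximity, and fit an interpretable linear surrogate. This is a from-scratch illustration of the core idea, not the LIME library's actual API; the function name, Gaussian perturbation, and exponential kernel width are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

def explain_instance(predict_proba, x, n_samples=5000, width=0.75, rng=None):
    """Minimal LIME-style local surrogate (illustrative, not the lime package)."""
    rng = np.random.default_rng(rng)
    # Sample the local neighborhood by perturbing x with Gaussian noise.
    neighbors = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # Weight neighbors by an exponential kernel on distance to x (local fidelity).
    dists = np.linalg.norm(neighbors - x, axis=1)
    weights = np.exp(-(dists ** 2) / (width ** 2))
    # Fit an interpretable surrogate (weighted ridge regression) to the
    # black box's predicted probability for the positive class.
    y = predict_proba(neighbors)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(neighbors, y, sample_weight=weights)
    return surrogate.coef_  # per-feature local importance

# Usage: explain one prediction of an otherwise opaque random forest.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)
coefs = explain_instance(black_box.predict_proba, X[0], rng=0)
```

The surrogate's coefficients only claim fidelity near `X[0]`; a different instance generally yields different coefficients, which is exactly the "local" in LIME.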


Episodes (601)

LLMs in Music Composition

In this episode, we are joined by Carlos Hernández Oliván, a Ph.D. student at the University of Zaragoza. Carlos's interest focuses on building new models for symbolic music generation. Carlos shared ...

28 Aug 2023, 33 min

Cuttlefish Model Tuning

Hongyi Wang, a Senior Researcher in the Machine Learning Department at Carnegie Mellon University, joins us. His research sits at the intersection of systems and machine learning. He discussed his resea...

21 Aug 2023, 27 min

Which Professions Are Threatened by LLMs

On today's episode, we have Daniel Rock, an Assistant Professor of Operations, Information and Decisions at the Wharton School of the University of Pennsylvania. Daniel's research focuses on the econom...

15 Aug 2023, 38 min

Why Prompting is Hard

We are excited to be joined by J.D. Zamfirescu-Pereira, a Ph.D. student at UC Berkeley. He focuses on the intersection of human-computer interaction (HCI) and artificial intelligence (AI). He joins us...

8 Aug 2023, 48 min

Automated Peer Review

In this episode, we are joined by Ryan Liu, a Computer Science graduate of Carnegie Mellon University. Ryan will begin his Ph.D. program at Princeton University this fall. His Ph.D. will focus on the ...

31 Jul 2023, 36 min

Prompt Refusal

The creators of large language models impose restrictions on some of the types of requests one might make of them. LLMs commonly refuse to give advice on committing crimes, producing adult content, ...

24 Jul 2023, 44 min

A Long Way Till AGI

Our guest today is Maciej Świechowski. Maciej is affiliated with QED Software and QED Games. He has a Ph.D. in Systems Research from the Polish Academy of Sciences. Maciej joins us to discuss findings...

18 Jul 2023, 37 min

Brain Inspired AI

Today on the show, we are joined by Lin Zhao and Lu Zhang. Lin is a Senior Research Scientist at United Imaging Intelligence, while Lu is a Ph.D. candidate at the Department of Computer Science and En...

11 Jul 2023, 36 min
