Evaluating Model Explainability Methods with Sara Hooker - TWiML Talk #189

In the first episode of the Deep Learning Indaba series, we're joined by Sara Hooker, AI Resident at Google Brain. I spoke with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means, including nuances like the distinction between interpreting model decisions and interpreting model function. We also talk about the relationship between Google Brain and the rest of the Google AI landscape, and the significance of the Google AI lab in Accra, Ghana.
