2015 Holiday Special
Data Skeptic · 25 Dec 2015

Today's episode is a reading of Isaac Asimov's "The Machine That Won the War." I can't think of a story more appropriate for Data Skeptic.

Episodes (589)

Human Computer Interaction and Online Privacy

Moses Namara from the HATLab joins us to discuss his research into the interaction between privacy and human-computer interaction.

27 Jul 2020 · 32 min

Authorship Attribution of Lennon McCartney Songs

Mark Glickman joins us to discuss the paper Data in the Life: Authorship Attribution in Lennon-McCartney Songs.

20 Jul 2020 · 33 min

GANs Can Be Interpretable

Erik Härkönen joins us to discuss the paper GANSpace: Discovering Interpretable GAN Controls. During the interview, Kyle makes reference to this amazing interpretable GAN controls video and its accompanying codebase found here. Erik mentions the GANSpace Colab notebook, which is a quick way to try these ideas out for yourself.

11 Jul 2020 · 26 min

Sentiment Preserving Fake Reviews

David Ifeoluwa Adelani joins us to discuss Generating Sentiment-Preserving Fake Online Reviews Using Neural Language Models and Their Human- and Machine-based Detection.

6 Jul 2020 · 28 min

Interpretability Practitioners

Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.

26 Jun 2020 · 32 min

Facial Recognition Auditing

Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.

19 Jun 2020 · 47 min

Robust Fit to Nature

Uri Hasson joins us this week to discuss the paper Robust-fit to Nature: An Evolutionary Perspective on Biological (and Artificial) Neural Networks.

12 Jun 2020 · 38 min

Black Boxes Are Not Required

Deep neural networks are undeniably effective. They rely on so many parameters that they are aptly described as "black boxes". While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful. But does achieving "usefulness" require a black box? Can we be sure an equally accurate but simpler solution does not exist? Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin, titled (spoiler warning)… Why Are We Using Black Box Models in AI When We Don't Need To? A Lesson From an Explainable AI Competition.

5 Jun 2020 · 32 min
