
Interpretability Practitioners
Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.
26 June 2020 · 32 min

Facial Recognition Auditing
Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.
19 June 2020 · 47 min

Robust Fit to Nature
Uri Hasson joins us this week to discuss the paper Robust-fit to Nature: An Evolutionary Perspective on Biological (and Artificial) Neural Networks.
12 June 2020 · 38 min

Black Boxes Are Not Required
Deep neural networks are undeniably effective. They rely on such a high number of parameters that they are appropriately described as "black boxes". While black boxes lack desirable properties like i...
5 June 2020 · 32 min

Robustness to Unforeseen Adversarial Attacks
Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.
30 May 2020 · 21 min

Estimating the Size of Language Acquisition
Frank Mollica joins us to discuss the paper Humans store about 1.5 megabytes of information during language acquisition.
22 May 2020 · 25 min

Interpretable AI in Healthcare
Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.
15 May 2020 · 35 min

Understanding Neural Networks
What does it mean to understand a neural network? That's the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.
8 May 2020 · 34 min
