Black Boxes Are Not Required
Data Skeptic · 5 June 2020

Deep neural networks are undeniably effective. They rely on such a large number of parameters that they are appropriately described as "black boxes".

While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful.

But does achieving "usefulness" require a black box? Can we be sure an equally valid but simpler solution does not exist?

Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)…

Why Are We Using Black Box Models in AI When We Don't Need To? A Lesson From An Explainable AI Competition
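To make the question concrete, here is a minimal sketch (our illustration, not code from the paper, run on synthetic tabular data rather than the competition's dataset) that fits a conventional black-box model and a simple interpretable one on the same task. On many tabular problems the accuracy gap is small or absent, which is exactly Rudin's point.

```python
# Illustrative comparison: black-box vs. interpretable model on the same
# synthetic tabular task. Not from the paper; dataset and models are our choice.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A typical "black box": an ensemble of hundreds of trees, hard to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# A simple, interpretable alternative: one weight per feature.
interpretable = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("black box:     ", accuracy_score(y_test, black_box.predict(X_test)))
print("interpretable: ", accuracy_score(y_test, interpretable.predict(X_test)))
```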




Episodes (590)

Indigenous American Language Research

Manuel Mager joins us to discuss natural language processing for low-resource and under-resourced languages. We discuss current work in this area and the Naki Project, which aggregates research on NLP for native and indigenous languages of the American continent.

13 November 2019 · 22 min

Talking to GPT-2

GPT-2 is yet another in a succession of models, like ELMo and BERT, that adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus. As we have been covering recently, these approaches show tremendous promise, but how close are they to an AGI? Our guest today, Vazgen Davidyants, wondered exactly that and had conversations with a chatbot running GPT-2. We discuss his experiences as well as some novel thoughts on artificial intelligence.
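For listeners who want to try the same experiment, here is a hypothetical sketch of a GPT-2 "conversation" using the Hugging Face transformers library; the episode does not specify the tooling Davidyants used, and the dialogue prompt format here is an assumption.

```python
# Hypothetical sketch of chatting with GPT-2 via Hugging Face transformers;
# the episode does not specify what tooling was actually used.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# GPT-2 is a plain language model, so a "chat" is just asking it to
# continue a prompt written as a dialogue transcript.
prompt = "Human: Do you think machines can be intelligent?\nGPT-2:"
out = generator(prompt, max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```

Because GPT-2 only continues text, the dialogue framing lives entirely in the prompt, which is part of why the episode asks how close such models really are to an AGI.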

31 October 2019 · 29 min

Reproducing Deep Learning Models

Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model.  His results exposed some issues with the model.  Kyle and Rajiv discuss the original paper and Rajiv's analysis.

23 October 2019 · 22 min

What BERT is Not

Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations.
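In the spirit of the probes Ettinger describes, a masked-word diagnostic is easy to run yourself. The sketch below is our illustration, using the Hugging Face fill-mask pipeline rather than her exact setup; negation probes like this are one place where BERT's limitations tend to surface.

```python
# Illustrative masked-word probe of BERT; a common diagnostic style,
# not necessarily Ettinger's exact experimental setup.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Models often propose similar completions with and without "not",
# suggesting the negation is not fully used.
for sentence in ["A robin is a [MASK].", "A robin is not a [MASK]."]:
    top = fill(sentence)[0]
    print(sentence, "->", top["token_str"], round(top["score"], 3))
```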

14 October 2019 · 27 min

SpanBERT

Omer Levy joins us to discuss "SpanBERT: Improving Pre-training by Representing and Predicting Spans". https://arxiv.org/abs/1907.10529

8 October 2019 · 24 min

BERT is Shallow

Tim Niven joins us this week to discuss his work probing the limits of what BERT can do on certain natural language tasks, including its robustness to adversarial attacks and its capacity for compositional and systematic learning.

23 September 2019 · 20 min

BERT is Magic

Kyle pontificates on how impressed he is with BERT.

16 September 2019 · 18 min

Applied Data Science in Industry

Kyle sits down with Jen Stirrup to ask about her experiences helping companies deploy data science solutions in a variety of settings.

6 September 2019 · 21 min
