Black Boxes Are Not Required
Data Skeptic · 5 June 2020

Deep neural networks are undeniably effective. They rely on so many parameters that they are appropriately described as "black boxes".

While black boxes lack desirable properties like interpretability and explainability, in some cases, their accuracy makes them incredibly useful.

But does achieving "usefulness" require a black box? Can we be sure an equally valid but simpler solution does not exist?

Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)…

Why Are We Using Black Box Models in AI When We Don't Need To? A Lesson From An Explainable AI Competition
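The question can be made concrete with a small experiment. Below is a minimal sketch, not taken from the paper: it compares an interpretable logistic regression against a small neural network on scikit-learn's built-in breast-cancer dataset, where the simpler model is often just as accurate.

```python
# Illustrative comparison only; the dataset and models are assumptions,
# not from Rudin and Radin's paper.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: every feature gets one inspectable coefficient.
glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
glass_box.fit(X_train, y_train)

# Black box: a small neural network with no comparably direct explanation.
black_box = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0),
)
black_box.fit(X_train, y_train)

print("logistic regression accuracy:", glass_box.score(X_test, y_test))
print("neural network accuracy:     ", black_box.score(X_test, y_test))
```

If the two scores come out close, the episode's question answers itself for this task: the black box was never required.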




Episodes (590)

Building the howto100m Video Corpus

Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine-transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if dirty, corpus of videos that are "self-annotating", as hosts explain the actions they are taking on screen. This episode is a discussion of the HowTo100M dataset, a project which has assembled a video corpus of 136M video clips with captions covering 23k activities.

Related links:
The paper will be presented at ICCV 2019
@antoine77340 (Antoine on GitHub)
Antoine's homepage

19 Aug 2019 · 22min

BERT

Kyle provides a non-technical overview of why Bidirectional Encoder Representations from Transformers (BERT) is a powerful tool for natural language processing projects.
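As a taste of what using BERT looks like in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (neither is specified in the episode):

```python
# Minimal sketch: contextual embeddings from a pretrained BERT model.
# The library and checkpoint are assumptions, not from the episode.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Data skepticism is healthy.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional vector per token, each informed by both its left
# and right context (the "bidirectional" in BERT's name).
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```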

29 July 2019 · 13min

ONNX

Kyle interviews Prasanth Pulavarthi about ONNX, the Open Neural Network Exchange format for deep neural networks.
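For a sense of what the format enables, here is a minimal sketch of exporting a model to ONNX, assuming PyTorch and the onnx package (the toy model and filename are illustrative):

```python
# Minimal sketch: export a toy PyTorch model to the ONNX format.
# The model architecture and filename are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# Tracing with an example input defines the exported graph's shapes.
dummy_input = torch.randn(1, 10)
torch.onnx.export(model, dummy_input, "toy_model.onnx",
                  input_names=["features"], output_names=["logits"])

# The saved graph is framework-neutral; it can be validated here and
# then loaded by any ONNX-compatible runtime.
import onnx
onnx.checker.check_model(onnx.load("toy_model.onnx"))
```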

22 July 2019 · 20min

Catastrophic Forgetting

Kyle and Linhda discuss some high-level theory of mind and give an overview of the machine learning concept of catastrophic forgetting.
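The effect is easy to reproduce. Below is a minimal sketch, assuming scikit-learn and its digits dataset (an illustrative setup, not from the episode): a network first trained on digits 0 through 4, then trained only on digits 5 through 9, loses most of its accuracy on the first task.

```python
# Illustrative demonstration of catastrophic forgetting; the dataset
# and task split are assumptions, not from the episode.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
task_a, task_b = y < 5, y >= 5  # task A: digits 0-4, task B: digits 5-9

clf = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
classes = np.arange(10)  # declare all classes up front for partial_fit

# Train on task A only.
for _ in range(30):
    clf.partial_fit(X[task_a], y[task_a], classes=classes)
print("task A accuracy after training on A:", clf.score(X[task_a], y[task_a]))

# Continue training on task B only; accuracy on task A collapses,
# because the weights that solved task A get overwritten.
for _ in range(30):
    clf.partial_fit(X[task_b], y[task_b], classes=classes)
print("task A accuracy after training on B:", clf.score(X[task_a], y[task_a]))
```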

15 July 2019 · 21min

Transfer Learning

Sebastian Ruder is a research scientist at DeepMind.  In this episode, he joins us to discuss the state of the art in transfer learning and his contributions to it.

8 July 2019 · 29min

Facebook Bargaining Bots Invented a Language

In 2017, Facebook published a paper called Deal or No Deal? End-to-End Learning for Negotiation Dialogues. In this research, the reinforcement learning agents developed a mechanism of communication (which could be called a language) that enabled them to optimize their scores in the negotiation game. Many media sources reported this as if it were a first step towards Skynet taking over. In this episode, Kyle discusses bargaining agents and the actual results of this research.

21 June 2019 · 23min

Under Resourced Languages

Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English. Successful NLP projects benefit from the availability of resources like large corpora, well-annotated corpora, software libraries, and pre-trained models. For languages that researchers have not paid as much attention to, these tools are not always available.

15 June 2019 · 16min

Named Entity Recognition

Kyle and Linh Da discuss the class of approaches called "Named Entity Recognition" or NER.  NER algorithms take any string as input and return a list of "entities" - specific facts and agents in the text along with a classification of the type (e.g. person, date, place).
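As a concrete example of that input/output contract, here is a minimal sketch using spaCy's pretrained English pipeline (one NER implementation among many; the episode does not single it out):

```python
# Minimal NER sketch; spaCy and its small English model are assumptions.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Kyle interviewed Cynthia Rudin at Duke University in June 2020.")

# Each entity comes back with its text span and a type label.
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. Cynthia Rudin PERSON, June 2020 DATE
```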

8 June 2019 · 17min
