Annotator Bias
Data Skeptic · 23 November 2019

Modern deep learning approaches to natural language processing are voracious in their demands for large training corpora. Folk wisdom once held that around 100k documents were required for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to use transfer learning to achieve novel results on much smaller corpora.

Thanks to these advancements, an NLP researcher can get value out of fewer examples: transfer learning provides a head start, so training can focus on the nuances of language specific to the task at hand. Thus, small specialized corpora are both useful and practical to create.
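The idea can be sketched in a few lines: a pretrained encoder is kept frozen and only a small task-specific head is trained on a tiny corpus. This is a minimal numpy illustration with a hypothetical random "encoder" standing in for a real pretrained model like BERT, not an actual fine-tuning recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained encoder (e.g. BERT): its
# weights stay frozen, and only a small classifier head is trained.
W_pretrained = rng.normal(size=(16, 8))

def encode(x):
    # Frozen feature extractor: W_pretrained is never updated.
    return np.tanh(x @ W_pretrained)

# Small task-specific corpus: 40 examples instead of ~100k documents.
X = rng.normal(size=(40, 16))
y = (X[:, 0] > 0).astype(float)       # toy binary labels

w = np.zeros(8)                        # trainable head weights
b = 0.0
lr = 0.5

for _ in range(200):
    h = encode(X)
    p = 1 / (1 + np.exp(-(h @ w + b)))  # logistic head
    grad = p - y                        # d(log-loss)/d(logit)
    w -= lr * h.T @ grad / len(X)       # update ONLY the head
    b -= lr * grad.mean()

acc = ((p > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

The frozen-encoder split is the point: the handful of head parameters can be estimated from a small corpus because the heavy lifting of representation learning was already done during pretraining.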

In this episode, Kyle speaks with Mor Geva, lead author on the recent paper Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets, which explores some unintended consequences of the typical procedure followed for generating corpora.

Source code for the paper is available here: https://github.com/mega002/annotator_bias

Episodes (590)

The Death of a Language

USC students from the CAIS++ student organization have created a variety of novel projects under the mission statement of "artificial intelligence for social good". In this episode, Kyle interviews Zane and Leena about the Endangered Languages Project.

1 June 2019 · 20 min

Neural Turing Machines

Kyle and Linh Da discuss the concepts behind the neural Turing machine.

25 May 2019 · 25 min

Data Infrastructure in the Cloud

Kyle chats with Rohan Kumar about hyperscale, data at the edge, and a variety of other trends in data engineering in the cloud.

18 May 2019 · 30 min

NCAA Predictions on Spark

In this episode, Kyle interviews Laura Edell at MS Build 2019. The conversation covers a number of topics, notably her NCAA Final Four prediction model.

11 May 2019 · 23 min

The Transformer

Kyle and Linhda discuss attention and the transformer - an encoder/decoder architecture that extends the basic ideas of vector embeddings like word2vec into a more contextual use case.

3 May 2019 · 15 min
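The attention mechanism the episode describes can be sketched as scaled dot-product attention. This is a minimal numpy version of that one operation, not the full encoder/decoder transformer:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Each output row is a weighted average of the value vectors,
    weighted by how strongly the query matches each key -- this is
    what lets a token's representation depend on its context,
    going beyond a static embedding like word2vec.
    """
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # query-key similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(1)
seq_len, d_model = 5, 4                  # toy sequence of 5 token embeddings
X = rng.normal(size=(seq_len, d_model))

# Self-attention: queries, keys, and values all come from the same sequence.
out, attn = scaled_dot_product_attention(X, X, X)
print(out.shape, attn.shape)             # (5, 4) (5, 5)
```

In a real transformer, Q, K, and V are separate learned projections of the input, and many such attention "heads" run in parallel inside each layer.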

Mapping Dialects with Twitter Data

When users on Twitter post with geographic tags, it creates the opportunity to pose a variety of interesting questions about language, dialects, and location. In this episode, Kyle interviews Bruno Gonçalves about his work studying language in this way.

26 April 2019 · 25 min

Sentiment Analysis

This is an interview with Ellen Loeshelle, Director of Product Management at Clarabridge.  We primarily discuss sentiment analysis.

20 April 2019 · 27 min

Attention Primer

A gentle introduction to the very high-level idea of "attention" in machine learning, as it will play a major role in some upcoming episodes over the next few weeks.

13 April 2019 · 14 min
