Annotator Bias
Data Skeptic · 23 Nov 2019


Modern deep learning approaches to natural language processing are voracious in their demand for large training corpora. Folk wisdom once held that around 100k documents were required for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to use transfer learning to achieve novel results on much smaller corpora.

Thanks to these advancements, an NLP researcher can get value out of fewer examples: transfer learning provides a head start, so training can focus on the nuances of language specific to the task at hand. Small specialized corpora are thus both useful and practical to create.

In this episode, Kyle speaks with Mor Geva, lead author on the recent paper Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets, which explores some unintended consequences of the typical procedure followed for generating corpora.

Source code for the paper available here: https://github.com/mega002/annotator_bias

Episodes (590)

[MINI] Bayesian Updating


In this minisode, we discuss Bayesian updating: the process by which one calculates how likely a hypothesis is to be true, given one's prior belief and all new evidence.

27 Jun 2014 · 11min
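The updating process discussed in the episode can be sketched with Bayes' rule; the test numbers below are invented for illustration, not from the episode.

```python
# A minimal sketch of Bayesian updating: revise a prior belief
# in light of new evidence using Bayes' rule:
#   P(H|E) = P(E|H) * P(H) / P(E)

def update(prior, likelihood, likelihood_not):
    """Return the posterior P(H|E) from the prior P(H),
    P(E|H), and P(E|not H)."""
    # P(E) by the law of total probability
    evidence = likelihood * prior + likelihood_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical example: a condition with a 1% base rate, a test with
# 90% sensitivity and a 5% false-positive rate.
posterior = update(prior=0.01, likelihood=0.90, likelihood_not=0.05)
print(round(posterior, 3))  # 0.154 — a positive test lifts belief from 1% to ~15%
```

The posterior then becomes the prior for the next piece of evidence, which is what makes the updating iterative.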

Personalized Medicine with Niki Athanasiadou


In the second full length episode of the podcast, we discuss the current state of personalized medicine and the advancements in genetics that have made it possible.

20 Jun 2014 · 57min

[MINI] p-values


In this mini, we discuss p-values and their use in hypothesis testing, in the context of a hypothetical experiment on plant flowering, and end with a reference to the Particle Fever documentary and the role statistical significance played in it.

13 Jun 2014 · 16min
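A p-value of the kind discussed here can be computed exactly for a simple binomial experiment; the flowering counts below are invented for illustration and are not from the episode.

```python
# Hedged illustration: suppose 60 of 100 treated plants flowered,
# against a null hypothesis that each plant flowers with probability 0.5.
# The p-value is the probability of a result at least this extreme
# if the null hypothesis were true.
from math import comb

def binom_p_value(successes, trials, p_null=0.5):
    """One-sided p-value: P(X >= successes) under Binomial(trials, p_null)."""
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
        for k in range(successes, trials + 1)
    )

p = binom_p_value(60, 100)
print(round(p, 4))  # roughly 0.028, below the conventional 0.05 threshold
```

Note the p-value is the probability of the data given the null hypothesis, not the probability that the null hypothesis is true, a distinction the episode's skeptical framing hinges on.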

Advertising Attribution with Nathan Janos


A conversation with Convertro's Nathan Janos about methodologies that help advertisers understand the effect each of their marketing efforts (print, SEM, display, skywriting, etc.) has on their overall return.

6 Jun 2014 · 1h 16min

[MINI] Type I / Type II Errors


In this first mini-episode of the Data Skeptic Podcast, we define and discuss Type I and Type II errors (a.k.a. false positives and false negatives).

30 May 2014 · 11min
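The two error types defined in this episode can be made concrete by counting them against hypothetical labels (the lists below are invented for illustration):

```python
# Type I error: predicting positive when the truth is negative (false positive).
# Type II error: predicting negative when the truth is positive (false negative).
actual    = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical ground truth
predicted = [1, 0, 0, 1, 1, 0, 1, 0]  # hypothetical model output

false_positives = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))  # Type I
false_negatives = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))  # Type II

print(false_positives, false_negatives)  # 1 1
```

Which error is worse depends on context: a spam filter should fear false positives (real mail lost), while a medical screen should fear false negatives (illness missed).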

Introduction


The Data Skeptic Podcast features conversations on topics related to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and the efficacy of approaches. This first episode is a short discussion of what the podcast is all about.

23 May 2014 · 3min
