Annotator Bias
Data Skeptic · 23 Nov 2019

Modern deep learning approaches to natural language processing are voracious in their demands for large training corpora. Folk wisdom once held that around 100k documents were required for effective training. The availability of broadly pretrained, general-purpose models like BERT has made it possible to use transfer learning to achieve novel results on much smaller corpora.

Thanks to these advances, an NLP researcher can get value out of far fewer examples: transfer learning provides a head start, so training can focus on the nuances of language specific to the task at hand. Thus, small specialized corpora are both useful and practical to create.
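To make this concrete, here is a minimal sketch of that kind of transfer-learning workflow: fine-tuning a pretrained BERT model on a tiny labeled corpus with the Hugging Face transformers library. The model name, toy data, and hyperparameters are illustrative assumptions, not details from the episode or the paper.

```python
# Minimal sketch (illustrative, not from the episode): fine-tune a
# pretrained BERT encoder on a small task-specific corpus.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

# A tiny toy corpus stands in for a real specialized dataset.
data = Dataset.from_dict({
    "text": ["the contract was breached", "payment arrived on time"],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Pretrained encoder plus a freshly initialized classification head.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

data = data.map(tokenize, batched=True)

# Fine-tune: the pretrained weights supply the head start, so a few
# epochs over a small corpus suffice to learn the task's nuances.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data,
)
trainer.train()
```

Because the encoder arrives already trained on general-purpose text, only the small task head and a light fine-tuning pass need to be learned from the new data, which is what makes small specialized corpora practical.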

In this episode, Kyle speaks with Mor Geva, lead author of the recent paper "Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets," which explores some unintended consequences of the typical procedure for generating such corpora.

Source code for the paper is available here: https://github.com/mega002/annotator_bias
