Annotator Bias

Data Skeptic · 23 Nov 2019

The modern deep learning approaches to natural language processing are voracious in their demands for large corpora to train on. Folk wisdom once held that around 100k documents were required for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to use transfer learning to achieve novel results on much smaller corpora.

Thanks to these advancements, an NLP researcher can get value out of far fewer examples: transfer learning provides a head start, so training can focus on the nuances of the language specifically relevant to the task at hand. Small specialized corpora are thus both useful and practical to create.
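The workflow described above — reuse a frozen, broadly pretrained encoder and train only a small task-specific head on a handful of labeled examples — can be sketched as follows. This is an illustrative stand-in, not a real BERT pipeline: the "encoder" here is a fixed hashed bag-of-words and the head is a tiny logistic regression, both invented for this sketch.

```python
import hashlib
import math

DIM = 1024  # feature dimension of the stand-in encoder

def frozen_encoder(text):
    """Stand-in for a frozen pretrained encoder (e.g. BERT): maps text to
    a fixed vector and is never updated during task training."""
    vec = [0.0] * DIM
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    return vec

def train_head(texts, labels, epochs=100, lr=0.5):
    """Train only a small logistic-regression head on the frozen features."""
    w, b = [0.0] * DIM, 0.0
    feats = [frozen_encoder(t) for t in texts]
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            grad = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def predict(w, b, text):
    x = frozen_encoder(text)
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
```

Because the encoder's parameters stay fixed, only the small head is fit to the task, which is why a few labeled examples can suffice.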

In this episode, Kyle speaks with Mor Geva, lead author on the recent paper Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets, which explores some unintended consequences of the typical procedure followed for generating corpora.

Source code for the paper is available here: https://github.com/mega002/annotator_bias

Episodes (590)

Why Prompting is Hard

We are excited to be joined by J.D. Zamfirescu-Pereira, a Ph.D. student at UC Berkeley who focuses on the intersection of human-computer interaction (HCI) and artificial intelligence (AI). He joins us to share his work in his paper, Why Johnny Can't Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts. The discussion also explores lessons learned and achievements related to BotDesigner, a tool for creating chatbots.

8 Aug 2023 · 48 min

Automated Peer Review

In this episode, we are joined by Ryan Liu, a Computer Science graduate of Carnegie Mellon University. Ryan will begin his Ph.D. program at Princeton University this fall, focusing on the intersection of large language models and how humans think. Ryan joins us to discuss his research titled "ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing".

31 July 2023 · 36 min

Prompt Refusal

The creators of large language models impose restrictions on some of the types of requests one might make of them. LLMs commonly refuse to give advice on committing crimes, to produce adult content, or to respond with any details about a variety of sensitive subjects. As with any content filtering system, this produces both false positives and false negatives. Today's interview with Max Reuter and William Schulze discusses their paper "I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models". In this work, they explore what types of prompts get refused and build a machine learning classifier adept at predicting whether a particular prompt will be refused.

24 July 2023 · 44 min

A Long Way Till AGI

Our guest today is Maciej Świechowski. Maciej is affiliated with QED Software and QED Games. He has a Ph.D. in Systems Research from the Polish Academy of Sciences. Maciej joins us to discuss findings from his study, Deep Learning and Artificial General Intelligence: Still a Long Way to Go.

18 July 2023 · 37 min

Brain Inspired AI

Today on the show, we are joined by Lin Zhao and Lu Zhang. Lin is a Senior Research Scientist at United Imaging Intelligence, while Lu is a Ph.D. candidate in the Department of Computer Science and Engineering at the University of Texas. They shared findings from their work When Brain-inspired AI Meets AGI. Lin and Lu began by discussing the connections between the brain and neural networks, covering both the similarities and the differences, and whether advances in neural networks could plausibly lead to AGI. They explained how a deeper understanding of the brain can help drive more robust artificial intelligence systems, how the brain inspired popular machine learning architectures like transformers, and how AI models might learn alignment from the human brain. Finally, they contrasted the brain's low energy usage with that of high-end computers and asked whether computers can become more energy efficient.

11 July 2023 · 36 min

Computable AGI

On today's show, we are joined by Michael Timothy Bennett, a Ph.D. student at the Australian National University. Michael's research is centered around Artificial General Intelligence (AGI), specifically the mathematical formalism of AGIs. He joins us to discuss findings from his study, Computable Artificial General Intelligence.

3 July 2023 · 36 min

AGI Can Be Safe

We are joined by Koen Holtman, an independent AI researcher focused on AI safety and the founder of Holtman Systems Research, a research company based in the Netherlands. Koen opened the conversation with his take on an AI apocalypse in the coming years. He discussed the obedience problem in AI models and what a safe form of obedience looks like, then explained the concept of a Markov Decision Process (MDP) and how it is used to build machine learning models. Koen described the problem of AGIs that do not allow changes to their utility function after deployment, and shared an alternative approach to solving it. He covered how to safely engineer AGI systems now and in the future, how to implement safety layers on AI models, the ultimate goal of a safe AI system, and how to check that an AI system is indeed safe. He closed by discussing the intersection between large language models (LLMs) and MDPs, and the key ingredients needed to scale current AI implementations.
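For readers unfamiliar with the MDP formalism mentioned in this episode, a minimal value-iteration sketch over a toy two-state MDP is shown below. The states, actions, probabilities, and rewards are invented for illustration and are not from the episode; the Bellman update itself is standard.

```python
# transitions[state][action] = list of (probability, next_state, reward)
# A toy MDP: from s0 you can "go" toward the rewarding state s1, which
# pays 2.0 per step if you "stay" there.
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}

def value_iteration(transitions, gamma=0.9, tol=1e-6):
    """Compute optimal state values V*(s) by iterating the Bellman update
    V(s) <- max_a sum_{s'} P(s'|s,a) * (R + gamma * V(s'))."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

With discount 0.9, staying in s1 forever is worth 2.0 / (1 - 0.9) = 20, and s0's value approaches that from below.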

26 June 2023 · 45 min

AI Fails on Theory of Mind Tasks

Tomer Ullman, an assistant professor of Psychology at Harvard University, joins us. Tomer discussed theory of mind and whether machines can genuinely pass tests of it. Using variations of the Sally-Anne test and the Smarties tube test, he explained how LLMs can fail theory-of-mind tasks.

19 June 2023 · 52 min
