AI lab TL;DR | Paul Keller - A Vocabulary for Opting Out of AI Training and TDM

🔍 In this TL;DR episode, Paul Keller (The Open Future Foundation) outlines a proposal for a common opt-out vocabulary to improve how EU copyright rules apply to AI training. The discussion introduces three clear use cases—TDM, AI training, and generative AI training—to help rights holders express their preferences more precisely. By standardizing terminology across the value chain, the proposal aims to bring legal clarity, promote interoperability, and support responsible AI development.


📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:41] Q1-Why is this vocabulary needed for AI training opt-outs?

⏲️[04:17] Q2-How does it help creators, AI developers, and policymakers and what are some of the concepts?

⏲️[11:55] Q3-What are its limitations, and how could it evolve?

⏲️[14:35] Wrap-up & Outro


💭 Q1 - Why is this vocabulary needed for AI training opt-outs?


🗣️ "At the core of the EU copyright framework is... the TDM exceptions – the exceptions for text and data mining that were introduced in the 2019 Copyright Directive."

🗣️ "It ensures that rights holders have some level of control over their works, and it makes sure that the majority of publicly available works are available to innovate on top of, to build new things."

🗣️ "The purpose of such a vocabulary is to provide a common language for expressing rights reservations and opt-outs that are understood in the same way along the entire value chain."

🗣️ "This vocabulary proposal is the outcome of discussions that we had with many stakeholders, including rights holders, AI companies, policymakers, academics, and public interest technologists."


💭 Q2 - How does it help creators, AI developers, and policymakers, and what are some of the concepts?


🗣️ "At the very core, the idea of vocabulary is that you have some common understanding of language... that terms you use mean the same to other people that you deal with."

🗣️ "We offer these three use cases for people to target their opt-outs... like Russian dolls: the wide TDM category, within that AI training, and within that generative AI training."

🗣️ "If all of these technologies sort of use the same definition of what they are opting out, it becomes interoperable and it becomes also relatively simple to understand on the rights holder side."


💭 Q3 - What are its limitations, and how could it evolve?


🗣️ "The biggest limitation is... we need to see if this lands in reality and stakeholders start working with this."

🗣️ "These information intermediaries... essentially convey the information from rights holders to model providers—then it has a chance to become something that structures this field."

🗣️ "It is designed as a sort of very simple, relatively flexible approach that makes it expandable."


📌 About Our Guest

🎙️ Paul Keller | The Open Future Foundation

🌐 Article | A Vocabulary for opting out of AI training and other forms of TDM

https://openfuture.eu/wp-content/uploads/2025/03/250307_Vocabulary_for_opting_out_of_AI_training_and_other_forms_of_TDM.pdf

🌐 Paul Keller

https://www.linkedin.com/in/paulkeller/


Paul Keller is the co-founder and Director of Policy at the Open Future Foundation, a European nonprofit organization. He has extensive experience as a media activist, open policy advocate, and systems architect striving to improve access to knowledge and culture.

#AI #ArtificialIntelligence #GenerativeAI
