AI lab TL;DR | Paul Keller - A Vocabulary for Opting Out of AI Training and TDM


🔍 In this TL;DR episode, Paul Keller (The Open Future Foundation) outlines a proposal for a common opt-out vocabulary to improve how EU copyright rules apply to AI training. The discussion introduces three clear use cases—TDM, AI training, and generative AI training—to help rights holders express their preferences more precisely. By standardizing terminology across the value chain, the proposal aims to bring legal clarity, promote interoperability, and support responsible AI development.


📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:41] Q1-Why is this vocabulary needed for AI training opt-outs?

⏲️[04:17] Q2-How does it help creators, AI developers, and policymakers, and what are some of the key concepts?

⏲️[11:55] Q3-What are its limitations, and how could it evolve?

⏲️[14:35] Wrap-up & Outro


💭 Q1 - Why is this vocabulary needed for AI training opt-outs?


🗣️ "At the core of the EU copyright framework is... the TDM exceptions – the exceptions for text and data mining that were introduced in the 2019 Copyright Directive."

🗣️ "It ensures that rights holders have some level of control over their works, and it makes sure that the majority of publicly available works are available to innovate on top of, to build new things."

🗣️ "The purpose of such a vocabulary is to provide a common language for expressing rights reservations and opt-outs that are understood in the same way along the entire value chain."

🗣️ "This vocabulary proposal is the outcome of discussions that we had with many stakeholders, including rights holders, AI companies, policymakers, academics, and public interest technologists."


💭 Q2 - How does it help creators, AI developers, and policymakers, and what are some of the key concepts?


🗣️ "At the very core, the idea of vocabulary is that you have some common understanding of language... that terms you use mean the same to other people that you deal with."

🗣️ "We offer these three use cases for people to target their opt-outs... like sort of the Russian dolls: the wide TDM category, within that is AI training, and within that is generative AI training."

🗣️ "If all of these technologies sort of use the same definition of what they are opting out, it becomes interoperable and it becomes also relatively simple to understand on the rights holder side."
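The nested "Russian doll" relationship between the three use cases implies that reserving a broader category also covers the narrower ones. As a hypothetical sketch (the category identifiers and implication logic below are illustrative assumptions, not the vocabulary's normative definitions), the idea can be modeled like this:

```python
# Illustrative sketch of the nested opt-out categories described above.
# Nesting: generative AI training ⊂ AI training ⊂ TDM.
# The string identifiers are hypothetical, not from the proposal itself.
PARENT = {
    "generative-ai-training": "ai-training",
    "ai-training": "tdm",
    "tdm": None,  # TDM is the widest category
}

def is_opted_out(declared: set, use: str) -> bool:
    """A reservation on a broader category covers every narrower use."""
    while use is not None:
        if use in declared:
            return True
        use = PARENT[use]  # walk outward to the enclosing category
    return False

# A rights holder who reserves TDM implicitly reserves both narrower uses.
print(is_opted_out({"tdm"}, "generative-ai-training"))  # True
# Reserving only generative AI training does not block plain TDM.
print(is_opted_out({"generative-ai-training"}, "tdm"))  # False
```

If every intermediary and model provider applies the same nesting, a single declared reservation is interpreted identically along the entire value chain, which is the interoperability point made above.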


💭 Q3 - What are its limitations, and how could it evolve?


🗣️ "The biggest limitation is... we need to see if this lands in reality and stakeholders start working with this."

🗣️ "These information intermediaries... essentially convey the information from rights holders to model providers—then it has a chance to become something that structures this field."

🗣️ "It is designed as a sort of very simple, relatively flexible approach that makes it expandable."


📌 About Our Guest

🎙️ Paul Keller | The Open Future Foundation

🌐 Article | A Vocabulary for opting out of AI training and other forms of TDM

https://openfuture.eu/wp-content/uploads/2025/03/250307_Vocabulary_for_opting_out_of_AI_training_and_other_forms_of_TDM.pdf

🌐 Paul Keller

https://www.linkedin.com/in/paulkeller/


Paul Keller is the co-founder and Director of Policy at the Open Future Foundation, a European nonprofit organization. He has extensive experience as a media activist, open policy advocate, and systems architect, striving to improve access to knowledge and culture.

#AI #ArtificialIntelligence #GenerativeAI
