AI lab TL;DR | Anna Tumadóttir - Rethinking Creator Consent in the Age of AI

🔍 In this TL;DR episode, Anna Tumadóttir (Creative Commons) discusses how the conversation around creator consent and AI has evolved, reshaping perspectives on openness and highlighting the challenge of balancing creator choice against the risks of misuse. She examines the limitations of blunt opt-out mechanisms such as the one in the EU AI Act, the implications for marginalized communities and open access, and the need for nuanced preference signals that preserve openness while respecting creators' intentions in the age of generative AI.


📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:38] Q1 - How has the conversation around creator consent and AI evolved in the past few years?

⏲️[05:42] Q2 - How has the tension between openness and creator choice played out so far, and what lessons can be learned from this tension?

⏲️[11:02] Q3 - Can we ensure that marginalized or underrepresented communities maintain agency over their contributions to the commons?

⏲️[19:03] Wrap-up & Outro


💭 Q1 - How has the conversation around creator consent and AI evolved in the past few years?


🗣️ "When AI became mainstream, most creators couldn’t have anticipated how their works might one day be used by machines—some uses align with their intentions, but others do not."

🗣️ "Individuals want more choice over how their work is used, and without tailored options, some are considering paywalls or not publishing at all."

🗣️ "The EU AI Act’s opt-out mechanism is a blunt instrument—it’s just a 'no,' not a nuanced reflection of creators’ varied preferences."

🗣️ "Creators may object to large companies using their works for AI training but be fine with nonprofits or research-focused uses, showing the need for more nuanced tools."

🗣️ "We’re focusing on developing 'preference signals'—mechanisms that let creators communicate specific preferences for how their work is used in AI models."


💭 Q2 - How has the tension between openness and creator choice played out so far, and what lessons can be learned from this tension?


🗣️ "Scientists and researchers who traditionally embraced open access are now reconsidering, fearing that commercial AI providers are exploiting their work."

🗣️ "Creators who once freely shared their work under CC licenses are now hesitant, either because they misunderstand AI training risks or feel exposed."

🗣️ "The worst outcome of this tension is less openness overall—creators retreating behind paywalls or choosing not to publish at all."

🗣️ "A perception persists in the open-source AI community that CC-licensed works are 'safe' to use, but creators’ motivations for sharing openly years ago don’t always align with today’s AI landscape."

🗣️ "To preserve openness while respecting creator intentions, we need mechanisms that enable a 'no unless' approach—minimizing restrictions while maximizing use."


💭 Q3 - Can we ensure that marginalized or underrepresented communities maintain agency over their contributions to the commons?


🗣️ "Generative AI amplifies existing inequalities because it demands infrastructure like electricity, internet, and computing power—resources many regions lack."

🗣️ "Even if everyone had equal internet access, a one-size-fits-all approach to technology wouldn’t work due to local contexts and different needs."

🗣️ "Traditional knowledge should be exempt from broad data mining rights, allowing communities to explicitly give or revoke permissions for its use in AI training."

🗣️ "We need public AI infrastructures that ensure diversity and regional perspectives while maintaining communities’ agency over their contributions."

🗣️ "To prevent lopsided development, policies must go beyond tools like preference signals and address broader governance and societal frameworks."


📌 About Our Guest

🎙️ Anna Tumadóttir | Creative Commons

🌐 Article | Questions for Consideration on AI & the Commons

https://creativecommons.org/2024/07/24/preferencesignals/

🌐 Anna Tumadóttir

https://creativecommons.org/person/annacreativecommons-org/


Anna is the CEO of Creative Commons, an international nonprofit organization that empowers people to grow and sustain the thriving commons of shared knowledge and culture.

#AI #ArtificialIntelligence #GenerativeAI

Episodes (37)

AI lab TL;DR | Joan Barata - Transparency Obligations for All AI Systems

🔍 In this TL;DR episode, Joan explains how Article 50 of the EU AI Act sets out high-level transparency obligations for AI developers and deployers—requiring users to be informed when they interact w...

10 Dec 2025 · 17 min

AI lab TL;DR | Aline Larroyed - The Fallacy Of The File

🔍 In this episode, Caroline and Aline unravel why the popular idea of "AI memorisation" leads policymakers down the wrong path—and how this metaphor obscures what actually happens inside large langua...

27 Nov 2025 · 7 min

AI lab TL;DR | Anna Mills and Nate Angell - The Mirage of Machine Intelligence

🔍 In this TL;DR episode, Anna and Nate unpack why calling AI outputs "hallucinations" misses the mark—and introduce "AI Mirage" as a sharper, more accurate metaphor. From scoring alternative terms to...

26 May 2025 · 20 min

AI lab TL;DR | Emmie Hine - Can Europe Lead the Open-Source AI Race?

🔍 In this TL;DR episode, Emmie Hine (Yale Digital Ethics Center) makes the case for Europe's leadership in open-source AI—thanks to strong infrastructure, multilingual data, and regulatory clarity. W...

12 May 2025 · 11 min

AI lab TL;DR | Milton Mueller - Why Regulating AI Misses the Point

🔍 In this TL;DR episode, Milton Mueller (Georgia Institute of Technology, School of Public Policy) argues that what we call "AI" is really just part of a broader digital ecosystem. Instead of vagu...

21 Apr 2025 · 18 min

AI lab TL;DR | Kevin Frazier - How Smarter Copyright Law Can Unlock Fairer AI

🔍 In this TL;DR episode, Kevin Frazier (University of Texas at Austin School of Law) outlines a proposal to realign U.S. copyright law with its original goal of spreading knowledge. The discussion in...

7 Apr 2025 · 16 min

AI lab TL;DR | Paul Keller - A Vocabulary for Opting Out of AI Training and TDM

🔍 In this TL;DR episode, Paul Keller (The Open Future Foundation) outlines a proposal for a common opt-out vocabulary to improve how EU copyright rules apply to AI training. The discussion introduces...

24 Mar 2025 · 15 min

AI lab TL;DR | João Pedro Quintais - Untangling AI Copyright and Data Mining in EU Compliance

🔍 In this TL;DR episode, João Quintais (Institute for Information Law) explains the interaction between the AI Act and EU copyright law, focusing on text and data mining (TDM). He unpacks key issues ...

3 Mar 2025 · 25 min
