AI lab TL;DR | Anna Tumadóttir - Rethinking Creator Consent in the Age of AI
🔍 In this TL;DR episode, Anna Tumadóttir (Creative Commons) discusses how the conversation around creator consent and AI has evolved, reshaping perspectives on openness and highlighting the challenge of balancing creator choice against the risks of misuse. She examines the limitations of blunt opt-out mechanisms like the one in the EU AI Act, the implications for marginalized communities and open access, and the need for nuanced preference signals that preserve openness while respecting creators' intentions in the age of generative AI.


📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:38] Q1-How has the conversation around creator consent and AI evolved in the past few years?

⏲️[05:42] Q2-How has the tension between openness and creator choice played out so far, and what lessons can be learned from this tension?

⏲️[11:02] Q3-How can we ensure that marginalized or underrepresented communities maintain agency over their contributions to the commons?

⏲️[19:03] Wrap-up & Outro


💭 Q1 - How has the conversation around creator consent and AI evolved in the past few years?


🗣️ "When AI became mainstream, most creators couldn’t have anticipated how their works might one day be used by machines—some uses align with their intentions, but others do not."

🗣️ "Individuals want more choice over how their work is used, and without tailored options, some are considering paywalls or not publishing at all."

🗣️ "The EU AI Act’s opt-out mechanism is a blunt instrument—it’s just a 'no,' not a nuanced reflection of creators’ varied preferences."

🗣️ "Creators may object to large companies using their works for AI training but be fine with nonprofits or research-focused uses, showing the need for more nuanced tools."

🗣️ "We’re focusing on developing 'preference signals'—mechanisms that let creators communicate specific preferences for how their work is used in AI models."


💭 Q2 - How has the tension between openness and creator choice played out so far, and what lessons can be learned from this tension?


🗣️ "Scientists and researchers who traditionally embraced open access are now reconsidering, fearing that commercial AI providers are exploiting their work."

🗣️ "Creators who once freely shared their work under CC licenses are now hesitant, either because they misunderstand AI training risks or feel exposed."

🗣️ "The worst outcome of this tension is less openness overall—creators retreating behind paywalls or choosing not to publish at all."

🗣️ "A perception persists in the open-source AI community that CC-licensed works are 'safe' to use, but creators’ motivations for sharing openly years ago don’t always align with today’s AI landscape."

🗣️ "To preserve openness while respecting creator intentions, we need mechanisms that enable a 'no unless' approach—minimizing restrictions while maximizing use."


💭 Q3 - How can we ensure that marginalized or underrepresented communities maintain agency over their contributions to the commons?


🗣️ "Generative AI amplifies existing inequalities because it demands infrastructure like electricity, internet, and computing power—resources many regions lack."

🗣️ "Even if everyone had equal internet access, a one-size-fits-all approach to technology wouldn’t work due to local contexts and different needs."

🗣️ "Traditional knowledge should be exempt from broad data mining rights, allowing communities to explicitly give or revoke permissions for its use in AI training."

🗣️ "We need public AI infrastructures that ensure diversity and regional perspectives while maintaining communities’ agency over their contributions."

🗣️ "To prevent lopsided development, policies must go beyond tools like preference signals and address broader governance and societal frameworks."


📌 About Our Guest

🎙️ Anna Tumadóttir | Creative Commons

🌐 Article | Questions for Consideration on AI & the Commons

https://creativecommons.org/2024/07/24/preferencesignals/

🌐 Anna Tumadóttir

https://creativecommons.org/person/annacreativecommons-org/


Anna is the CEO of Creative Commons, an international nonprofit organization that empowers people to grow and sustain the thriving commons of shared knowledge and culture.

#AI #ArtificialIntelligence #GenerativeAI