AI lab TL;DR | Carys J. Craig - The Copyright Trap and AI Policy

🔍 In this TL;DR episode, Carys J Craig (Osgoode Professional Development) explains the "copyright trap" in AI regulation, where relying on copyright favors corporate interests over creativity. She challenges misconceptions about copying and property rights, showing how this approach harms innovation and access. Carys offers alternative ways to protect human creativity without falling into this trap.


📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:46] Q1-What is the "Copyright Trap," and why could it harm AI and creativity?

⏲️[10:05] Q2-Can you explain the three routes that lead into the copyright trap and their relevance to AI?

⏲️[22:08] Q3-What alternatives should policymakers consider to protect creators and manage AI?

⏲️[28:45] Wrap-up & Outro


💭 Q1 - What is the "Copyright Trap," and why could it harm AI and creativity?


🗣️ “To turn to copyright law is to turn to really a false friend. The idea that copyright is going to be our friend, is going to help us in this situation (...) it's likely to do more harm than good."

🗣️ “We are imagining increasingly in these policy debates that copyright and protection of copyright owners will be a kind of counterweight to corporate power and to the sort of extractive logics of Big Tech and AI development. I think that that is misguided. And in fact, we're playing into the interests of both the entertainment industries and Big Tech.”

🗣️ "When we run into the copyright trap, this sort of conviction that copyright is going to be the right regulatory tool, we are sort of defining how this technology is going to evolve in a way that I think will backfire and will actually undermine the political objectives of those who are pointing to the inequities and the unfairness behind the technology and the way that it's being developed.”

🗣️ "AI industry, Big Tech industry and the creative industry stakeholders are all, I think, perfectly happy to approach these larger policy questions through the sort of logic of copyright, sort of proprietary logic of ownership, control, exchange in the free market, licensing structures that we're already seeing taking hold."

🗣️ "What we're going to see, I think, if we run into the copyright trap is that certainly smaller developers, but really everyone will be training the technology on incomplete data sets, the data sets that reflect the sort of big packaged data products that have been exchanged for value between the main market actors. So that's going to lessen the quality really of what's going in generally by making it more exclusive and less inclusive."


💭 Q2 - Can you explain the three routes that lead into the copyright trap and their relevance to AI?


🗣️ "The first route that I identify is what's sometimes called the if-value-then-right fallacy. So that's the assumption that if something has value, then there should be or must be some right over it.“

🗣️ "Because something has value, whether economic or social, doesn't mean we should turn it into property that can be owned and controlled through these exclusive rights that we find in copyright law."

🗣️ "The second route that I identify is a sort of obsession with copying and the idea that copying is inherently just a wrongful activity. (...) The reality is that there's nothing inherently wrongful about copying. And in fact, this is how we learn. This is how we create."

🗣️ "One of the clearest routes into the copyright trap is saying, well, you know, you have to make copies of texts in order to train AI. So of course, copyright is implicated. And of course, we have to prevent that from happening without permission. (...) But our obsession with the individuated sort of discrete copies of works behind the scenes is now an anachronism that we really need to let go.”

🗣️ "Using the figure of the artist as a reason to expand copyright control, and assuming that that's going to magically turn into lining the pockets of artists and creators seems to me to be a fallacy and a route into the copyright trap."


💭 Q3 - What alternatives should policymakers consider to protect creators and manage AI?


🗣️ "The health of our cultural environment (...) [should be] the biggest concern and not simply or only protecting creators as a separate class of sort of professional actors."

🗣️ "I think what we could do is shift our copyright policy focus to protecting and encouraging human authorship by refusing to protect AI-generated outputs."

🗣️ "If the outputs of generative AI are substantially similar to works on which the AI was trained, then those are infringing outputs and copyright law will apply to them such that to distribute those infringing copies would produce liability under the system as it currently exists."

🗣️ "There are privacy experts who might be much better placed to say how should we curate or ensure that we regulate the data on which the machines are trained, and I would be supportive of those kinds of interventions at the input stage."

🗣️ “Copyright seems like a tempting way to do it but that's not what it does. And so maybe rather than some of the big collective licensing solutions that are being imagined in this context, we'd be better off thinking about tax solutions, where we properly tax Big Tech and then we use that tax in a way that actually supports the things that we as a society care about, including funding culture and the arts."


📌 About Our Guest

🎙️ Carys J Craig | Osgoode Hall Law School

🌐 Article | The AI-Copyright Trap

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4905118

🌐 Carys J Craig

https://www.osgoode.yorku.ca/faculty-and-staff/craig-carys-j/


Carys is the Academic Director of the Osgoode Professional Development LLM Program in Intellectual Property Law, and recently served as Osgoode’s Associate Dean. A recipient of multiple teaching awards, Carys researches and publishes widely on intellectual property law and policy, with an emphasis on authorship, users’ rights and the public domain.

#AI #ArtificialIntelligence #GenerativeAI

