
1:1 with Brigitte Vézina
In this podcast, Brigitte Vézina (Creative Commons) & the AI lab ‘decrypt’ Artificial Intelligence from a policy making point of view.

📌 Episode Highlights
⏲️ [00:00] Intro
⏲️ [00:58] The TL;DR Perspective
⏲️ [09:45] Q1 - The Deepdive: AI Decrypted | Creative Commons pointed out the link between AI and free and open source software (FOSS), highlighting opportunities and threats. Can you explain this?
⏲️ [15:50] Q2 - The Deepdive: AI Decrypted | Contrary to the original proposal, copyright rules related to transparency and possibly content moderation have been proposed in the AI Act. Is this necessary?
⏲️ [21:58] Q3 - The Deepdive: AI Decrypted | Creative Commons states that using copyright to govern AI is unwise, as it contradicts copyright’s primordial function of allowing human creativity to flourish. What do you mean by that?
⏲️ [29:07] Outro

🗣️ [Article] 28b 4 (c) (...) is ambiguous (...). We need to find a way to achieve the EU AI Act's aim to really increase transparency, but without placing an undue and unreasonable burden on AI developers.
🗣️ Balance is key: there needs to be appropriate limits on copyright protection, if we want the copyright system to fulfil its function of both incentivising creativity and providing access to knowledge. That is the current framework in the EU with the DSM Directive.
🗣️ What we've heard time and again through our consultations: copyright is really just one lens through which we can consider AI, and often copyright is not the right tool to regulate [AI].
🗣️ Copyright is a rather blunt tool that often leads to either black and white or all or nothing solutions. That is dangerous.

📌 About Our Guest
🎙️ Brigitte Vézina | Director of Policy & Open Culture, Creative Commons
𝕏 https://x.com/Brigitte_Vezina
🌐 Supporting Open Source and Open Science in the EU AI Act | Creative Commons
🌐 European Parliament Gives Green Light to AI Act, Moving EU Towards Finalizing the World’s Leading Regulation of AI | Creative Commons
🌐 Exploring Preference Signals for AI Training | Creative Commons
🌐 Update and Next Steps on CC’s AI Community Consultation | Creative Commons
🌐 Better Sharing for Generative AI | Creative Commons
🌐 AI Blog Posts | Creative Commons
🌐 Open Culture Voices | Creative Commons
🌐 Spawning AI
🌐 Brigitte Vézina

Brigitte Vézina is Director of Policy and Open Culture at Creative Commons (CC). She is passionate about all things spanning culture, arts, handicraft, traditions, fashion and, of course, copyright law and policy. Brigitte gets a kick out of tackling the fuzzy legal and policy issues that stand in the way of access, use, re-use and remix of culture, information and knowledge. Before joining CC, she worked for a decade as a legal officer at WIPO and then ran her own consultancy, advising Europeana, SPARC Europe and others on copyright matters. Brigitte is a fellow at the Canadian think tank Centre for International Governance Innovation (CIGI).
17 Oct 2023 · 29min

1:1 with Teresa Nobre
In this podcast, Teresa Nobre (COMMUNIA) & the AI lab ‘decrypt’ Artificial Intelligence from a policy making point of view.

📌 Episode Highlights
⏲️ [00:00] Intro
⏲️ [01:10] The TL;DR Perspective
⏲️ [08:11] Q1 - The Deepdive: AI Decrypted | COMMUNIA’s Policy Paper #15 states that: “The use of copyrighted works as part of the training data is exactly the type of use that was foreseen when the TDM exception was drafted and this has recently been confirmed by the EC in response to a parliamentary question”. Can you clarify that?
⏲️ [11:04] Q2 - The Deepdive: AI Decrypted | COMMUNIA clearly favours transparency when it comes to AI models, but also points out that when it comes to copyrighted material: “Policy makers should not forget that the copyright ecosystem itself suffers from a lack of transparency”. What do you mean by that?
⏲️ [16:41] Q3 - The Deepdive: AI Decrypted | COMMUNIA sees a need to operationalise the TDM opt-out mechanism. You recommend that the EC should play an active role to encourage a fair and balanced approach to opt-out and transparency through a broad stakeholder dialogue. What could that entail?
⏲️ [21:39] Outro

🗣️ Everything would be easier if there was more transparency across the copyright ecosystem itself. (...) There's no place that you can consult that will tell you who are the owners, who are the creators, the title of the work.
🗣️ Machine learning developers (...) will not be able to provide this [copyright] information, because this information is simply not publicly available for the vast majority of works.
🗣️ To demonstrate compliance with copyright law, machine learning developers only need to show that they have respected machine-readable rights reservations.
🗣️ Our recommendation: European Commission, do something that's more towards involving everyone in the solution to the problem.

📌 About Our Guest
🎙️ Teresa Nobre | Legal Director, COMMUNIA
𝕏 https://twitter.com/tenobre
🌐 The AI Act and the quest for transparency (COMMUNIA blog post)
🌐 Policy Paper #15 on Using Copyrighted Works for Teaching the Machine (COMMUNIA)
🌐 Answer given by Commissioner Thierry Breton on behalf of the European Commission to the Parliamentary Question by MEP Emmanuel Maurel (The Left, France)
🌐 Teresa Nobre

Teresa Nobre is the Legal Director of COMMUNIA, an international association that advocates for policies that expand the Public Domain and increase access to and reuse of culture and knowledge. She is an attorney-at-law and is involved in policy work both at the EU level and at the international level, representing COMMUNIA at the World Intellectual Property Organization.
27 Sep 2023 · 22min

1:1 with João Pedro Quintais
In this podcast, João Pedro Quintais (Institute for Information Law, IViR) & the AI lab ‘decrypt’ Artificial Intelligence from a policy making point of view.

📌 Episode Highlights
⏲️ [00:00] Intro
⏲️ [00:00] The TL;DR Perspective
⏲️ [00:00] Q1 - The Deepdive: AI Decrypted | You consider that it is impossible to comply with the transparency obligation to “document and make publicly available a summary of the use of training data protected under copyright law”. Can you explain why?
⏲️ [00:00] Q2 - The Deepdive: AI Decrypted | Can you explain: 1. how you connect the TDM exceptions in the Directive on Copyright in the Digital Single Market to the EU AI Act; and 2. your views on the different schools of thought on this?
⏲️ [00:00] Q3 - The Deepdive: AI Decrypted | You refer to the user safeguards in Article 17 of the Directive on Copyright in the Digital Single Market, e.g. exceptions and freedom of speech. Where do you make the link with the EU AI Act?
⏲️ [00:00] Outro

📌 About Our Guest
🎙️ Dr João Pedro Quintais | Assistant Professor, Institute for Information Law (IViR)
🐦 https://twitter.com/JPQuintais
🌐 Kluwer Copyright Blog | Generative AI, Copyright and the AI Act
🌐 Institute for Information Law (IViR)
🌐 Dr João Pedro Quintais

Dr João Pedro Quintais is Assistant Professor at the University of Amsterdam’s Law School, in the Institute for Information Law (IViR). João notably studies how intellectual property law applies to new technologies, and the implications of copyright law and its enforcement by algorithms for the rights and freedoms of Internet users, the remuneration of creators, and technological development. João is also Co-Managing Editor of the widely read Kluwer Copyright Blog and has published extensively in the area of information law.
6 Sep 2023 · 22min

1:1 with Alina Trapova
In this podcast, Alina Trapova (UCL Faculty of Laws) & the AI lab ‘decrypt’ Artificial Intelligence from a policy making point of view.

📌 Episode Highlights
⏲️ [00:00] Intro
⏲️ [01:07] The TL;DR Perspective
⏲️ [10:05] Q1 - The Deepdive: AI Decrypted | You consider that the way copyright-relevant legislation is currently approached by EU legislators benefits only a limited number of cultural industries. Can you expand on that?
⏲️ [17:34] Q2 - The Deepdive: AI Decrypted | In the AI Act, the European Parliament slipped in Article 28b(4)(c), relating to transparency. Knowing that it is not that obvious to identify what is copyrighted and what isn’t, do you think this can even be done?
⏲️ [22:31] Q3 - The Deepdive: AI Decrypted | You encourage legislators to be cautious when looking at regulating an emerging digital technology. Where do you see a risk of using an elephant gun to kill a fly?
⏲️ [26:47] Outro

📌 About Our Guest
🎙️ Dr Alina Trapova | Lecturer in Intellectual Property Law, UCL Faculty of Laws
🐦 https://twitter.com/alinatrapova
🌐 European Parliament AI Act Position Put to Vote on 14 June 2023
🌐 European Law Review [(2023) 48] | Copyright for AI-Generated Works: A Task for the Internal Market?
🌐 Kluwer Copyright Blog | Copyright for AI-Generated Works: A Task for the Internal Market?
🌐 Institute of Brand and Innovation Law (UCL, University College London)
🌐 Dr Alina Trapova

Dr Alina Trapova is a Lecturer in Intellectual Property Law at University College London (UCL) and a Co-Director of the Institute of Brand and Innovation Law. Alina is one of the Co-Managing Editors of the Kluwer Copyright Blog. Prior to UCL, she worked as an Assistant Professor in Autonomous Systems and Law at the University of Nottingham (UK) and Bocconi University (Italy). Before joining academia, she worked in private practice, as well as at the EU Intellectual Property Office (EUIPO) and the International Federation of the Phonographic Industry (IFPI).
15 Jun 2023 · 28min

AI lab - Teaser
The AI lab podcast launches in June 2023 with the aim of “decrypting” expert analysis to understand Artificial Intelligence from a policy making point of view.
1 Jun 2023 · 34s
