AI lab TL;DR | Mark Lemley - How Generative AI Disrupts Traditional Copyright Law

🔍 In this TL;DR episode, Mark Lemley (Stanford Law School) discusses how generative AI challenges traditional copyright doctrines, such as the idea-expression dichotomy and substantial similarity test, and explores the evolving role of human creativity in the age of AI.

📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[00:54] Q1-How does genAI challenge traditional copyright doctrines and will this lead to an evolution of copyright?
⏲️[03:58] Q2-Can we expect new forms of legal recognition or protection for prompts?
⏲️[06:13] Q3-Are current copyright rules able to address authorship in genAI works or do we need new legal categories?
⏲️[08:00] Wrap-up & Outro

💭 Q1 - How does genAI challenge traditional copyright doctrines and will this lead to an evolution of copyright?

🗣️ "Copyright law has always tried to protect creative expression but is careful not to protect the idea behind a work."
🗣️ "Generative AI changes the normal economics and dynamics of creation by doing the hard work for us, like making the painting or doing the actual brushstrokes."
🗣️ "If copyright law doesn’t protect the expression created by AI rather than by a person, the question is, what, if anything, is there to copyright?"
🗣️ "Generative AI blows up the substantial similarity test because it’s unclear whether two similar works came from the same prompt or if the AI just made the same thing."
🗣️ "I might copy your prompt, input it into generative AI, and get a different output—making similarity no longer the evident marker of copying."

💭 Q2 - Can we expect new forms of legal recognition or protection for prompts?

🗣️ "We're still litigating whether the material generated by AI can be copyrighted, but we may ultimately say yes, as with photography 150 years ago."
🗣️ "Courts may get comfortable with the idea that structuring the prompt and iterating it is a form of creativity that leads to the final output."
🗣️ "In early photography, we gave copyright protection even though the machine made the image, because human judgment helped determine the outcome."
🗣️ "Prompt engineering could become more sophisticated, leading courts to see creativity in how prompts are structured and refined."
🗣️ "Sometimes I just ask a very simple question, and if that's all I contribute, I’m not sure there’s any protection."

💭 Q3 - Are current copyright rules able to address authorship in genAI works or do we need new legal categories?

🗣️ "There may be something around the creativity of prompts that will matter, but we're not there yet in terms of case law."
🗣️ "The assumption that 'I made a movie, I wrote text, so I get copyright in that work' is going to be called into question in the generative AI context."
🗣️ "Movie studios or video game companies that use AI to save money might be shocked when other people are free to copy AI-generated backgrounds."
🗣️ "Even if we get copyright protection for AI outputs, it will occupy a weird middle ground that feels different from what we’re used to."
🗣️ "There’s going to be pressure to change the law to make it align more with what copyright industries have been comfortable with, but it won’t be easy."

📌 About Our Guest
🎙️ Mark Lemley | Stanford Law School
🌐 Article | How Generative AI Turns Copyright Upside Down
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4517702
🌐 Mark Lemley
https://law.stanford.edu/mark-a-lemley/

Mark is the William H. Neukom Professor of Law at Stanford Law School and the Director of the Stanford Program in Law, Science and Technology. He teaches intellectual property, patent law, trademark law, antitrust, the law of robotics and AI, video game law, and remedies, and he is the author of 11 books and 218 articles.

Episodes (37)

AI lab TL;DR | Joan Barata - Transparency Obligations for All AI Systems

🔍 In this TL;DR episode, Joan explains how Article 50 of the EU AI Act sets out high-level transparency obligations for AI developers and deployers—requiring users to be informed when they interact w...

10 Dec 2025 · 17 min

AI lab TL;DR | Aline Larroyed - The Fallacy Of The File

🔍 In this episode, Caroline and Aline unravel why the popular idea of “AI memorisation” leads policymakers down the wrong path—and how this metaphor obscures what actually happens inside large langua...

27 Nov 2025 · 7 min

AI lab TL;DR | Anna Mills and Nate Angell - The Mirage of Machine Intelligence

🔍 In this TL;DR episode, Anna and Nate unpack why calling AI outputs “hallucinations” misses the mark—and introduce “AI Mirage” as a sharper, more accurate metaphor. From scoring alternative terms to...

26 May 2025 · 20 min

AI lab TL;DR | Emmie Hine - Can Europe Lead the Open-Source AI Race?

🔍 In this TL;DR episode, Emmie Hine (Yale Digital Ethics Center) makes the case for Europe’s leadership in open-source AI—thanks to strong infrastructure, multilingual data, and regulatory clarity. W...

12 May 2025 · 11 min

AI lab TL;DR | Milton Mueller - Why Regulating AI Misses the Point

🔍 In this TL;DR episode, Milton Mueller (Georgia Institute of Technology, School of Public Policy) argues that what we call “AI” is really just part of a broader digital ecosystem. Instead of vagu...

21 Apr 2025 · 18 min

AI lab TL;DR | Kevin Frazier - How Smarter Copyright Law Can Unlock Fairer AI

🔍 In this TL;DR episode, Kevin Frazier (University of Texas at Austin School of Law) outlines a proposal to realign U.S. copyright law with its original goal of spreading knowledge. The discussion in...

7 Apr 2025 · 16 min

AI lab TL;DR | Paul Keller - A Vocabulary for Opting Out of AI Training and TDM

🔍 In this TL;DR episode, Paul Keller (The Open Future Foundation) outlines a proposal for a common opt-out vocabulary to improve how EU copyright rules apply to AI training. The discussion introduces...

24 March 2025 · 15 min

AI lab TL;DR | João Pedro Quintais - Untangling AI Copyright and Data Mining in EU Compliance

🔍 In this TL;DR episode, João Quintais (Institute for Information Law) explains the interaction between the AI Act and EU copyright law, focusing on text and data mining (TDM). He unpacks key issues ...

3 March 2025 · 25 min
