AI lab TL;DR | Joan Barata - Transparency Obligations for All AI Systems


🔍 In this TL;DR episode, Joan explains how Article 50 of the EU AI Act sets out high-level transparency obligations for AI developers and deployers—requiring users to be informed when they interact with AI or access AI-generated content—while noting that excessive labeling can itself be misleading. She highlights why the forthcoming Code of Practice must focus on clear principles rather than fixed technical solutions, ensuring transparency helps prevent deception without creating confusion in a rapidly evolving technological environment.

📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:33] Q1-What’s the core purpose of Article 50, and why is this 10-month drafting window so critical for the industry?

⏲️[02:31] Q2-What’s the difference between disclosing a chatbot and technically marking AI-generated media?

⏲️[06:27] Q3-What is the inherent danger of "too much transparency" or over-labeling content? How do we prevent the "liar's dividend" and "label fatigue" while still fighting deception?

⏲️[10:00] Q4-If drafters should avoid one rigid technical fix, what’s your top advice for building flexibility into the Code of Practice?

⏲️[13:11] Q5-What is the one core idea you want policymakers to take away from your research?

⏲️[16:45] Wrap-up & Outro

💭 Q1 - What’s the core purpose of Article 50, and why is this 10-month drafting window so critical for the industry?

🗣️ “Article 50 sets only broad transparency rules—so a strong Code of Practice is essential.”

💭 Q2 - What’s the difference between disclosing a chatbot and technically marking AI-generated media?

🗣️ “If there’s a risk of confusion, users must be clearly told they’re interacting with AI.”

💭 Q3 - What is the inherent danger of "too much transparency" or over-labeling content? How do we prevent the "liar's dividend" and "label fatigue" while still fighting deception?

🗣️ “Too much transparency can mislead just as much as too little.”

💭 Q4 - If drafters should avoid one rigid technical fix, what’s your top advice for building flexibility into the Code of Practice?

🗣️ “We should focus on principles, not chase technical solutions that will be outdated in months.”

💭 Q5 - What is the one core idea you want policymakers to take away from your research?

🗣️ “Transparency raises legal, technical, psychological, and even philosophical questions—information alone doesn’t guarantee real agency.”

📌 About Our Guest

🎙️ Joan Barata | Faculdade de Direito - Católica no Porto

🌐 linkedin.com/in/joan-barata-a649876

Joan Barata works on freedom of expression, media regulation, and intermediary liability issues. He is a Visiting Professor at Faculdade de Direito - Católica no Porto and a Senior Legal Fellow at The Future of Free Speech project at Vanderbilt University. He is also a Fellow of the Program on Platform Regulation at the Stanford Cyber Policy Center.

#AI #artificialintelligence #generativeAI

