AI lab TL;DR | Jacob Mchangama - Are AI Chatbot Restrictions Threatening Free Speech?

🔍 In this TL;DR episode, Jacob Mchangama (The Future of Free Speech & Vanderbilt University) discusses the high rate of AI chatbot refusals to generate content for controversial prompts, examining how this may conflict with the principles of free speech and access to diverse information.


📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:51] Q1-How does the high rate of refusal by chatbots to generate content conflict with the principles of free speech and access to information?

⏲️[06:53] Q2-Could AI chatbot self-censorship conflict with the systemic risk provisions of the Digital Services Act (DSA)?

⏲️[10:20] Q3-What changes would you recommend to better align chatbot moderation policies with free speech protections?

⏲️[15:18] Wrap-up & Outro


💭 Q1 - How does the high rate of refusal by chatbots to generate content conflict with the principles of free speech and access to information?


🗣️ "This is the first time in human history that new communications technology does not solely depend on human input, like the printing press or radio."

🗣️ "Limiting or restricting the output and even the ability to make prompts will necessarily affect the underlying capability to reinforce free speech, and especially access to information."

🗣️ "If I interact with an AI chatbot, it's me and the AI system, so it seems counterintuitive that the restrictions on AI chatbots are more wide-ranging than those on social media."

🗣️ "Would it be acceptable to ordinary users to say, you're writing a document on blasphemy, and then Word says, 'I can't complete that sentence because it violates our policies'?"

🗣️ "The boundary between freedom of speech being in danger and freedom of thought being affected is a very narrow one."

🗣️ "Under international human rights law, freedom of thought is absolute, but algorithmic restrictions risk subtly interfering with that freedom.(...) These restrictions risk being tentacles into freedom of thought, subtly guiding us in ways we might not even notice."


💭 Q2 - Could AI chatbot self-censorship conflict with the systemic risk provisions of the Digital Services Act (DSA)?


🗣️ "The AI act includes an obligation to assess and mitigate systemic risk, which could be relevant here regarding generative AI’s impact on free expression."

🗣️ "The AI act defines systemic risk as a risk that is specific to the high-impact capabilities of general-purpose AI models that could affect public health, safety, or fundamental rights."

🗣️ "The question is whether the interpretation under the AI act would lean more in a speech protective or a speech restrictive manner."

🗣️ "Overly broad restrictions could undermine freedom of expression in the Charter of Fundamental Rights, which is part of EU law."

🗣️ "My instinct is that the AI act would likely lean in a more speech-restrictive way, but it's too early to say for certain."


💭 Q3 - What changes would you recommend to better align chatbot moderation policies with free speech protections?


🗣️ "Let’s use international human rights law as a benchmark—something most major social media platforms commit to on paper but don’t live up to in practice."

🗣️ "We showed that major social media platforms' hate speech policies have undergone extensive scope creep over the past decade, which does not align with international human rights standards."

🗣️ "It's conceptually more difficult to apply international human rights standards to an AI chatbot because my interaction is private, unlike public speech."

🗣️ "We should avoid adopting a 'harm-oriented' principle to AI chatbots, especially when dealing with disinformation and misinformation, which is often protected under freedom of expression."

🗣️ "It's important to maintain an iterative process with AI systems, where humans remain responsible for how we use and share information, rather than placing all the responsibility on the chatbot."


📌 About Our Guest

🎙️ Jacob Mchangama | The Future of Free Speech & Vanderbilt University

𝕏 https://x.com/@JMchangama

🌐 Article | AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem

https://theconversation.com/ai-chatbots-refuse-to-produce-controversial-output-why-thats-a-free-speech-problem-226596

🌐 The Future of Free Speech

https://futurefreespeech.org

🌐 Jacob Mchangama

http://jacobmchangama.com


Jacob Mchangama is the Executive Director of The Future of Free Speech and a Research Professor at Vanderbilt University. He is also a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE) and author of “Free Speech: A History From Socrates to Social Media”.

