AI lab TL;DR | Anna Mills and Nate Angell - The Mirage of Machine Intelligence

🔍 In this TL;DR episode, Anna and Nate unpack why calling AI outputs “hallucinations” misses the mark—and introduce “AI Mirage” as a sharper, more accurate metaphor. From scoring alternative terms to sparking social media debates, they show how language shapes our assumptions, trust, and agency in the age of generative AI. The takeaway: choosing the right words is a hopeful act of shaping our AI future.


📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:42] Q1-What’s wrong with the term “AI hallucination” — and how does “mirage” help?

⏲️[05:30] Q2-Why did “mirage” stand out among 80+ alternatives?

⏲️[10:30] Q3-How should this shift in language impact educators, journalists, or policymakers?

⏲️[10:10] Wrap-up & Outro


💭 Q1 - What’s wrong with the term “AI hallucination” — and how does “mirage” help?


🗣️ "There's no reason to think that AI is experiencing something, that it has a belief about what's real or what's not." (Anna)

🗣️ "It anthropomorphizes AI, and it also misleads us to think that this might be a technically fixable problem—as a person might take medication for mental illness—that maybe AI could be induced not to hallucinate." (Anna)

🗣️ "I did come up with my own criteria, which included: not implying that AI has intent or consciousness, implying that outputs don't match reality in some way, showing a connection to the patterns in the training data ideally, but also showing that AI can go beyond training data." (Anna)

🗣️ "The words used to describe different technologies can sometimes steer people in directions in relation to them that aren’t really beneficial." (Nate)

🗣️ "Just like how a desert produces a mirage under certain circumstances... It’s the same with AI. There’s a system at play... that can produce a certain situation, which can then be perceived by an observer as possibly misleading, inaccurate, or counterfactual." (Nate)


💭 Q2 - Why did “mirage” stand out among 80+ alternatives?


🗣️ "I actually went through and rated each term numerically on each of those criteria and did kind of a simple averaging of that to see which terms scored the highest." (Anna)

🗣️ "We decided that it was misleading to say 'Data Mirage,' because people would think the problem was in the data... and that’s not the case. So we ditched the 'data' part and just landed on 'AI Mirage'." (Anna)

🗣️ "We kind of realized, as we were discussing 'Mirage,' how important it was that it centered human judgment—and that wasn’t initially one of the criteria." (Anna)

🗣️ "Even when we know how it works and we know it’s wrong, sometimes there’s still that temptation... to say, 'Wow, I think it really nailed it this time.'" (Anna)

🗣️ "We really wanted to encourage this ongoing interrogation of the metaphors we use and the language we use, and how they're affecting our relationship with AI." (Anna)


💭 Q3 - How should this shift in language impact educators, journalists, or policymakers?

🗣️ "How do we build systems and train ourselves to think about how we want to interact with them, stay in control, and still be the ones making judgments and choices?" (Anna)

🗣️ "We are participating in shaping that future, and it’s not over. We don’t have to just capitulate and accept the term that’s used. We don’t have to accept someone’s vision of what AGI is going to be in five years. We’re all shaping this." (Anna)

🗣️ "In a way, it doesn’t really matter what term you end up with—just asking the question of whether 'hallucination' is a useful or accurate term can spark a really interesting and valuable discussion." (Nate)

🗣️ "There are many systemic issues we should be thinking about with AI. But I also believe in the power of the naming—of the words we use to talk about it—as being an important factor in all that." (Nate)

🗣️ "It’s useful for us as humans to have different words for those outputs we deem unexpected, incorrect, or counterfactual. It helps us to talk about when an AI mirages rather than dumping all its outputs into one big undifferentiated basket." (Nate)


📌 About Our Guests

🎙️ Anna Mills | College of Marin

🌐 Anna Mills

linkedin.com/in/anna-mills-oer

🎙️ Nate Angell | Nudgital

🌐 Nate Angell

linkedin.com/in/nateangell


🌐 Article | Are We Tripping? The Mirage of AI Hallucinations

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5127162


Anna is a college writing instructor and a leading advocate for AI literacy in education, building on her combined teaching experience and technical knowledge.

Nate is the founder of Nudgital, a company that builds sustainability and growth at the intersection of communications, community, technology, and strategy.

#AI #ArtificialIntelligence #GenerativeAI
