AI lab TL;DR | Jurgen Gravestein - The Intelligence Paradox

🔍 In this TL;DR episode, Jurgen Gravestein (Conversation Design Institute) joins the AI lab to discuss his Substack blog post on the ‘Intelligence Paradox’

📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[01:08] Q1-The ‘Intelligence Paradox’:
How does the language used to describe AI lead to misconceptions and the so-called ‘Intelligence Paradox’?
⏲️[05:36] Q2-‘Conceptual Borrowing’:
What is ‘conceptual borrowing’ and how does it impact public perception and understanding of AI?
⏲️[10:04] Q3-Human vs AI ‘Learning’:
Why is it misleading to use the term ‘learning’ for AI processes, and what does this mean for the future of AI development?
⏲️[14:11] Wrap-up & Outro

💭 Q1-The ‘Intelligence Paradox’

🗣️ What’s really interesting about chatbots and AI is that for the first time in human history, we have technology talking back at us, and that's doing a lot of interesting things to our brains.
🗣️ In the 1960s, there was an experiment with the chatbot ELIZA, which was a very simple, pre-programmed chatbot (...) And it showed that when people are talking to technology, and technology talks back, we’re quite easily fooled by that technology. And that has to do with language fluency and how we perceive language.
🗣️ Language is a very powerful tool (...) there’s a correlation between perceived intelligence and language fluency (...) a social phenomenon that I like to call the ‘Intelligence Paradox’. (...) people perceive you as less smart, just because you are less fluent in how you’re able to express yourself.
🗣️ That also works the other way around with AI and chatbots (...). We saw that chatbots can now respond in extremely fluent language very flexibly. (...) And as a result of that, we perceive them as pretty smart. Smarter than they actually are, in fact.
🗣️ We tend to overestimate the capabilities of [AI] systems because of their language fluency, and we perceive them as smarter than they really are, and it leads to confusion (...) about how the technology actually works.

💭 Q2-‘Conceptual Borrowing’

🗣️ A research article (...) from two professors, Luciano Floridi and Anna Nobre, (...) explaining (...) conceptual borrowing [states]: “through extensive conceptual borrowing, AI has ended up describing computers anthropomorphically, as computational brains with psychological properties, while brain and cognitive sciences have ended up describing brains and minds computationally and informationally, as biological computers."
🗣️ Similar to the Intelligence Paradox, it can lead to confusion (...) about whether we underestimate or overestimate the impact of a certain technology. And that, in turn, informs how we make policies or regulate certain technologies now or in the future.
🗣️ A small example of conceptual borrowing would be the term “hallucinations”. (...) a common term to describe when systems like ChatGPT say something that sounds very authoritative and sounds very correct and precise, but is actually made up, or partly confabulated. (...) this actually has nothing to do with real hallucinations [but] with statistical patterns that don’t match up with the question that’s being asked.

💭 Q3-Human vs AI ‘Learning’

🗣️ If you talk about conceptual borrowing, “machine learning” is a great example of that, too. (...) there's a very (...) big discrepancy between what learning is in the psychological terms and the biological terms when we talk about learning, and then when it comes to these systems.
🗣️ So if you actually start to be convinced that LLMs are as smart and learn as quickly as people or children (...) you could be over-attributing qualities to these systems.
🗣️ [ARC-AGI challenge:] a $1 million USD prize pool for the first person that can build an AI to solve a new benchmark that (...) consists of very simple puzzles that a five-year-old (...) could basically solve. (...) it hasn't been solved yet.
🗣️ That’s, again, an interesting way to look at learning, and especially where these systems fall short. [AI] can reason based on (...) the data that they've seen, but as soon as it (...) goes out of (...) what they've seen in their data set, they will struggle with whatever task they are being asked to perform.

📌 About Our Guest
🎙️ Jurgen Gravestein | Sr Conversation Designer, Conversation Design Institute (CDI)
𝕏 https://x.com/@gravestein1989
🌐 Blog Post | The Intelligence Paradox
https://jurgengravestein.substack.com/p/the-intelligence-paradox
🌐 Newsletter
https://jurgengravestein.substack.com
🌐 CDI
https://www.conversationdesigninstitute.com
🌐 Profs. Floridi & Nobre's article
http://dx.doi.org/10.2139/ssrn.4738331
🌐 Jurgen Gravestein
https://www.linkedin.com/in/jurgen-gravestein

Jurgen Gravestein is a writer, conversation designer, and AI consultant. He works at CDI, the world’s leading training and certification institute for conversational AI, and also runs a successful Substack newsletter, “Teaching computers how to talk”.
