Professor Shannon Vallor on the AI Mirror

What if we saw Artificial Intelligence as a mirror rather than as a form of intelligence?

That’s the subject of a fabulous new book by Professor Shannon Vallor, who is my guest on this episode.

In our discussion, we explore how artificial intelligence reflects not only our technological prowess but also our ethical choices, biases, and the collective values that shape our world.

We also discuss how AI systems mirror our societal flaws, raising critical questions about accountability, transparency, and the role of ethics in AI development.

Shannon helps me to examine the risks and opportunities presented by AI, particularly in the context of decision-making, privacy, and the potential for AI to influence societal norms and behaviours.

This episode offers a thought-provoking exploration of the intersection between technology and ethics, urging us to consider how we can steer AI development in a direction that aligns with our shared values.

Guest Biography

Prof. Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy.

She is Director of the Centre for Technomoral Futures in EFI, and co-Director of the BRAID (Bridging Responsible AI Divides) programme, funded by the Arts and Humanities Research Council. Professor Vallor's research explores how new technologies, especially AI, robotics, and data science, reshape human moral character, habits, and practices.

Her work includes advising policymakers and industry on the ethical design and use of AI. She is a standing member of the One Hundred Year Study of Artificial Intelligence (AI100) and a member of the Oversight Board of the Ada Lovelace Institute. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network and the 2022 Covey Award from the International Association of Computing and Philosophy.

She is a former Visiting Researcher and AI Ethicist at Google. In addition to her many articles and published educational modules on the ethics of data, robotics, and artificial intelligence, she is the author of the book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking (Oxford University Press, 2024).

Links
Shannon's website: https://www.shannonvallor.net/
The AI Mirror: https://global.oup.com/academic/product/the-ai-mirror-9780197759066
A Noema essay by Shannon on the dangers of AI: https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/
A New Yorker feature on the book: https://www.newyorker.com/culture/open-questions/in-the-age-of-ai-what-makes-people-unique
The AI Mirror as one of the FT's technology books of the summer: https://www.ft.com/content/77914d8e-9959-4f97-98b0-aba5dffd581c
The FT review of The AI Mirror: https://www.ft.com/content/67d38081-82d3-4979-806a-eba0099f8011
The Edinburgh Futures Institute: https://efi.ed.ac.uk/
The clip from the movie "Real Genius" that she refers to: https://www.youtube.com/watch?v=wB1X4o-MV6o

AI Generated Timestamped Summary of Key Points:

00:02:30: Introduction to Professor Shannon Vallor and her work.

00:06:15: Discussion on AI as a mirror of societal values.

00:10:45: The ethical implications of AI decision-making.

00:18:20: How AI reflects human biases and the importance of transparency.

00:25:50: The role of ethics in AI development and deployment.

00:33:10: Challenges of integrating AI into human-centred contexts.

00:41:30: The potential for AI to shape societal norms and behaviours.

00:50:15: Professor Vallor’s insights on the future of AI and ethics.

00:58:00: Closing thoughts and reflections on AI’s impact on humanity.


Episodes (368)

Ruth Steinholtz on Ethical Cultures

Ruth Steinholtz is the founder of AretéWork, a company that advises on how to build sustainable, effective ethical cultures in organisations. She was formerly General Counsel of Borealis and has autho...

15 December 2019 · 50 min

Dr Roger Miles on Conduct Risk - what is it & how can we manage it?

Dr Roger Miles is an expert in Conduct. In this episode, I explore with him what that means and how he came to specialise in it. Using stories ranging from Reality TV shows, to the 1902 Hanoi Rat P...

7 December 2019 · 51 min

Tom & Christian's 2nd Human Risk Talk

In this episode, I'm joined again by co-host Tom Hardin. Together we explore Human Risk related stories we've come across that we think are worth diving into in more detail. You can hear Tom's story i...

28 November 2019 · 36 min

Ricardo Pellafone on why Compliance Design isn't an oxymoron

In this episode, I speak to Ricardo Pellafone, the founder of The Broadcat, a Compliance Design Company. He describes himself as "a startup founder focused on making law accessible to non-lawyers thr...

20 November 2019 · 49 min

Tom & Christian's Human Risk Talk

In this episode, I'm launching a new recurring feature on the podcast. In these Human Risk Talks, I'll be joined by co-host Tom Hardin and together we'll explore Human Risk related stories we've come a...

7 November 2019 · 31 min

Tom Hardin on his experience as FBI Informant Tipper X

How does someone become an FBI informant? That's the story my guest Tom Hardin shares with me on this re-recorded episode. Tom and I originally spoke in 2019, but we thought we'd re-record the episo...

31 October 2019 · 52 min

Professor Yuval Feldman on why we should write rules for good people not bad people

We have laws to protect us from the actions of 'bad' people. But why might writing laws for 'bad' people actually be a bad idea? That's what my guest, Professor Yuval Feldman, asks in his research an...

11 October 2019 · 39 min

Preview: The Human Risk Podcast

Coming soon is The Human Risk podcast, looking at how Behavioural Science can help manage the biggest risk facing most organisations: people!

7 October 2019 · 23 sec
