#7 - Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn’t all bad

The scientific revolution in the 16th century was one of the biggest societal shifts in human history, driven by the discovery of new and better methods of figuring out who was right and who was wrong.

Julia Galef - a well-known writer and researcher focused on improving human judgment, especially about high-stakes questions - believes that if we could again develop new techniques to predict the future, resolve disagreements and make sound decisions together, it could dramatically improve the world across the board. We brought her in to talk about her ideas.

This interview complements a new detailed review of whether and how to follow Julia’s career path. Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more.

Julia has hosted the Rationally Speaking podcast since 2010, co-founded the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements.

In our conversation we ended up speaking about a wide range of topics, including:

* Her research on how people can have productive intellectual disagreements.
* Why she once planned to become an urban designer.
* Why she doubts people are more rational than they were 200 years ago.
* What makes her a fan of Twitter (while I think it’s dystopian).
* Whether people should write more books.
* Whether it’s a good idea to run a podcast, and how she grew her audience.
* Why saying you don’t believe X often won’t convince people that you don’t.
* Why she started a PhD in economics but then stopped.
* Whether she would recommend an unconventional career like her own.
* Whether the incentives in the intelligence community actually support sound thinking.
* Whether big institutions will actually pick up new tools for improving decision-making if they are developed.
* How to start out pursuing a career in which you enhance human judgement and foresight.

Get free, one-on-one career advice to help you improve judgement and decision-making

We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. **If you want to work on any of the problems discussed in this episode, find out if our coaching can help you:**

APPLY FOR COACHING

Overview of the conversation

**1m30s** So what projects are you working on at the moment?
**3m50s** How are you working on the problem of expert disagreement?
**6m0s** Is this the same method as the double crux process that was developed at the Center for Applied Rationality?
**10m** Why did the Open Philanthropy Project decide this was a very valuable project to fund?
**13m** Is the double crux process actually that effective?
**14m50s** Is Facebook dangerous?
**17m** What makes for a good life? Can you be mistaken about having a good life?
**19m** Should more people write books?

Episodes (321)

#228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI

If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree...

20 November 2025 · 1h 43min

OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and k...

11 November 2025 · 1h 56min

#227 – Helen Toner on the geopolitics of AGI in China and the Middle East

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. ...

5 November 2025 · 2h 20min

#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes

For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating an...

30 October 2025 · 4h 30min

#225 – Daniel Kokotajlo on what a hyperspeed robot economy might look like

When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we’re probably penetrated by the CCP already, and if they really wanted something, they c...

27 October 2025 · 2h 12min

#224 – There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s ...

2 October 2025 · 2h 31min

Inside the Biden admin’s AI policy approach | Jake Sullivan, Biden’s NSA | via The Cognitive Revolution

Jake Sullivan was the US National Security Advisor from 2021-2025. He joined our friends on The Cognitive Revolution podcast in August to discuss AI as a critical national security issue. We thought i...

26 September 2025 · 1h 5min

#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret?...

15 September 2025 · 1h 46min
