#7 - Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn’t all bad

The scientific revolution in the 16th century was one of the biggest societal shifts in human history, driven by the discovery of new and better methods of figuring out who was right and who was wrong.

Julia Galef - a well-known writer and researcher focused on improving human judgement, especially about high-stakes questions - believes that if we could again develop new techniques to predict the future, resolve disagreements and make sound decisions together, it could dramatically improve the world across the board. We brought her in to talk about her ideas.

This interview complements a new detailed review of whether and how to follow Julia’s career path. Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more.

Julia has hosted the Rationally Speaking podcast since 2010, co-founded the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements.

In our conversation we ended up speaking about a wide range of topics, including:

* Her research on how people can have productive intellectual disagreements.
* Why she once planned to become an urban designer.
* Why she doubts people are more rational today than they were 200 years ago.
* What makes her a fan of Twitter (while I think it’s dystopian).
* Whether people should write more books.
* Whether it’s a good idea to run a podcast, and how she grew her audience.
* Why saying you don’t believe X often won’t convince people you don’t.
* Why she started a PhD in economics but then stopped.
* Whether she would recommend an unconventional career like her own.
* Whether the incentives in the intelligence community actually support sound thinking.
* Whether big institutions will actually pick up new tools for improving decision-making if they are developed.
* How to start out pursuing a career in which you enhance human judgement and foresight.

Get free, one-on-one career advice to help you improve judgement and decision-making

We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. **If you want to work on any of the problems discussed in this episode, find out if our coaching can help you:**

APPLY FOR COACHING

Overview of the conversation

**1m30s** So what projects are you working on at the moment?
**3m50s** How are you working on the problem of expert disagreement?
**6m** Is this the same method as the double crux process that was developed at the Center for Applied Rationality?
**10m** Why did the Open Philanthropy Project decide this was a very valuable project to fund?
**13m** Is the double crux process actually that effective?
**14m50s** Is Facebook dangerous?
**17m** What makes for a good life? Can you be mistaken about having a good life?
**19m** Should more people write books?

Episodes (317)

#220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years

Ryan Greenblatt — lead author on the explosive paper “Alignment faking in large language models” and chief scientist at Redwood Research — thinks there’s a 25% chance that within four years, AI will b...

8 July 2025 · 2h 50min

#219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand

The era of making AI smarter just by making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different m...

24 June 2025 · 2h 48min

#218 – Hugh White on why Trump is abandoning US hegemony – and that’s probably good

For decades, US allies have slept soundly under the protection of America’s overwhelming military might. Donald Trump — with his threats to ditch NATO, seize Greenland, and abandon Taiwan — seems hell...

12 June 2025 · 2h 48min

#217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress

AI models today have a 50% chance of successfully completing a task that would take an expert human one hour. Seven months ago, that number was roughly 30 minutes — and seven months before that, 15 mi...

2 June 2025 · 3h 47min

Beyond human minds: The bewildering frontier of consciousness in insects, AI, and more

What if there’s something it’s like to be a shrimp — or a chatbot? For centuries, humans have debated the nature of consciousness, often placing ourselves at the very top. But what about the minds of o...

23 May 2025 · 3h 34min

Don’t believe OpenAI’s “nonprofit” spin (emergency pod with Tyler Whitmer)

OpenAI’s recent announcement that its nonprofit would “retain control” of its for-profit business sounds reassuring. But this seemingly major concession, celebrated by so many, is in itself largely me...

15 May 2025 · 1h 12min

The case for and against AGI by 2030 (article by Benjamin Todd)

More and more people have been saying that we might have AGI (artificial general intelligence) before 2030. Is that really plausible? This article by Benjamin Todd looks into the cases for and against...

12 May 2025 · 1h

Emergency pod: Did OpenAI give up, or is this just a new trap? (with Rose Chan Loui)

When attorneys general intervene in corporate affairs, it usually means something has gone seriously wrong. In OpenAI’s case, it appears to have forced a dramatic reversal of the company’s plans to si...

8 May 2025 · 1h 2min
