#122 – Michelle Hutchinson & Habiba Islam on balancing competing priorities and other themes from our 1-on-1 careers advising

One of 80,000 Hours' main services is our free one-on-one careers advising, which we provide to around 1,000 people a year. Today we speak to two of our advisors, who have each spoken to hundreds of people -- including many regular listeners to this show -- about how they might be able to do more good while also having a highly motivating career.

Before joining 80,000 Hours, Michelle Hutchinson completed a PhD in Philosophy at Oxford University and helped launch Oxford's Global Priorities Institute, while Habiba Islam studied politics, philosophy, and economics at Oxford University and qualified as a barrister.

Links to learn more, summary and full transcript.

In this conversation, they cover many topics that recur in their advising calls, and what they've learned from watching advisees’ careers play out:

• What they say when advisees want to help solve overpopulation
• How to balance doing good against other priorities that people have for their lives
• Why it's challenging to motivate yourself to focus on the long-term future of humanity, and how Michelle and Habiba do so nonetheless
• How they use our latest guide to planning your career
• Why you can specialise and take more risk if you're in a group
• Gaps in the effective altruism community it would be really useful for people to fill
• Stories of people who have spoken to 80,000 Hours and changed their career — and whether it went well or not
• Why trying to have impact in multiple different ways can be a mistake

The episode is split into two parts: the first section on The 80,000 Hours Podcast, and the second on our new show 80k After Hours. This is a shameless attempt to encourage listeners to our first show to subscribe to our second feed.

That second part covers:

• Whether simply encouraging a young person to aspire to more than they currently do is one of the most impactful ways to spend half an hour
• How much impact the one-on-one team has, the biggest challenges they face as a group, and different paths they could have gone down
• Whether giving general advice is a doomed enterprise

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:24)
  • Cause prioritization (00:09:14)
  • Unexpected outcomes from 1-1 advice (00:18:10)
  • Making time for thinking about these things (00:22:28)
  • Balancing different priorities in life (00:26:54)
  • Gaps in the effective altruism space (00:32:06)
  • Plan change vignettes (00:37:49)
  • How large a role the 1-1 team is playing (00:49:04)
  • What about when our advice didn’t work out? (00:55:50)
  • The process of planning a career (00:59:05)
  • Why longtermism is hard (01:05:49)


Want to get free one-on-one advice from our team? We're here to help.

We’ve helped thousands of people formulate their plans and put them in touch with mentors.

We've expanded our capacity to deliver one-on-one meetings, so we're keen to help more people than ever before. If you're a regular listener to the show, we're especially likely to want to speak with you.

Learn about and apply for advising.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

