“Untitled Retrospective and Learnings from AI in Context’s First Two Videos (Draft)” by ChanaMessinger

Note: I used LLMs to draft different parts of this. I've checked almost everything, but there might be some mistakes remaining.

Apologies for posting this on Christmas Eve. I wanted to get this out the door before the end of the year. Questions welcome, and if it's easy to pull metrics to answer them, I will.

Summary

80,000 Hours launched a video program in 2025 focused on longform, cinematic, personality-driven content about AI risks. Our first two longform releases were:

  • We're Not Ready for Superintelligence (the "AI 2027" video): 8.9M views, ~1.4M watch hours
  • If you remember one AI disaster, make it this one (the "MechaHitler" video): 2.7M views, ~419K watch hours

Both videos significantly outperformed our expectations (we'd anticipated 15-50K views for the first). The cost per engagement hour ($0.11 and $0.39 respectively, including staff time) compares favorably to other 80,000 Hours programs.

This post covers: what we spent, what we got, why we think it worked, and what we'd do differently.

The numbers

Costs

  • Direct costs: ~$50K (AI 2027), ~$64K (MechaHitler)
  • Staff hours: ~450 hrs (AI 2027), ~450 hrs (MechaHitler) (Note: I’m assuming it's about the same as for AI 2027; I didn't re-ask people how much time they spent.)
  • Total cost (making some assumptions about how we should incorporate staff [...]

---

Outline:

(00:34) Summary

(01:33) The numbers

(01:36) Costs

(02:16) Timing

(02:40) Results

(03:46) How valuable is a video watch hour?

(04:24) Qualitative Feedback

(04:28) AI 2027

(05:51) MechaHitler

(06:12) YouTube commenters like:

(06:52) What the comments don't like:

(07:17) Qualitative Analysis

(07:21) Why we think AI 2027 did well

(09:56) Why MechaHitler did less well (but still well)

(10:50) Lessons Learned

(10:54) Overall what we think matters

(11:25) Our guess at what's less important (though we're certainly unsure; maybe if we nailed these, we'd see more success)

(12:24) How our production works

(12:43) The timeline

(13:32) Ideation

(14:06) Scripting

(14:57) Shooting

(15:31) Reshoots / Voiceover

(15:45) Editing

(16:06) Launch

(17:00) What we're still figuring out

(17:36) Closing thoughts

---

First published:
December 24th, 2025

Source:
https://forum.effectivealtruism.org/posts/RCRaBYSqBaMzHzTjF/untitled-retrospective-and-learnings-from-ai-in-context-s

---

Narrated by TYPE III AUDIO.

