AGI Can Be Safe
Data Skeptic · 26 June 2023


We are joined by Koen Holtman, an independent AI researcher focusing on AI safety. Koen is the Founder of Holtman Systems Research, a research company based in the Netherlands.

Koen started the conversation with his take on the likelihood of an AI apocalypse in the coming years. He discussed the obedience problem in AI models and what a safe form of obedience would look like.

Koen explained the concept of a Markov Decision Process (MDP) and how it is used to build machine learning models.
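For readers unfamiliar with the formalism, here is a minimal illustrative sketch of an MDP solved by value iteration. The states, transition probabilities, and rewards below are invented for illustration and are not from the episode:

```python
import numpy as np

# A tiny made-up MDP: 3 states, 2 actions, discount factor gamma.
n_states, n_actions, gamma = 3, 2, 0.9

# P[a][s, s'] = probability of moving from state s to s' under action a
P = np.array([
    [[0.8, 0.2, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]],   # action 0: mostly stay put
    [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [0.0, 0.0, 1.0]],   # action 1: move deterministically "right"
])
R = np.array([0.0, 0.0, 1.0])  # reward for landing in each state

V = np.zeros(n_states)
for _ in range(200):
    # Q[a, s] = expected discounted return of taking action a in state s
    Q = np.array([P[a] @ (R + gamma * V) for a in range(n_actions)])
    V = Q.max(axis=0)          # Bellman optimality update

policy = Q.argmax(axis=0)      # greedy policy w.r.t. the converged values
```

With these numbers, the optimal policy always moves "right" toward the rewarding absorbing state, and the value of that state converges to R / (1 - gamma) = 10.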

Koen spoke about the problem that AGIs, once deployed, may not allow their utility function to be changed, and shared an alternative approach to solving it. He discussed how to engineer AGI systems safely, both now and in the future, and how to implement safety layers on AI models.
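The idea of a safety layer that keeps the objective changeable after deployment can be sketched as a wrapper around a policy. This is a hypothetical illustration, not Koen's actual design; the class and method names are invented:

```python
class SafetyLayer:
    """Illustrative wrapper: operators can swap the reward function or halt
    the agent after deployment, and the layer applies the change rather than
    letting the underlying policy resist it."""

    def __init__(self, policy, reward_fn):
        self.policy = policy          # underlying learned policy
        self.reward_fn = reward_fn    # current utility/reward function
        self.stopped = False

    def update_reward(self, new_reward_fn):
        # Post-deployment change of objective, applied unconditionally.
        self.reward_fn = new_reward_fn

    def stop(self):
        self.stopped = True

    def act(self, state):
        if self.stopped:
            return "noop"             # safe default action when halted
        return self.policy(state, self.reward_fn)


# Usage with a toy greedy policy over two actions:
toy_policy = lambda state, reward_fn: max(["left", "right"],
                                          key=lambda a: reward_fn[a])
layer = SafetyLayer(toy_policy, {"left": 1.0, "right": 0.0})
```

After `layer.update_reward({"left": 0.0, "right": 1.0})` the agent's behavior switches immediately, and after `layer.stop()` it only emits the safe default.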

Koen discussed the ultimate goal of a safe AI system and how to verify that an AI system is indeed safe. He discussed the intersection between large language models (LLMs) and MDPs, and shared the key ingredients for scaling current AI implementations.

This episode is sourced from an open RSS feed and is not published by Podme. It may contain advertising.

Episodes (601)

Student Spotlight: Aaron Payne, Data Analyst


Aaron Payne, an MBA student at Georgia Tech studying business analytics and a Senior Insights Analyst at Chick-fil-A, joins Kyle Polich to talk about turning analytics into decisions that matter. They...

1 May 25min

The Future is Agentic in Recommender Systems


Kyle Polich sits down with Yashar Deldjoo, research scientist and Associate Professor at the Polytechnic University of Bari, to explore how recommender systems have evolved and why trustworthiness mat...

25 Apr 49min

Book Ratings and Recommendations


Goodreads star ratings can be misleading as measures of "book quality," and research from Hannes Rosenbusch suggests that for many professionally published books, differences between readers often mat...

27 March 39min

Disentanglement and Interpretability in Recommender Systems


Ervin Dervishaj, a PhD student at the University of Copenhagen, discusses his research on disentangled representation learning in recommender systems, finding that while disentanglement strongly corre...

10 March 30min

Collective Altruism in Recommender Systems


Ekaterina (Kat) Fedorova from MIT EECS joins us to discuss strategic learning in recommender systems—what happens when users collectively coordinate to game recommendation algorithms. Kat's research r...

27 Feb 54min

Niche vs Mainstream


Anas Buhayh discusses multi-stakeholder fairness in recommender systems and the S'mores framework—a simulation allowing users to choose between mainstream and niche algorithms. His research shows spec...

18 Feb 34min

Healthy Friction in Job Recommender Systems


In this episode, host Kyle Polich speaks with Roan Schellingerhout, a fourth-year PhD student at Maastricht University, about explainable multi-stakeholder recommender systems for job recruitment. Roa...

2 Feb 26min

Fairness in PCA-Based Recommenders


In this episode, we explore the fascinating world of recommender systems and algorithmic fairness with David Liu, Assistant Research Professor at Cornell University's Center for Data Science for Enter...

26 Jan 49min
