Inverse Reinforcement Learning Without RL with Gokul Swamy - #643

Today we’re joined by Gokul Swamy, a Ph.D. student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sat down with Gokul to discuss his accepted papers at the event, leading off with “Inverse Reinforcement Learning without Reinforcement Learning.” In this paper, Gokul explores the challenges of inverse reinforcement learning and the advantages it holds for various applications. Next, we explore the paper “Complementing a Policy with a Different Observation Space,” which applies causal inference techniques to accurately estimate sampling balance and make decisions based on limited observed features. Finally, we touch on “Learning Shared Safety Constraints from Multi-task Demonstrations,” which centers on learning safety constraints from demonstrations using an inverse reinforcement learning approach. The complete show notes for this episode can be found at twimlai.com/go/643.
