Deep Gradient Compression for Distributed Training with Song Han - TWiML Talk #146

On today’s show I chat with Song Han, assistant professor in MIT’s EECS department, about his research on Deep Gradient Compression. In our conversation, we explore the challenge of distributed training for deep neural networks and the idea of compressing the gradient exchange so it can be done more efficiently. Song details the evolution of distributed training systems based on this idea and gives examples of centralized and decentralized distributed training architectures, such as Uber’s Horovod, as well as the approaches native to PyTorch and TensorFlow. Song also addresses potential issues that arise with distributed training, such as loss of accuracy and generalizability, and much more. The notes for this show can be found at twimlai.com/talk/146.
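The core idea discussed in the episode, sending only the most significant gradient values during the exchange, can be illustrated with a short sketch. Below is a minimal, hypothetical top-k gradient sparsification example in PyTorch. It is not the full Deep Gradient Compression method (which also adds momentum correction, local gradient accumulation, gradient clipping, and warm-up training); the function names and the 1% compression ratio are assumptions chosen for illustration.

# Minimal sketch of top-k gradient sparsification, the core idea behind
# reducing gradient-exchange bandwidth in distributed training.
# Illustrative only; not the full Deep Gradient Compression algorithm.
import torch

def sparsify_gradient(grad: torch.Tensor, compress_ratio: float = 0.01):
    """Keep only the largest-magnitude entries of a gradient tensor.

    Returns the indices and values that would be transmitted; the remaining
    entries would typically be accumulated locally and sent in later steps.
    """
    flat = grad.flatten()
    k = max(1, int(flat.numel() * compress_ratio))
    # Select the k entries with the largest absolute value.
    _, indices = torch.topk(flat.abs(), k)
    values = flat[indices]
    return indices, values

def desparsify(indices: torch.Tensor, values: torch.Tensor, numel: int, shape):
    """Reconstruct a dense gradient from the transmitted sparse entries."""
    flat = torch.zeros(numel, dtype=values.dtype)
    flat[indices] = values
    return flat.view(shape)

# Example: compress a random gradient to roughly 1% of its entries.
g = torch.randn(1000, 100)
idx, vals = sparsify_gradient(g, compress_ratio=0.01)
g_hat = desparsify(idx, vals, g.numel(), g.shape)

In practice, each worker would exchange only (idx, vals) with the other workers or the parameter server, which is where the bandwidth savings come from.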
