Quantizing Transformers by Helping Attention Heads Do Nothing with Markus Nagel - #663

Today we’re joined by Markus Nagel, research scientist at Qualcomm AI Research, who helps us kick off our coverage of NeurIPS 2023. In our conversation with Markus, we cover his accepted papers at the conference, along with other work presented by Qualcomm AI Research scientists. Markus’ first paper, Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing, tackles the activation outliers that the attention mechanism introduces and that make transformers hard to quantize. We also discuss Pruning vs Quantization: Which is Better?, which compares how effectively the two methods compress model weights. Additional papers discussed cover topics like using scalarization in multitask and multidomain learning to improve training and inference, using diffusion models to model sequences of states and actions, applying geometric algebra with equivariance to transformers, and applying deductive verification to the chain-of-thought reasoning performed by LLMs. The complete show notes for this episode can be found at twimlai.com/go/663.
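For context on the first paper: a core idea is letting an attention head assign an exact zero probability so it can effectively “do nothing” without pushing its logits to the extreme magnitudes that create activation outliers and hurt quantization. Below is a minimal PyTorch sketch of the clipped softmax mechanism proposed in the paper; the function name and the illustrative values for zeta and gamma are our own, not taken from the authors’ code.

```python
import torch

def clipped_softmax(logits: torch.Tensor, zeta: float = 1.0,
                    gamma: float = -0.03, dim: int = -1) -> torch.Tensor:
    """Sketch of the clipped softmax from Quantizable Transformers.

    Stretches the softmax output to the range [gamma, zeta] and clips
    it back to [0, 1], so a head can emit exact zeros without needing
    huge negative logits that would become activation outliers.
    """
    probs = torch.softmax(logits, dim=dim)
    return torch.clamp((zeta - gamma) * probs + gamma, min=0.0, max=1.0)

# Usage: a drop-in replacement for softmax over attention scores.
# Values for zeta/gamma here are illustrative; the paper tunes gamma
# relative to sequence length.
scores = torch.randn(2, 8, 16, 16)        # (batch, heads, query, key)
attn = clipped_softmax(scores)            # exact zeros are now reachable
print((attn == 0).float().mean().item())  # fraction of exactly-zero weights
```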
