Privacy and Security for Stable Diffusion and LLMs with Nicholas Carlini - #618

Today we’re joined by Nicholas Carlini, a research scientist at Google Brain. Nicholas works at the intersection of machine learning and computer security, and his recent paper “Extracting Training Data from Large Language Models” has generated quite a buzz within the ML community. In our conversation, we discuss the current state of adversarial machine learning research, the dynamics of dealing with privacy issues in black-box vs. accessible models, what privacy attacks on vision models like diffusion models look like, and the scale of “memorization” within these models. We also explore Nicholas’s work on data poisoning, which seeks to understand what happens when a bad actor can take control of a small fraction of the data an ML model is trained on. The complete show notes for this episode can be found at twimlai.com/go/618.
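To give a flavor of the extraction work discussed above, here is a minimal sketch loosely inspired by the paper's approach: sample generations from a language model, then rank them by a memorization signal (here, model perplexity relative to zlib-compressed size, a rough stand-in for the paper's zlib-ratio heuristic). The model choice, sample count, and exact scoring are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of training-data extraction: sample from an LM, then flag
# generations whose perplexity is low relative to their compressibility,
# a rough proxy for "the model memorized this" (illustrative, not the
# paper's full method).
import zlib
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def extraction_score(text: str) -> float:
    # Low perplexity relative to compressed size hints at memorized text.
    return perplexity(text) / len(zlib.compress(text.encode()))

samples = []
for _ in range(8):  # the paper samples far more; 8 keeps this runnable
    out = model.generate(do_sample=True, max_length=64, top_k=40,
                         pad_token_id=tok.eos_token_id)
    samples.append(tok.decode(out[0], skip_special_tokens=True))

# Lowest-scoring samples are the most promising extraction candidates.
for s in sorted(samples, key=extraction_score)[:3]:
    print(round(extraction_score(s), 3), repr(s[:60]))
```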
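And to make the data-poisoning premise concrete, here is a toy label-flipping illustration (not the targeted attacks Nicholas studies): an adversary who controls a small fraction of the training labels can measurably degrade a model. The dataset, model, and fractions are all assumptions chosen for brevity.

```python
# Toy data poisoning: flip the labels of a small fraction of training
# points and observe the effect on held-out accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for frac in [0.0, 0.01, 0.05, 0.10]:
    y_poison = y_tr.copy()
    n = int(frac * len(y_poison))
    idx = np.random.RandomState(0).choice(len(y_poison), n, replace=False)
    y_poison[idx] = 1 - y_poison[idx]  # the adversary flips these labels
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poison).score(X_te, y_te)
    print(f"poisoned fraction {frac:.0%}: test accuracy {acc:.3f}")
```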
