Easily Fooling Deep Neural Networks
Data Skeptic, 16 Jan 2015

My guest this week is Anh Nguyen, a PhD student at the University of Wyoming working in the Evolving AI Lab. The episode discusses the paper Deep Neural Networks are Easily Fooled by Anh Nguyen, Jason Yosinski, and Jeff Clune, which describes a process for constructing images that a trained deep neural network will misclassify. Given a network trained to recognize certain types of objects in images, these "fooling" images can be generated so that the network confidently assigns them a label, even though to a human observer they often bear no resemblance whatsoever to that label. Previous work had shown that images which appear to us as unrecognizable white noise can fool a deep neural network. This paper extends that result, showing that abstract images of shapes and colors, many of which have recognizable form (just not the one the network reports), can also trick the network.
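The paper generates its fooling images with evolutionary algorithms (and, separately, with gradient ascent), so the following is only a rough sketch of the underlying idea: a simple (1+1) hill climber in PyTorch that mutates a random image until a pretrained classifier assigns a chosen label with high confidence. The model choice, mutation size, iteration budget, and target class here are my own assumptions, not the authors' setup.

```python
import torch
import torchvision.models as models

# Load a pretrained ImageNet classifier. The paper's experiments used
# networks such as AlexNet and LeNet; any image classifier works for
# this sketch.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

target_class = 1  # hypothetical target: ImageNet class 1 ("goldfish")
image = torch.rand(1, 3, 224, 224)  # start from uniform random noise

def confidence(img):
    # Softmax confidence the model assigns to the target class.
    with torch.no_grad():
        return torch.softmax(model(img), dim=1)[0, target_class].item()

best = confidence(image)
for step in range(10_000):
    # Perturb the image with small Gaussian noise and keep the candidate
    # only if the target-class confidence improves.
    candidate = (image + 0.05 * torch.randn_like(image)).clamp(0, 1)
    score = confidence(candidate)
    if score > best:
        image, best = candidate, score
    if best > 0.99:  # the paper reports fooling images at >=99% confidence
        break

print(f"confidence in target class: {best:.4f}")
```

The resulting image typically looks like structured noise to a human, yet the network labels it with near-total certainty, which is the core finding the episode discusses.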

