Scalable Differential Privacy for Deep Learning with Nicolas Papernot - TWiML Talk #134

In this episode of our Differential Privacy series, I'm joined by Nicolas Papernot, Google PhD Fellow in Security and graduate student in the department of computer science at Penn State University. Nicolas and I continue this week's look into differential privacy with a discussion of his recent paper, Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. In our conversation, Nicolas describes the Private Aggregation of Teacher Ensembles (PATE) model proposed in the paper and how it provides differential privacy in a way that scales to deep neural networks. We also explore one of the interesting side effects of applying differential privacy to machine learning: it inherently resists overfitting, leading to models that generalize better. The notes for this show can be found at twimlai.com/talk/134.
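For readers who want a concrete picture of the mechanism, here is a minimal Python sketch of PATE's core aggregation step, a noisy-max vote over the teachers' predictions. This is an illustration rather than code from the paper; the function name, noise parameter, and example sizes are assumptions chosen for clarity.

```python
import numpy as np

def noisy_max_label(teacher_votes, num_classes, gamma=0.05, rng=None):
    """Aggregate one query's teacher predictions with a noisy-max vote.

    teacher_votes: 1-D array of predicted class indices, one per teacher
                   (each teacher is trained on a disjoint slice of the
                   private data).
    gamma:         privacy parameter; the Laplace noise has scale 1/gamma,
                   so a smaller gamma means more noise per answered query.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Count how many teachers voted for each class.
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    # Perturb every count with independent Laplace noise, then pick the
    # class with the highest noisy count.
    counts += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))

# Example: 250 teachers voting over 10 classes for one public, unlabeled input.
rng = np.random.default_rng(0)
votes = rng.integers(0, 10, size=250)
print(noisy_max_label(votes, num_classes=10, rng=rng))
```

The student model is then trained only on public inputs labeled this way, so the privacy cost is governed by the number of labeled queries rather than by anything about the student's own training procedure.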

Episodes (777)

Understanding Deep Neural Nets with Dr. James McCaffrey - TWiML Talk #13

My guest this week is Dr. James McCaffrey, research engineer at Microsoft Research. James and I cover a ton of ground in this conversation, including recurrent neural nets (RNNs), convolutional neural nets (CNNs), long short-term memory (LSTM) networks, residual networks (ResNets), generative adversarial networks (GANs), and more. We also discuss neural network architecture and promising alternative approaches such as symbolic computation and particle swarm optimization. The show notes can be found at twimlai.com/talk/13.
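Particle swarm optimization comes up only briefly in the conversation, but for readers who haven't seen it, the sketch below shows a bare-bones PSO loop minimizing an arbitrary loss function in Python. It is not code from the episode; the inertia and attraction coefficients are common textbook defaults, and the toy quadratic loss merely stands in for whatever objective (such as a network's training loss) you might plug in.

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `loss` over R^dim with a basic particle swarm.

    Each particle remembers its personal best position; the swarm shares a
    global best. Velocities blend inertia (w), attraction to the personal
    best (c1), and attraction to the global best (c2).
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
    gbest, gbest_val = pbest[pbest_val.argmin()].copy(), pbest_val.min()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.min() < gbest_val:
            gbest, gbest_val = pos[vals.argmin()].copy(), vals.min()
    return gbest, gbest_val

# Toy example: a quadratic bowl centered at 0.5 in every dimension.
best, val = pso_minimize(lambda x: float(np.sum((x - 0.5) ** 2)), dim=5)
print(best, val)
```

Unlike gradient descent, nothing here requires the loss to be differentiable, which is part of why PSO is sometimes floated as an alternative for training or tuning neural networks.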

3 Mar 2017 · 1h 16min

Brendan Frey - Reprogramming the Human Genome with AI - TWiML Talk #12

My guest this week is Brendan Frey, Professor of Engineering and Medicine at the University of Toronto and Co-Founder and CEO of the startup Deep Genomics. Brendan and I met at the Re-Work Deep Learning Summit in San Francisco last month, where he delivered a great presentation called “Reprogramming the Human Genome: Why AI is Needed.” In this podcast we discuss the application of AI to healthcare. In particular, we dig into how Brendan’s research lab and company are applying machine learning and deep learning to treating and preventing human genetic disorders. The show notes can be found at twimlai.com/talk/12.

24 Feb 2017 · 1h

Hilary Mason - Building AI Products - TWiML Talk #11

My guest this time is Hilary Mason. Hilary was one of the first “famous” data scientists. I remember hearing her speak back in 2011 at the Strange Loop conference in St. Louis. At the time she was Chief Scientist for bit.ly. Nowadays she’s running Fast Forward Labs, which helps organizations accelerate their data science and machine intelligence capabilities through a variety of research and consulting offerings. Hilary presented at the O'Reilly AI conference on “practical AI product development” and she shares a lot of wisdom on that topic in our discussion. The show notes can be found at twimlai.com/talk/11.

25 Jan 2017 · 17min

Francisco Webber - Statistics vs Semantics for Natural Language Processing - TWiML Talk #10

My guest this time is Francisco Webber, founder and General Manager of artificial intelligence startup Cortical.io. Francisco presented at the O’Reilly AI conference on an approach to natural language understanding based on semantic representations of speech. His talk was called “AI is not a matter of strength but of intelligence.” My conversation with Francisco was a bit technical and abstract, but also super interesting. The show notes can be found at twimlai.com/talk/10.

3 Dec 2016 · 49min

Pascale Fung - Emotional AI: Teaching Computers Empathy - TWiML Talk #9

My guest this time is Pascale Fung, professor of electrical & computer engineering at the Hong Kong University of Science and Technology. Pascale delivered a presentation at the recent O'Reilly AI conference titled "How to make robots empathetic to human feelings in real time," and I caught up with her after her talk to discuss teaching computers to understand and respond to human emotions. We also spend some time talking about the information-theoretic foundations of modern approaches to speech understanding. The notes for this show can be found at twimlai.com/talk/9.

8 Nov 2016 · 34min

Diogo Almeida - Deep Learning: Modular in Theory, Inflexible in Practice - TWiML Talk #8

My guest this time is Diogo Almeida, senior data scientist at healthcare startup Enlitic. Diogo and I met at the O'Reilly AI conference, where he delivered a great presentation on in-the-trenches deep learning titled “Deep Learning: Modular in theory, inflexible in practice,” which we discuss in this interview. Diogo is also a past first-place Kaggle competition winner, and we spend some time discussing that competition and the approach he took to it. The notes for this show can be found at twimlai.com/talk/8.

23 Oct 2016 · 46min

Carlos Guestrin - Explaining the Predictions of Machine Learning Models - TWiML Talk #7

My guest this time is Carlos Guestrin, the Amazon Professor of Machine Learning at the University of Washington. Carlos and I recorded this podcast at a conference, shortly after Apple's acquisition of his company Turi. Our focus for this podcast is the explainability of machine learning algorithms. In particular, we discuss some interesting new research published by his team at UW. The notes for this show can be found at twimlai.com/talk/7.
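The research discussed is most likely LIME, from the 2016 paper "Why Should I Trust You?: Explaining the Predictions of Any Classifier" co-authored by Guestrin. The sketch below illustrates the general idea of a local surrogate explanation rather than the authors' implementation: perturb the instance being explained, weight the perturbations by their proximity to it, and fit a weighted linear model whose coefficients indicate which features drive the prediction locally. The `predict_proba` callable, noise scale, and kernel width are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(predict_proba, x, n_samples=1000, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around one prediction.

    predict_proba: black-box function mapping a batch of inputs to the
                   predicted probability of the class being explained.
    x:             the instance (1-D feature vector) to explain.
    Returns one coefficient per feature; larger magnitude means more local
    influence on the black-box prediction.
    """
    rng = np.random.default_rng(seed)
    # Sample perturbations of x in feature space.
    perturbed = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    preds = predict_proba(perturbed)
    # Weight each sample by an exponential kernel on its distance to x.
    dists = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # A regularized linear model serves as the interpretable surrogate.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_
```

The published method also maps inputs into an interpretable feature space and uses a sparsity-inducing fit, which this sketch omits for brevity.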

9 Oct 2016 · 31min

Angie Hugeback - Generating Training Data for Your ML Models - TWiML Talk #6

My guest this time is Angie Hugeback, principal data scientist at Spare5. Spare5 helps customers generate the high-quality labeled training datasets that are so crucial to developing accurate machine learning models. In this show, Angie and I cover a ton of the real-world practicalities of generating training datasets. We talk through the challenges faced by folks who need to label training data, and how to develop a cohesive system for performing the various labeling tasks you’re likely to encounter. We discuss some of the ways that bias can creep into your training data and how to avoid it. And we explore some of the popular third-party options that companies look at for scaling training data production, and how they differ. Spare5 has graciously sponsored this episode; you can learn more about them at spare5.com. The notes for this show can be found at twimlai.com/talk/6.

29 Sep 2016 · 1h 1min
