
Separating Vocals in Recorded Music at Spotify with Eric Humphrey - TWiML Talk #98
In today’s show, I sit down with Eric Humphrey, Research Scientist in the music understanding group at Spotify. Eric was at the Deep Learning Summit to give a talk on "Advances in Deep Architectures and Methods for Separating Vocals in Recorded Music." We discuss his talk, including how Spotify's large music catalog makes such an experiment possible in the first place, the methods his team uses to train algorithms to isolate and remove vocals from music, and how architectures like U-Net and Pix2Pix come into play when building these algorithms. We also hit on the idea of “creative AI,” Spotify’s efforts to understand music content at scale, optical music recognition, and more. This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco, is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you’ll hear on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration. The notes for this show can be found at twimlai.com/talk/98
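The episode itself contains no code, but the general recipe behind this line of work (learn a soft mask over the magnitude spectrogram of a mix, trained on pairs of mixes and isolated vocals) can be sketched roughly as below. This is a minimal, illustrative sketch assuming PyTorch; the tiny two-layer encoder-decoder and all sizes here are placeholders, not Spotify's actual U-Net.

```python
# Minimal sketch of U-Net-style vocal separation on magnitude spectrograms.
# Illustrative only: the real model is much deeper and is trained on
# (mix, isolated-vocal) spectrogram pairs drawn from a large catalog.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy encoder-decoder with one skip connection, predicting a soft vocal mask."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        # Decoder input is doubled by the skip connection (16 decoder + 16 encoder channels).
        self.dec2 = nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1)

    def forward(self, spec):
        e1 = self.enc1(spec)                         # (B, 16, F/2, T/2)
        e2 = self.enc2(e1)                           # (B, 32, F/4, T/4)
        d1 = self.dec1(e2)                           # (B, 16, F/2, T/2)
        d2 = self.dec2(torch.cat([d1, e1], dim=1))   # (B, 1, F, T)
        return torch.sigmoid(d2)                     # soft mask in [0, 1]

model = TinyUNet()
mix_spec = torch.rand(1, 1, 128, 64)      # fake magnitude spectrogram (freq x time)
vocal_spec = torch.rand(1, 1, 128, 64)    # fake isolated-vocal target
mask = model(mix_spec)
loss = nn.functional.l1_loss(mask * mix_spec, vocal_spec)  # reconstruct vocals via masking
loss.backward()
```

In practice the predicted mask is multiplied with the mix spectrogram, and the masked spectrogram is inverted back to audio using the phase of the original mix.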
19 Jan 2018 · 27 min

Accelerating Deep Learning with Mixed Precision Arithmetic with Greg Diamos - TWiML Talk #97
In this show I speak with Greg Diamos, senior computer systems researcher at Baidu. Greg joined me before his talk at the Deep Learning Summit, where he spoke on “The Next Generation of AI Chips.” Greg’s talk focused on work his team has done to accelerate deep learning training by using mixed 16-bit and 32-bit floating point arithmetic. We cover a ton of interesting ground in this conversation, and if you’re interested in systems-level thinking around scaling and accelerating deep learning, you’re really going to like this one. And of course, if you like this one, you’re also going to like TWiML Talk #14 with Greg’s former colleague, Shubho Sengupta, which covers a bunch of related topics. This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco, is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you’ll hear on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration.
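For listeners curious what mixed 16- and 32-bit training looks like in code, here is a minimal sketch using PyTorch's automatic mixed precision utilities. Note that this is my own illustration with a toy model; Greg's work at Baidu used their own systems stack, not this API. The idea is the same: run most math in half precision while a dynamic loss scaler keeps small FP16 gradients from underflowing and the optimizer's parameters stay in FP32.

```python
# Sketch of mixed-precision training with PyTorch AMP: forward/backward in FP16
# where safe, FP32 parameters, and a dynamic loss scaler to avoid gradient underflow.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))  # no-op on CPU

x = torch.randn(32, 256, device=device)
y = torch.randint(0, 10, (32,), device=device)

for step in range(10):
    optimizer.zero_grad()
    # Ops inside autocast run in FP16 on GPU; disabled (plain FP32) on CPU.
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()   # scale the loss so small FP16 gradients survive
    scaler.step(optimizer)          # unscale gradients; skip the step if inf/nan appear
    scaler.update()                 # adjust the scale factor dynamically
```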
17 Jan 2018 · 39 min

Composing Graphical Models With Neural Networks with David Duvenaud - TWiML Talk #96
In this episode, we hear from David Duvenaud, assistant professor in the Computer Science and Statistics departments at the University of Toronto. David joined me after his talk at the Deep Learning Summit on “Composing Graphical Models With Neural Networks for Structured Representations and Fast Inference.” In our conversation, we discuss the generalized modeling and inference framework that David and his team have created, which combines the strengths of probabilistic graphical models and deep learning methods. He walks us through his motivating use case, automatically segmenting and categorizing mouse behavior from raw video, and we discuss how the framework applies there and to other problems. We also discuss some of the differences between the frequentist and Bayesian statistical approaches. The notes for this show can be found at twimlai.com/talk/96
15 Jan 2018 · 35 min

Embedded Deep Learning at Deep Vision with Siddha Ganju - TWiML Talk #95
In this episode we hear from Siddha Ganju, data scientist at computer vision startup Deep Vision. Siddha joined me at the AI Conference a while back to chat about the challenges of developing deep learning applications “at the edge,” i.e. those targeting compute- and power-constrained environments. In our conversation, Siddha provides an overview of Deep Vision’s embedded processor, which is optimized for ultra-low power requirements, and we dig into the data processing pipeline and network architectures she uses to support sophisticated models on embedded devices. We explore the hardware and software capabilities and restrictions typical of edge devices, how she uses techniques like model pruning and compression to create embedded models that deliver the needed performance in resource-constrained environments, and use cases such as facial recognition, scene description, and activity recognition. Siddha's research interests also include natural language processing and visual question answering, and we spend some time discussing the latter as well.
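As a rough illustration of the model compression techniques Siddha mentions, here is a minimal magnitude-pruning sketch using PyTorch's built-in pruning utilities. This is a generic example, not Deep Vision's toolchain; real edge deployments typically combine pruning with quantization and hardware-specific compilation.

```python
# Minimal sketch of magnitude-based weight pruning, one common compression step
# for fitting models onto compute- and power-constrained edge devices.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 16, 3, padding=1))

# Zero out the 50% smallest-magnitude weights in each conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"sparsity: {zeros / total:.1%}")  # roughly half the conv weights are now zero
```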
12 Jan 2018 · 34 min

Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley - TWiML Talk #94
Today, I'm joined by Kenneth Stanley, Professor in the Department of Computer Science at the University of Central Florida and senior research scientist at Uber AI Labs. Kenneth studied under TWiML Talk #47 guest Risto Miikkulainen at UT Austin, and joined Uber AI Labs after Geometric Intelligence, the company he co-founded with Gary Marcus and others, was acquired in late 2016. Kenneth’s research focus is what he calls neuroevolution, which applies the idea of genetic algorithms to the challenge of evolving neural network architectures. In this conversation, we discuss the Neuroevolution of Augmenting Topologies (or NEAT) paper that Kenneth authored along with Risto, which won the International Society for Artificial Life’s 2017 Award for Outstanding Paper of the Decade (2002-2012). We also cover some of the extensions to that approach he’s created since, including HyperNEAT, which can efficiently evolve very large networks with connectivity patterns that look more like those of the human brain and that are generally much larger than what prior approaches to neural learning could produce, and novelty search, an approach which, unlike most evolutionary algorithms, has no defined objective but simply searches for novel behaviors. We also cover concepts like “complexification” and “deception,” the differences and similarities between biology and computation, and some of his other work, including his book and NERO, a video game complete with real-time neuroevolution. This is a meaty “Nerd Alert” interview that I think you’ll really enjoy.
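Kenneth's full NEAT algorithm (speciation, crossover of augmenting topologies, innovation numbers) is too involved to reproduce here, but the novelty-search idea he describes can be sketched in a few lines: score individuals by how different their behavior is from anything seen before, with no objective at all. The toy Python below evolves 2D "behaviors" directly and is purely illustrative of the selection criterion, not of Kenneth's implementations.

```python
# Toy novelty search: selection favors behaviors far from an archive of past
# behaviors, with no objective whatsoever. Here a "behavior" is just a 2D point.
import random

def novelty(behavior, archive, k=5):
    """Mean distance to the k nearest previously seen behaviors."""
    if not archive:
        return float("inf")
    dists = sorted(((behavior[0] - a[0]) ** 2 + (behavior[1] - a[1]) ** 2) ** 0.5
                   for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

population = [(0.0, 0.0) for _ in range(20)]
archive = []

for generation in range(50):
    scored = sorted(population, key=lambda b: novelty(b, archive), reverse=True)
    archive.extend(scored[:3])                 # remember the most novel behaviors
    parents = scored[:10]                      # select by novelty, not by fitness
    population = [(p[0] + random.gauss(0, 0.1), p[1] + random.gauss(0, 0.1))
                  for p in parents for _ in range(2)]

print("archive size:", len(archive), "example behavior:", archive[-1])
```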
11 Jan 2018 · 45 min

A Quantum Computing Primer and Implications for AI with Davide Venturelli - TWiML Talk #93
Today, I'm joined by Davide Venturelli, science operations manager and quantum computing team lead for the Universities Space Research Association’s Institute for Advanced Computer Science at NASA Ames. Davide joined me backstage at the NYU Future Labs AI Summit a while back to give me some insight into a topic I’ve been curious about for some time now: quantum computing. We kick off with the core ideas behind quantum computing, including what it is, how it’s applied, and how it relates to computing as we know it today. We discuss the practical state of quantum computers, what their capabilities are, and the kinds of things you can do with them. And of course, we explore the intersection between AI and quantum computing, how quantum computing may one day accelerate machine learning, and how interested listeners can get started down the quantum rabbit hole. The notes for this show can be found at twimlai.com/talk/93
8 Jan 2018 · 34 min

Learning State Representations with Yael Niv - TWiML Talk #92
This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I speak with Yael Niv, professor of neuroscience and psychology at Princeton University. Yael joined me after her invited talk on “Learning State Representations.” In this interview Yael and I explore the relationship between neuroscience and machine learning. In particular, we discuss the importance of state representations in human learning, some of her experimental results in this area, and how a better understanding of representation learning can lead to insights into machine learning problems such as reinforcement and transfer learning. Did I mention this was a nerd alert show? I really enjoyed this interview and I know you will too. Be sure to send over any thoughts or feedback via the show notes page at twimlai.com/talk/92.
22 Dec 2017 · 47 min

Philosophy of Intelligence with Matthew Crosby - TWiML Talk #91
This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Matthew Crosby, a researcher at Imperial College London working on the Kinds of Intelligence Project. Matthew joined me after the NIPS symposium of the same name, an event that brought researchers from a variety of disciplines together toward three aims: a broader perspective on the possible types of intelligence beyond human intelligence, better measurements of intelligence, and a more purposeful analysis of where progress should be made in AI to best benefit society. Matthew’s research explores intelligence from a philosophical perspective, examining ideas like predictive processing and controlled hallucination and how these theories of intelligence shape the way we approach building artificial intelligence. This was a very interesting conversation, and I'm sure you’ll enjoy it.
21 Dec 2017 · 29 min
