How AI Predicted the Coronavirus Outbreak with Kamran Khan - #350


Today we’re joined by Kamran Khan, founder & CEO of BlueDot, and professor of medicine and public health at the University of Toronto. BlueDot has received a great deal of attention for being the first to publicly warn about the coronavirus outbreak that started in Wuhan. How did the company’s system of algorithms and data processing techniques help flag the potential dangers of the disease? In our conversation, Kamran talks us through how the technology works, its limits, and the motivation behind the work.

Episodes (775)

Composing Graphical Models With Neural Networks with David Duvenaud - TWiML Talk #96


In this episode, we hear from David Duvenaud, assistant professor in the Computer Science and Statistics departments at the University of Toronto. David joined me after his talk at the Deep Learning Summit on “Composing Graphical Models With Neural Networks for Structured Representations and Fast Inference.” In our conversation, we discuss the generalized modeling and inference framework that David and his team have created, which combines the strengths of both probabilistic graphical models and deep learning methods. He gives us a walkthrough of his use case, which is to automatically segment and categorize mouse behavior from raw video, and we discuss how the framework is applied here and in other use cases. We also discuss some of the differences between the frequentist and Bayesian statistical approaches. The notes for this show can be found at twimlai.com/talk/96

15 Jan 2018 · 35 min

Embedded Deep Learning at Deep Vision with Siddha Ganju - TWiML Talk #95


In this episode we hear from Siddha Ganju, data scientist at computer vision startup Deep Vision. Siddha joined me at the AI Conference a while back to chat about the challenges of developing deep learning applications “at the edge,” i.e. those targeting compute- and power-constrained environments. In our conversation, Siddha provides an overview of Deep Vision’s embedded processor, which is optimized for ultra-low power requirements, and we dig into the data processing pipeline and network architecture process she uses to support sophisticated models in embedded devices. We explore the specific hardware and software capabilities and restrictions typical of edge devices, and how she utilizes techniques like model pruning and compression to create embedded models that deliver the needed performance in resource-constrained environments. We also discuss use cases such as facial recognition, scene description, and activity recognition. Siddha's research interests also include natural language processing and visual question answering, and we spend some time discussing the latter as well.
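The model pruning Siddha mentions can be illustrated with a rough sketch (not Deep Vision's actual method, which the episode doesn't detail): magnitude pruning zeroes out the smallest-magnitude weights so the network can be stored and executed more cheaply on an embedded device.

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights.

    A toy version of magnitude pruning; real frameworks prune tensors
    layer-by-layer and usually fine-tune the model afterward to recover
    accuracy. Ties at the threshold may prune slightly more than requested.
    """
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)            # number of weights to remove
    threshold = flat[k - 1] if k > 0 else -1.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Example: prune half of a small weight list.
pruned = prune_by_magnitude([0.5, -0.1, 0.8, 0.05, -0.9, 0.2], 0.5)
# → [0.5, 0.0, 0.8, 0.0, -0.9, 0.0]
```

The zeroed weights can then be stored in a sparse format, which is one of the ways pruning reduces memory footprint on edge hardware.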

12 Jan 2018 · 34 min

Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley - TWiML Talk #94


Today, I'm joined by Kenneth Stanley, Professor in the Department of Computer Science at the University of Central Florida and senior research scientist at Uber AI Labs. Kenneth studied under TWiML Talk #47 guest Risto Miikkulainen at UT Austin, and joined Uber AI Labs after Geometric Intelligence, the company he co-founded with Gary Marcus and others, was acquired in late 2016. Kenneth’s research focuses on what he calls neuroevolution, which applies the idea of genetic algorithms to the challenge of evolving neural network architectures. In this conversation, we discuss the Neuroevolution of Augmenting Topologies (NEAT) paper that Kenneth authored along with Risto, which won the International Society for Artificial Life’s 2017 Award for Outstanding Paper of the Decade 2002-2012. We also cover some of the extensions to that approach he’s created since, including HyperNEAT, which can efficiently evolve very large networks with connectivity patterns that look more like those of the human brain and that are generally much larger than what prior approaches to neural learning could produce, and novelty search, an approach which, unlike most evolutionary algorithms, has no defined objective but rather simply searches for novel behaviors. We also cover concepts like “complexification” and “deception”, biology vs. computation, including differences and similarities, and some of his other work, including his book and NERO, a video game complete with real-time neuroevolution. This is a meaty “Nerd Alert” interview that I think you’ll really enjoy.
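The core loop behind neuroevolution can be sketched in a few lines. This is only the bare (mu + lambda)-style skeleton, with a made-up toy genome and fitness function; NEAT's actual machinery (innovation numbers, structural crossover, speciation) is much richer.

```python
import random

def evolve(fitness, mutate, seed_genome, pop_size=20, generations=30, rng=None):
    """Minimal evolutionary loop over network genomes.

    Keeps the fittest half each generation and refills the population
    with mutated copies of survivors -- the skeleton shared by
    neuroevolution methods, which NEAT extends with speciation and
    principled crossover of network topologies.
    """
    rng = rng or random.Random(0)
    population = [seed_genome] + [mutate(seed_genome, rng) for _ in range(pop_size - 1)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        refill = [mutate(rng.choice(survivors), rng) for _ in range(pop_size - len(survivors))]
        population = survivors + refill
    return max(population, key=fitness)

# Toy demo: a "genome" is a tuple of hidden-layer widths; the (stand-in)
# fitness rewards architectures whose total width is close to 32.
def fitness(genome):
    return -abs(sum(genome) - 32)

def mutate(genome, rng):
    g = list(genome)
    op = rng.random()
    if op < 0.3 and len(g) > 1:
        g.pop(rng.randrange(len(g)))                             # remove a layer
    elif op < 0.6:
        g.insert(rng.randrange(len(g) + 1), rng.randint(1, 8))   # add a layer
    else:
        i = rng.randrange(len(g))
        g[i] = max(1, g[i] + rng.randint(-4, 4))                 # resize a layer
    return tuple(g)

best = evolve(fitness, mutate, (4,))
```

Because structural mutations can add as well as remove layers, the search "complexifies" over time, starting from minimal structure, which is one of NEAT's central ideas.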

11 Jan 2018 · 45 min

A Quantum Computing Primer and Implications for AI with Davide Venturelli - TWiML Talk #93


Today, I'm joined by Davide Venturelli, science operations manager and quantum computing team lead for the Universities Space Research Association’s Institute for Advanced Computer Science at NASA Ames. Davide joined me backstage at the NYU Future Labs AI Summit a while back to give me some insight into a topic that I’ve been curious about for some time now: quantum computing. We kick off with a discussion of the core ideas behind quantum computing, including what it is, how it’s applied, and the ways it relates to computing as we know it today. We discuss the practical state of quantum computers, their current capabilities, and the kinds of things you can do with them. And of course, we explore the intersection between AI and quantum computing, how quantum computing may one day accelerate machine learning, and how interested listeners can get started down the quantum rabbit hole. The notes for this show can be found at twimlai.com/talk/93
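The "core ideas" the episode covers can be made concrete with a classical simulation: a qubit is a two-component state vector of complex amplitudes, gates are unitary transformations of it, and measurement probabilities are the squared magnitudes of the amplitudes. A minimal one-qubit sketch (not anything from the episode itself):

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state [amp_0, amp_1]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

state = [1.0, 0.0]                        # the |0> basis state
state = hadamard(state)                   # equal superposition of |0> and |1>
probs = [abs(amp) ** 2 for amp in state]  # measurement probabilities: ~[0.5, 0.5]
```

Applying `hadamard` a second time returns the state to |0>, illustrating that quantum gates are reversible; the exponential cost of simulating many entangled qubits this way (a 2^n-entry state vector) is precisely why real quantum hardware is interesting.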

8 Jan 2018 · 34 min

Learning State Representations with Yael Niv - TWiML Talk #92


This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I speak with Yael Niv, professor of neuroscience and psychology at Princeton University. Yael joined me after her invited talk on “Learning State Representations.” In this interview Yael and I explore the relationship between neuroscience and machine learning. In particular, we discuss the importance of state representations in human learning, some of her experimental results in this area, and how a better understanding of representation learning can lead to insights into machine learning problems such as reinforcement and transfer learning. Did I mention this was a nerd alert show? I really enjoyed this interview and I know you will too. Be sure to send over any thoughts or feedback via the show notes page at twimlai.com/talk/92.

22 Dec 2017 · 47 min

Philosophy of Intelligence with Matthew Crosby - TWiML Talk #91


This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Matthew Crosby, a researcher at Imperial College London working on the Kinds of Intelligence Project. Matthew joined me after the NIPS symposium of the same name, an event that brought researchers from a variety of disciplines together toward three aims: a broader perspective on the possible types of intelligence beyond human intelligence, better measurements of intelligence, and a more purposeful analysis of where progress should be made in AI to best benefit society. Matthew’s research explores intelligence from a philosophical perspective, examining ideas like predictive processing and controlled hallucination, and how these theories of intelligence impact the way we approach creating artificial intelligence. This was a very interesting conversation; I'm sure you’ll enjoy it.

21 Dec 2017 · 29 min

Geometric Deep Learning with Joan Bruna & Michael Bronstein - TWiML Talk #90


This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Joan Bruna, assistant professor at the Courant Institute of Mathematical Sciences and the Center for Data Science at NYU, and Michael Bronstein, associate professor at Università della Svizzera italiana (Switzerland) and Tel Aviv University. Joan and Michael join me after their tutorial on Geometric Deep Learning on Graphs and Manifolds. In our conversation we dig pretty deeply into the ideas behind geometric deep learning and how we can use it in applications like 3D vision, sensor networks, drug design, biomedicine, and recommendation systems. This is definitely a Nerd Alert show, and one that will get your multi-dimensional neurons firing. Enjoy!

20 Dec 2017 · 40 min

AI at the NASA Frontier Development Lab with Sara Jennings, Timothy Seabrook and Andres Rodriguez


This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I'm joined by Sara Jennings, Timothy Seabrook, and Andres Rodriguez to discuss NASA’s Frontier Development Lab, or FDL. The FDL is an intense 8-week applied AI research accelerator focused on tackling knowledge gaps useful to the space program. In our discussion, Sara, producer at the FDL, provides some insight into its goals and structure. Timothy, a researcher at FDL, describes his involvement with the program, including some of the projects he worked on while on-site. He also provides a look into some of this year’s FDL projects, including Planetary Defense, Solar Storm Prediction, and Lunar Water Location. Last but not least, Andres, Sr. Principal Engineer at Intel's AIPG, joins us to detail Intel’s support of the FDL and how the various elements of the Intel AI stack supported the FDL research. This is a jam-packed conversation, so be sure to check the show notes page at twimlai.com/talk/89 for all the links and tidbits from this episode.

19 Dec 2017 · 36 min
