Trends in Deep Learning with Jeremy Howard - TWiML Talk #214

In this episode of our AI Rewind series, we’re bringing back one of your favorite guests of the year, Jeremy Howard, founder and researcher at Fast.ai. Jeremy joins us to discuss trends in Deep Learning in 2018 and beyond. We cover many of the papers, tools and techniques that have contributed to making deep learning more accessible than ever to so many developers and data scientists.

Episodes (777)

Compositional ML and the Future of Software Development with Dillon Erb - #520

Today we’re joined by Dillon Erb, CEO of Paperspace. If you’re not familiar with Dillon, he joined us about a year ago to discuss Machine Learning as a Software Engineering Discipline; we strongly encourage you to check out that interview as well. In our conversation, we explore the idea of compositional AI, and whether it is the next frontier in a string of recent game-changing machine learning developments. We also discuss a source of constant back and forth in the community around the role of notebooks, and why Paperspace made the choice to pivot toward a more traditional engineering code-artifact model after building a popular notebook service. Finally, we talk through their newest release, Workflows, an automation and build system for ML applications, which Dillon calls their “most ambitious and comprehensive project yet.” The complete show notes for this episode can be found at twimlai.com/go/520.

20 Sep 2021 · 41 min

Generating SQL Database Queries from Natural Language with Yanshuai Cao - #519

Today we’re joined by Yanshuai Cao, a senior research team lead at Borealis AI. In our conversation with Yanshuai, we explore his work on Turing, their natural language to SQL engine that allows users to get insights from relational databases without having to write code. We do a bit of compare and contrast with the recently released Codex Model from OpenAI, the role that reasoning plays in solving this problem, and how it is implemented in the model. We also talk through various challenges like data augmentation, the complexity of the queries that Turing can produce, and a paper that explores the explainability of this model. The complete show notes for this episode can be found at twimlai.com/go/519.

16 Sep 2021 · 38 min

Social Commonsense Reasoning with Yejin Choi - #518

Today we’re joined by Yejin Choi, a professor at the University of Washington. We had the pleasure of catching up with Yejin after her keynote interview at the recent Stanford HAI “Foundational Models” workshop. In our conversation, we explore her work at the intersection of natural language generation and common sense reasoning, including how she defines common sense and the current state of that research. We discuss how this could be used for creative storytelling, how transformers could be applied to these tasks, and we dig into the subfields of physical and social common sense reasoning. Finally, we talk through the future of Yejin’s research and the areas that she sees as most promising going forward. If you enjoyed this episode, check out our conversation on AI Storytelling Systems with Mark Riedl. The complete show notes for today’s episode can be found at twimlai.com/go/518.

13 Sep 2021 · 51 min

Deep Reinforcement Learning for Game Testing at EA with Konrad Tollmar - #517

Today we’re joined by Konrad Tollmar, research director at Electronic Arts and an associate professor at KTH. In our conversation, we explore his role as the lead of EA’s applied research team SEED and the ways that they’re applying ML/AI across popular franchises like Apex Legends, Madden, and FIFA. We break down a few papers focused on the application of ML to game testing, discussing why deep reinforcement learning is at the top of their research agenda, the differences between training agents on Atari games and modern 3D games, and using CNNs to detect glitches in games. Of course, Konrad also gives us his outlook on the future of ML for game testing. The complete show notes for this episode can be found at twimlai.com/go/517.

9 Sep 2021 · 40 min

Exploring AI 2041 with Kai-Fu Lee - #516

Today we’re joined by Kai-Fu Lee, chairman and CEO of Sinovation Ventures and author of AI 2041: Ten Visions for Our Future.  In AI 2041, Kai-Fu and co-author Chen Qiufan tell the story of how AI could shape our future through a series of 10 “scientific fiction” short stories. In our conversation with Kai-Fu, we explore why he chose 20 years as the time horizon for these stories, and dig into a few of the stories in more detail. We explore the potential for level 5 autonomous driving and what effect that will have on both established and developing nations, the potential outcomes when dealing with job displacement, and his perspective on how the book will be received. We also discuss the potential consequences of autonomous weapons, if we should actually worry about singularity or superintelligence, and the evolution of regulations around AI in 20 years. We’d love to hear from you! What are your thoughts on any of the stories we discuss in the interview? Will you be checking this book out? Let us know in the comments on the show notes page at twimlai.com/go/516.

6 Sep 2021 · 47 min

Advancing Robotic Brains and Bodies with Daniela Rus - #515

Today we’re joined by Daniela Rus, director of CSAIL & Deputy Dean of Research at MIT.  In our conversation with Daniela, we explore the history of CSAIL, her role as director of one of the most prestigious computer science labs in the world, how she defines robots, and her take on the current AI for robotics landscape. We also discuss some of her recent research interests including soft robotics, adaptive control in autonomous vehicles, and a mini surgeon robot made with sausage casing(?!).  The complete show notes for this episode can be found at twimlai.com/go/515.

2 Sep 2021 · 45 min

Neural Synthesis of Binaural Speech From Mono Audio with Alexander Richard - #514

Today we’re joined by Alexander Richard, a research scientist at Facebook Reality Labs and recipient of the ICLR Best Paper Award for his paper “Neural Synthesis of Binaural Speech From Mono Audio.” We begin our conversation with a look into the charter of Facebook Reality Labs and Alex’s specific Codec Avatar project, where they’re developing AR/VR for social telepresence. Of course, we dig into the aforementioned paper, discussing the difficulty in improving the quality of audio and the role of dynamic time warping, as well as the challenges of creating this model. Finally, Alex shares his thoughts on 3D rendering for audio and other future research directions. The complete show notes for this episode can be found at twimlai.com/go/514.

30 Aug 2021 · 46 min

Using Brain Imaging to Improve Neural Networks with Alona Fyshe - #513

Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta.  We caught up with Alona on the heels of an interesting panel discussion that she participated in, centered around improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images that are used in this research, what representations look like in these images, and how we can improve language models without knowing explicitly how the brain understands the language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation. The complete show notes for this episode can be found at twimlai.com/go/513.

26 Aug 2021 · 36 min
