Geometry-Aware Neural Rendering with Josh Tobin - #360

Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning. We had the pleasure of sitting down with Josh prior to his presentation of his paper Geometry-Aware Neural Rendering at NeurIPS. Josh's goal is to develop implicit scene understanding, building upon DeepMind's neural scene representation and rendering work. We discuss the challenges he faced, the various datasets used to train his model, the similarities between VAE training and his process, and more.

Episodes (778)

Accelerating Distributed AI Applications at Qualcomm with Ziad Asghar - #489

Today we’re joined by Ziad Asghar, Vice President of Product Management for Snapdragon Technologies & Roadmap at Qualcomm Technologies. We begin our conversation with Ziad by exploring the symbiosis between 5G and AI and what is enabling developers to take full advantage of AI on mobile devices. We also discuss the balance between product evolution and incorporating research concepts, the evolution of their Cloud AI 100 hardware infrastructure, and their role in the deployment of Ingenuity, the robotic helicopter that operated on Mars earlier this year. Finally, we talk about specialization in building IoT applications like autonomous vehicles and smart cities, the degree to which federated learning is being deployed across the industry, and the importance of privacy and security of personal data. The complete show notes can be found at https://twimlai.com/go/489.

3 June 2021 · 39 min

Buy AND Build for Production Machine Learning with Nir Bar-Lev - #488

Today we’re joined by Nir Bar-Lev, co-founder and CEO of ClearML. In our conversation with Nir, we explore how his view of the wide-vs-deep machine learning platform paradox has changed and evolved over time, how companies should think about building vs. buying and integration, and his thoughts on why experiment management has become an automatic buy, be it open source or otherwise. We also discuss the disadvantages of using a cloud vendor as opposed to a software-based approach, the balance between MLOps and data science when addressing issues of overfitting, and how ClearML is applying techniques like federated machine learning and transfer learning to their solutions. The complete show notes for this episode can be found at https://twimlai.com/go/488.

31 May 2021 · 43 min

Applied AI Research at AWS with Alex Smola - #487

Today we’re joined by Alex Smola, Vice President and Distinguished Scientist at AWS AI. We had the pleasure of catching up with Alex prior to the upcoming AWS Machine Learning Summit, and we covered a TON of ground in the conversation. We start by focusing on his research in the domain of deep learning on graphs, including a few examples showcasing its function, and an interesting discussion around the relationship between large language models and graphs. Next up, we discuss their focus on AutoML research and how it's key to lowering the barrier to entry for machine learning research. Alex also shares a bit about his work on causality and causal modeling, introducing us to the concept of Granger causality. Finally, we talk about the aforementioned ML Summit, its exponential growth since its inception a few years ago, and which speakers he's most excited to hear from. The complete show notes for this episode can be found at https://twimlai.com/go/487.
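Since the conversation only introduces Granger causality in passing, here is a minimal, hedged sketch of the standard test: a series x is said to “Granger-cause” y if past values of x improve forecasts of y beyond what y's own history provides. The synthetic data, lag choice, and coefficients below are illustrative assumptions, not anything from the episode:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical synthetic example: y depends on the previous value of x,
# so the test should reject "x does not Granger-cause y".
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + rng.normal(scale=0.5)

# grangercausalitytests expects a 2-column array: [target, candidate cause].
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
# Small p-values on the F-tests indicate that lagged x helps predict y.
```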

27 May 2021 · 55 min

Causal Models in Practice at Lyft with Sean Taylor - #486

Today we’re joined by Sean Taylor, Staff Data Scientist at Lyft Rideshare Labs. We cover a lot of ground with Sean, starting with his recent decision to step away from his previous role as lab director to take a more hands-on role, and what inspired that change. We also discuss his research at Rideshare Labs, where they take a more “moonshot” approach to solving typical problems like forecasting and planning, marketplace experimentation, and decision making, and how his statistical approach manifests itself in his work. Finally, we spend quite a bit of time exploring the role of causality in the work at Rideshare Labs, including how systems like the aforementioned forecasting system are designed around causal models, whether driving model development with business metrics is more effective, the challenges associated with hierarchical modeling, and much, much more. The complete show notes for this episode can be found at twimlai.com/go/486.

24 May 2021 · 40 min

Using AI to Map the Human Immune System w/ Jabran Zahid - #485

Today we’re joined by Jabran Zahid, a Senior Researcher at Microsoft Research. In our conversation with Jabran, we explore his team's recent endeavor to completely map which T-cells bind to which antigens through the Antigen Map Project. We discuss how Jabran’s background in astrophysics and cosmology has translated to his current work in immunology and biology, the origins of the antigen map, and how its focus was changed by the emergence of the coronavirus pandemic. We also talk through the biological advancements and the challenges of using machine learning in this setting, some of the more advanced ML techniques they’ve tried that haven't panned out (as of yet), the path forward for the antigen map to make a broader impact, and much more. The complete show notes for this episode can be found at twimlai.com/go/485.

20 May 2021 · 41 min

Learning Long-Time Dependencies with RNNs w/ Konstantin Rusch - #484

Today we conclude our 2021 ICLR coverage joined by Konstantin Rusch, a PhD student at ETH Zurich. In our conversation with Konstantin, we explore his recent papers, titled coRNN and UnICORNN respectively, which focus on novel recurrent neural network architectures for learning long-time dependencies. We explore the inspiration he drew from neuroscience when tackling this problem, how the performance results compare to networks like LSTMs and others known to work well on this problem, and Konstantin’s future research goals. The complete show notes for this episode can be found at twimlai.com/go/484.
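The episode doesn't walk through the architecture itself, but as a rough sketch of the coupled-oscillator idea behind coRNN, here is a hedged illustration; the update rule, parameter names, and constants below are my own paraphrase and assumptions, not details taken from the episode or verified against the paper:

```python
import torch
import torch.nn as nn

class CoupledOscillatorCell(nn.Module):
    """Sketch of a coRNN-style cell: the hidden state y and its velocity z
    evolve like damped, driven coupled oscillators, discretized with step dt.
    The parameter names and default constants are illustrative assumptions."""
    def __init__(self, input_size, hidden_size, dt=0.05, gamma=1.0, eps=0.01):
        super().__init__()
        self.Wy = nn.Linear(hidden_size, hidden_size, bias=False)
        self.Wz = nn.Linear(hidden_size, hidden_size, bias=False)
        self.V = nn.Linear(input_size, hidden_size)
        self.dt, self.gamma, self.eps = dt, gamma, eps

    def forward(self, u, state):
        y, z = state
        # Velocity update: the bounded nonlinearity acts as the driving force,
        # while gamma and eps supply restoring and damping terms.
        z = z + self.dt * (torch.tanh(self.Wy(y) + self.Wz(z) + self.V(u))
                           - self.gamma * y - self.eps * z)
        y = y + self.dt * z
        return y, (y, z)

# Hypothetical usage: 8 sequences of length 100 with 3 input features.
cell = CoupledOscillatorCell(input_size=3, hidden_size=32)
y = z = torch.zeros(8, 32)
for u in torch.randn(100, 8, 3):
    out, (y, z) = cell(u, (y, z))
```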

17 May 2021 · 37 min

What the Human Brain Can Tell Us About NLP Models with Allyson Ettinger - #483

Today we continue our ICLR ‘21 series joined by Allyson Ettinger, an Assistant Professor at the University of Chicago. One of our favorite recurring conversations on the podcast is the two-way street between machine learning and neuroscience, which Allyson explores through the modeling of cognitive processes that pertain to language. In our conversation, we discuss how she approaches assessing the competencies of AI, the value of controlling for confounding variables in AI research, and how the pattern-matching traits of ML/DL models are not necessarily exclusive to these systems. Allyson also participated in a recent panel discussion at the ICLR workshop “How Can Findings About The Brain Improve AI Systems?”, centered around the utility of brain inspiration for developing AI models. We discuss ways in which we can try to more closely simulate the functioning of a brain, where her work fits into the analysis and interpretability area of NLP, and much more! The complete show notes for this episode can be found at twimlai.com/go/483.

13 May 2021 · 38 min

Probabilistic Numeric CNNs with Roberto Bondesan - #482

Today we kick off our ICLR 2021 coverage joined by Roberto Bondesan, an AI Researcher at Qualcomm. In our conversation with Roberto, we explore his paper Probabilistic Numeric Convolutional Neural Networks, which represents features as Gaussian processes, providing a probabilistic description of discretization error. We discuss some of the other work the team at Qualcomm presented at the conference, including a paper called Adaptive Neural Compression, as well as work on Gauge Equivariant Mesh CNNs. Finally, we briefly discuss quantum deep learning, and what excites Roberto and his team about the future of their research in combinatorial optimization. The complete show notes for this episode can be found at https://twimlai.com/go/482.
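To make “features as Gaussian processes” a bit more concrete, here is a minimal sketch of the general idea of treating irregularly discretized feature values as a GP, so that the uncertainty introduced by discretization becomes explicit. This is not the method from the paper, just an illustration using scikit-learn; the data, kernel, and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical 1-D "feature" observed only at a few irregular sample points.
rng = np.random.default_rng(0)
x_obs = np.array([0.0, 0.15, 0.4, 0.55, 0.9])[:, None]
f_obs = np.sin(2 * np.pi * x_obs).ravel() + 0.05 * rng.normal(size=len(x_obs))

# Fit a GP so the feature is defined everywhere on the domain, with the
# predictive standard deviation quantifying interpolation/discretization error.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-3))
gp.fit(x_obs, f_obs)

x_query = np.linspace(0.0, 1.0, 50)[:, None]
mean, std = gp.predict(x_query, return_std=True)
# 'std' grows between observation points -- the model is explicit about
# where the discretization leaves it uncertain.
```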

10 May 2021 · 41 min
