Building an Immune System for AI Generated Software with Animesh Koratana - #746

Today, we're joined by Animesh Koratana, founder and CEO of PlayerZero, to discuss his team’s approach to making agentic and AI-assisted coding tools production-ready at scale. Animesh explains how rapid advances in AI-assisted coding have created an “asymmetry” in which the speed of code output outpaces the maturity of the processes for maintaining and supporting it. We explore PlayerZero’s debugging and code verification platform, which uses code simulations to build a “memory bank” of past bugs and leverages an ensemble of LLMs and agents to proactively simulate and verify changes, predicting potential failures. Animesh also unpacks the underlying technology, including a semantic graph that analyzes code bases, ticketing systems, and telemetry to trace and reason through complex systems, test hypotheses, and apply reinforcement learning techniques to create an “immune system” for software. Finally, Animesh shares his perspective on the future of the software development lifecycle (SDLC), rethinking organizational workflows, and ensuring security as AI-driven tools continue to mature. The complete show notes for this episode can be found at https://twimlai.com/go/746.

Episodes (765)

Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee - #358

Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for this model, the adaptation of the training process to incorporate additional visual information into BERT models, and where this research leads from the perspective of integrating visual and language tasks.

18 March 2020 · 27 min

Upside-Down Reinforcement Learning with Jürgen Schmidhuber - #357

Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland. Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, and in this conversation, we discuss some of the recent research coming out of his lab, namely Upside-Down Reinforcement Learning.

16 March 2020 · 34 min

SLIDE: Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning with Beidi Chen - #356

Beidi Chen is part of the team that developed a cheaper, algorithmic CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. Beidi shares how the team took a new look at deep learning through the case of extreme classification, turning it into a search problem and using locality-sensitive hashing.

12 March 2020 · 31 min

Advancements in Machine Learning with Sergey Levine - #355

Today we're joined by Sergey Levine, an Assistant Professor at UC Berkeley. We last heard from Sergey back in 2017, when we explored Deep Robotic Learning. Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” We caught up with Sergey at NeurIPS 2019, where he and his team presented 12 different papers, which means we had a lot of ground to cover!

9 March 2020 · 43 min

Secrets of a Kaggle Grandmaster with David Odaibo - #354

Imagine spending years learning ML from the ground up, from its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions. Fast forward four years, and David is now a Kaggle Grandmaster, the platform’s highest designation, with particular accomplishments in computer vision competitions, and co-founder and CTO of Analytical AI.

5 March 2020 · 41 min

NLP for Mapping Physics Research with Matteo Chinazzi - #353

Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focuses on in his paper Mapping the Physics Research Space: a Machine Learning Approach. In addition to predicting the trajectory of physics research, Matteo is also active in computational epidemiology. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale.

2 March 2020 · 35 min

Metric Elicitation and Robust Distributed Learning with Sanmi Koyejo - #352

The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that Sanmi Koyejo, an assistant professor at the University of Illinois, has dedicated his research to addressing. Sanmi applies his background in cognitive science, probabilistic modeling, and Bayesian inference to his research, which focuses broadly on “adaptive and robust machine learning.”

27 February 2020 · 56 min

High-Dimensional Robust Statistics with Ilias Diakonikolas - #351

Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, which received a NeurIPS 2019 Outstanding Paper award. The paper is regarded as the first progress on distribution-independent learning with noise since the 1980s. In our conversation, we explore robustness in ML, problems with corrupt data in high-dimensional settings, and, of course, the paper.

24 February 2020 · 36 min
