
Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee - #358
Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training of the model, how the BERT training process was adapted to incorporate visual information, and where this research leads in terms of integrating vision and language tasks.
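For a flavor of the architecture we discuss, here's a minimal sketch of the co-attention idea at the heart of ViLBERT: two transformer streams, one over image regions and one over text tokens, where each stream's queries attend to the other stream's keys and values. This is an illustrative toy built on PyTorch's nn.MultiheadAttention; the module names and dimensions are our own, not the authors' implementation.

```python
# Minimal sketch of ViLBERT-style co-attention (illustrative, not the paper's code).
# Each stream computes attention with its own queries but the *other* stream's
# keys/values, so text features attend to image regions and vice versa.
import torch
import torch.nn as nn

class CoAttentionBlock(nn.Module):
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.vis_attends_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_attends_vis = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis, txt):
        # Visual queries over text keys/values, and text queries over visual ones.
        vis_out, _ = self.vis_attends_txt(query=vis, key=txt, value=txt)
        txt_out, _ = self.txt_attends_vis(query=txt, key=vis, value=vis)
        return vis + vis_out, txt + txt_out  # residual connections

# Toy usage: 36 image-region features and 20 token embeddings.
block = CoAttentionBlock()
vis = torch.randn(1, 36, 768)   # region features (e.g., from an object detector)
txt = torch.randn(1, 20, 768)   # token embeddings
vis2, txt2 = block(vis, txt)
print(vis2.shape, txt2.shape)
```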
18 March 2020 · 27min

Upside-Down Reinforcement Learning with Jürgen Schmidhuber - #357
Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland. Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, and in this conversation, we discuss some of the recent research coming out of his lab, namely Upside-Down Reinforcement Learning.
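As a rough intuition for the approach, here's a toy sketch of Upside-Down RL's central move: rather than learning a value function, learn a "behavior function" by plain supervised learning, mapping (state, desired return, time horizon) to the action that achieved that outcome in past episodes. All names and dimensions below are illustrative, not code from Jürgen's lab.

```python
# Toy sketch of an Upside-Down RL behavior function (illustrative only).
# Instead of maximizing expected reward, we fit a supervised model that maps
# (state, desired_return, desired_horizon) -> action, labeled by past episodes.
import torch
import torch.nn as nn

state_dim, n_actions = 4, 2
behavior = nn.Sequential(
    nn.Linear(state_dim + 2, 64),  # +2 for the command: (desired return, horizon)
    nn.ReLU(),
    nn.Linear(64, n_actions),
)
opt = torch.optim.Adam(behavior.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(states, returns_to_go, horizons, actions):
    """One supervised step on (state, command) -> action pairs from replayed episodes."""
    commands = torch.stack([returns_to_go, horizons], dim=1)
    logits = behavior(torch.cat([states, commands], dim=1))
    loss = loss_fn(logits, actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Fake batch of logged transitions; in practice these come from an experience buffer.
s = torch.randn(32, state_dim)
g = torch.rand(32) * 10                   # return actually observed from each state on
h = torch.randint(1, 50, (32,)).float()   # steps that remained in the episode
a = torch.randint(0, n_actions, (32,))    # action that was taken
print(train_step(s, g, h, a))
```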
16 March 2020 · 34min

SLIDE: Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning with Beidi Chen - #356
Beidi Chen is part of the team that developed SLIDE, a cheaper, algorithm-driven CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. Beidi shares how the team took a fresh look at deep learning through the lens of extreme classification, turning it into a search problem and tackling it with locality-sensitive hashing.
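To make the "search problem" framing concrete, here's a toy SimHash-style locality-sensitive hashing sketch in NumPy. It shows only the underlying idea (hash vectors with random hyperplanes so that similar vectors collide, then score just the colliding candidates); SLIDE itself is a heavily engineered multi-threaded system, and the names below are ours, not theirs.

```python
# Toy SimHash-style LSH (illustrative; not the SLIDE implementation).
# Random hyperplanes hash similar vectors into the same bucket with high
# probability, so we score only the few candidates sharing a bucket
# instead of all classes/neurons.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
dim, n_bits = 128, 12
planes = rng.standard_normal((n_bits, dim))    # random hyperplanes

def simhash(v):
    bits = (planes @ v) > 0                    # which side of each hyperplane
    return bits.dot(1 << np.arange(n_bits))    # pack the sign bits into an int key

# Index 10k "neuron weight" vectors into buckets.
weights = rng.standard_normal((10_000, dim))
buckets = defaultdict(list)
for i, w in enumerate(weights):
    buckets[simhash(w)].append(i)

# At "inference", hash the input and score only the colliding candidates.
x = rng.standard_normal(dim)
candidates = buckets[simhash(x)]
print(f"scoring {len(candidates)} of {len(weights)} vectors")
```

In a SLIDE-like setting the indexed vectors are the weight vectors of output-layer neurons, so each forward pass touches only a small, input-dependent subset of them.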
12 March 2020 · 31min

Advancements in Machine Learning with Sergey Levine - #355
Today we're joined by Sergey Levine, an Assistant Professor at UC Berkeley. We last heard from Sergey back in 2017, when we explored Deep Robotic Learning. Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” We caught up with Sergey at NeurIPS 2019, where he and his team presented 12 different papers -- which means a lot of ground to cover!
9 March 2020 · 43min

Secrets of a Kaggle Grandmaster with David Odaibo - #354
Imagine spending years learning ML from the ground up, starting with its theoretical foundations, but still feeling like you don't really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions. Fast forward four years, and David is now a Kaggle Grandmaster, the highest designation, with particular accomplishments in computer vision competitions, and co-founder and CTO of Analytical AI.
5 March 2020 · 41min

NLP for Mapping Physics Research with Matteo Chinazzi - #353
Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focuses on in his paper Mapping the Physics Research Space: a Machine Learning Approach. In addition to predicting the trajectory of physics research, Matteo is also active in computational epidemiology. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale.
2 March 2020 · 35min

Metric Elicitation and Robust Distributed Learning with Sanmi Koyejo - #352
The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that Sanmi Koyejo, an assistant professor at the University of Illinois, has dedicated his research to addressing. Sanmi applies his background in cognitive science, probabilistic modeling, and Bayesian inference to pursue his research, which focuses broadly on “adaptive and robust machine learning.”
27 Feb 2020 · 56min

High-Dimensional Robust Statistics with Ilias Diakonikolas - #351
Today we’re joined by Ilias Diakonikolas, a faculty member in the CS department at the University of Wisconsin-Madison and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, which received a NeurIPS 2019 Outstanding Paper award. The paper is regarded as the first progress on distribution-independent learning with noise since the 1980s. In our conversation, we explore robustness in ML, problems with corrupt data in high-dimensional settings, and, of course, the paper.
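For readers unfamiliar with the noise model, the standard Massart condition says each label is flipped independently with an instance-dependent probability that stays uniformly below 1/2. Stated for a halfspace (this is the textbook definition, not notation from the paper itself):

```latex
% Massart (bounded) noise for a halfspace sign(w* . x):
% each example's label is flipped with probability eta(x),
% where eta(x) is uniformly bounded below 1/2.
\[
  \Pr\big[\, y \neq \mathrm{sign}(w^{*} \cdot x) \mid x \,\big] = \eta(x),
  \qquad \eta(x) \le \eta < \tfrac{1}{2} \quad \text{for all } x.
\]
% "Distribution-independent" means no assumption is placed on the marginal of x.
```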
24 Feb 2020 · 36min