
Neural Arithmetic Units & Experiences as an Independent ML Researcher with Andreas Madsen - #382
Today we’re joined by Andreas Madsen, an independent researcher based in Denmark. While we caught up with Andreas to discuss his ICLR spotlight paper, “Neural Arithmetic Units,” we also spent time exploring his experience as an independent researcher, discussing the difficulties of working with limited resources, the importance of finding peers to collaborate with, and tempering expectations about getting papers accepted to conferences -- something that might take a few tries to get right.
11 Jun 2020 · 31min

2020: A Critical Inflection Point for Responsible AI with Rumman Chowdhury - #381
Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible AI at Accenture. In our conversation with Rumman, we explored questions like:
• Why is now such a critical inflection point in the application of responsible AI?
• How should engineers and practitioners think about AI ethics and responsible AI?
• Why is AI ethics inherently personal, and how can you define your own personal approach?
• Is the implementation of AI governance necessarily authoritarian?
8 Jun 2020 · 1h 1min

Panel: Advancing Your Data Science Career During the Pandemic - #380
Today we’re joined by Ana Maria Echeverri, Caroline Chavier, Hilary Mason, and Jacqueline Nolis, our guests for the recent Advancing Your Data Science Career During the Pandemic panel. In this conversation, we explore ways that Data Scientists and ML/AI practitioners can continue to advance their careers despite current challenges. Our panelists provide concrete tips, advice, and direction for those just starting out, those affected by layoffs, and those just wanting to move forward in their careers.
4 Jun 2020 · 1h 7min

On George Floyd, Empathy, and the Road Ahead
Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest.
2 Jun 2020 · 6min

Engineering a Less Artificial Intelligence with Andreas Tolias - #379
Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine. We caught up with Andreas to discuss his recent perspective piece, “Engineering a Less Artificial Intelligence,” which explores the shortcomings of state-of-the-art learning algorithms in comparison to the brain. The paper also offers several ideas about how neuroscience can lead the quest for better inductive biases by providing useful constraints on representations and network architecture.
28 May 2020 · 46min

Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez - #378
Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. In our conversation, we explore Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which looks at compute-efficient training strategies for models. We discuss the two main problems being solved: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, is it really improving any efficiency?
25 May 2020 · 52min

The Physics of Data with Alpha Lee - #377
Today we’re joined by Alpha Lee, Winton Advanced Fellow in the Department of Physics at the University of Cambridge. Our conversation centers around Alpha’s research, which can be broken down into three main categories: data-driven drug discovery, material discovery, and physical analysis of machine learning. We discuss the similarities and differences between drug discovery and material science, his startup PostEra, which offers medicinal chemistry as a service powered by machine learning, and much more.
21 May 2020 · 33min

Is Linguistics Missing from NLP Research? w/ Emily M. Bender - #376 🦜
Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington. Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore whether we would be making more progress, on more solid foundations, if more linguists were involved in NLP research, or whether the progress we're making (e.g., with deep learning models like Transformers) is just fine.
18 May 2020 · 52min
