Learning to Ponder: Memory in Deep Neural Networks with Andrea Banino - #528

Today we’re joined by Andrea Banino, a research scientist at DeepMind. In our conversation with Andrea, we explore his interest in artificial general intelligence by way of episodic memory, the relationship between memory and intelligence, the challenges of applying memory in the context of neural networks, and how to overcome problems of generalization. We also discuss his work on PonderNet, a neural network that “budgets” its computational investment in a problem according to the problem’s inherent complexity, the impetus and goals of this research, and how PonderNet connects to his memory research. The complete show notes for this episode can be found at twimlai.com/go/528.
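
The description above sketches PonderNet’s core idea: a recurrent model that emits a halting probability at each computation step, so it can stop early on easy inputs and "ponder" longer on hard ones. Below is a minimal, hedged sketch of that adaptive-halting mechanism in PyTorch; the module structure, GRU cell, dimensions, and loss weighting are illustrative assumptions, not the paper’s actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PonderStep(nn.Module):
    """One 'pondering' step: update a hidden state, then emit a step
    prediction and the probability of halting at this step."""
    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.cell = nn.GRUCell(input_dim, hidden_dim)
        self.predict = nn.Linear(hidden_dim, output_dim)
        self.halt = nn.Linear(hidden_dim, 1)

    def forward(self, x, h):
        h = self.cell(x, h)
        y = self.predict(h)
        lam = torch.sigmoid(self.halt(h)).squeeze(-1)  # per-example halting prob.
        return y, lam, h

def ponder_forward(step, x, hidden_dim, max_steps=10):
    """Run up to max_steps steps; return per-step predictions and the
    probability of halting at each step (a proper distribution over steps)."""
    h = x.new_zeros(x.size(0), hidden_dim)
    not_halted = x.new_ones(x.size(0))   # P(still running before step n)
    preds, p_halt = [], []
    for n in range(max_steps):
        y, lam, h = step(x, h)
        if n == max_steps - 1:
            lam = torch.ones_like(lam)   # force halting at the final step
        preds.append(y)
        p_halt.append(not_halted * lam)  # P(halt exactly at step n)
        not_halted = not_halted * (1.0 - lam)
    return torch.stack(preds), torch.stack(p_halt)

def expected_loss(preds, p_halt, target):
    """Weight each step's task loss by its halting probability, so the budget
    of computation adapts to problem difficulty. (The paper also regularizes
    the halting distribution; that term is omitted in this sketch.)"""
    per_step = torch.stack(
        [F.mse_loss(y, target, reduction="none").mean(-1) for y in preds])
    return (p_halt * per_step).sum(0).mean()

# Usage sketch (hypothetical dimensions):
# step = PonderStep(input_dim=16, hidden_dim=64, output_dim=1)
# x, target = torch.randn(32, 16), torch.randn(32, 1)
# preds, p_halt = ponder_forward(step, x, hidden_dim=64)
# loss = expected_loss(preds, p_halt, target)
```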

Episodes (778)

Engineering a Less Artificial Intelligence with Andreas Tolias - #379

Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine. We caught up with Andreas to discuss his recent perspective piece, “Engineering a Less Artificial Intelligence,” which explores the shortcomings of state-of-the-art learning algorithms in comparison to the brain. The paper also offers several ideas about how neuroscience can lead the quest for better inductive biases by providing useful constraints on representations and network architecture.

28 May 2020 · 46 min

Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez - #378

Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. In our conversation, we explore Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which looks at compute-efficient training strategies for models. We discuss the two main problems being addressed: 1) how can we rapidly iterate on variations in architecture? And 2) if we make models bigger, does it really improve efficiency?

25 May 2020 · 52 min

The Physics of Data with Alpha Lee - #377

Today we’re joined by Alpha Lee, Winton Advanced Fellow in the Department of Physics at the University of Cambridge. Our conversation centers around Alpha’s research, which can be broken down into three main categories: data-driven drug discovery, material discovery, and physical analysis of machine learning. We discuss the similarities and differences between drug discovery and material science, his startup PostEra, which offers medicinal chemistry as a service powered by machine learning, and much more.

21 May 2020 · 33 min

Is Linguistics Missing from NLP Research? w/ Emily M. Bender - #376 🦜

Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington. Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore whether we would be making more progress, on more solid foundations, if more linguists were involved in NLP research, or whether the progress we're making (e.g., with deep learning models like Transformers) is just fine.

18 May 2020 · 52 min

Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks with Nataniel Ruiz - #375

Today we’re joined by Nataniel Ruiz, a PhD student at Boston University. We caught up with Nataniel to discuss his paper “Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.” In our conversation, we discuss the core ideas behind the work, some of the challenging parts of implementing it, potential scenarios in which it could be deployed, and the broader contributions that went into it.

14 May 2020 · 42 min

Understanding the COVID-19 Data Quality Problem with Sherri Rose - #374

Today we’re joined by Sherri Rose, Associate Professor at Harvard Medical School. We cover a lot of ground in our conversation, including the intersection of her research with the current COVID-19 pandemic, the importance of quality in datasets and rigor when publishing papers, and the pitfalls of using causal inference. We also touch on Sherri’s work in algorithmic fairness, the shift she’s seen in fairness conferences covering these issues in relation to healthcare research, and a few recent papers.

11 May 2020 · 44 min

The Whys and Hows of Managing Machine Learning Artifacts with Lukas Biewald - #373

Today we’re joined by Lukas Biewald, founder and CEO of Weights & Biases, to discuss their new tool Artifacts, an end-to-end pipeline tracker. In our conversation, we explore Artifacts’ place in the broader machine learning tooling ecosystem through the lens of our eBook “The definitive guide to ML Platforms” and how it fits with the W&B model management platform. We also discuss what exactly “Artifacts” are, what the tool is tracking, and take a look at the onboarding process for users.

7 May 2020 · 54 min

Language Modeling and Protein Generation at Salesforce with Richard Socher - #372

Today we’re joined by Richard Socher, Chief Scientist and Executive VP at Salesforce. Richard and his team have published quite a few great projects lately, including CTRL: A Conditional Transformer Language Model for Controllable Generation, and ProGen, an AI Protein Generator, both of which we cover in-depth in this conversation. We also explore the balancing act between research investments, product requirements, and other priorities at a large product-focused company like Salesforce.

4 May 2020 · 42 min
