The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. The show is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.

Episodes (764)

Studying Machine Intelligence with Been Kim - #571

Today we continue our ICLR coverage joined by Been Kim, a staff research scientist at Google Brain and an ICLR 2022 invited speaker. Been, whose research has historically focused on interpretability in machine learning, delivered the keynote Beyond interpretability: developing a language to shape our relationships with AI, which argues that we need to study AI machines as scientific objects, both in isolation and in their interactions with humans. That study will yield principles for building tools, and is also necessary to take our working relationship with AI to the next level. Before we dig into Been’s talk, she characterizes where we are as an industry and community with interpretability, and what the current state of the art is for interpretability techniques. We explore how the Gestalt principles appear in neural networks, Been’s choice to frame communication with machines as a language rather than as a set of principles or a foundational understanding, and much much more. The complete show notes for this episode can be found at twimlai.com/go/571
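For listeners new to the area: one of the best-known techniques from Been’s earlier work is TCAV (Testing with Concept Activation Vectors). The sketch below is a simplified illustration of the core idea using synthetic arrays; in practice the activations and the gradient would come from a trained network, so everything here is a stand-in.

```python
# Simplified TCAV-style sketch: a concept activation vector (CAV) is the
# normal of a linear classifier separating "concept" examples from random
# examples in a layer's activation space; a concept's influence on a class
# is the directional derivative of the class logit along the CAV.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical hidden-layer activations (n_examples x n_units); in a real
# run these are extracted from a trained network.
concept_acts = rng.normal(loc=1.0, size=(100, 64))  # e.g. "striped" images
random_acts = rng.normal(loc=0.0, size=(100, 64))   # random counterexamples

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])   # unit-length CAV

# Gradient of a class logit w.r.t. the layer activations for one input
# (faked here); a positive dot product means the concept pushes the logit up.
logit_grad = rng.normal(size=64)
sensitivity = float(logit_grad @ cav)
print(f"concept sensitivity: {sensitivity:+.3f}")
```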

9 May 2022 · 52 min

Advances in Neural Compression with Auke Wiggers - #570

Today we’re joined by Auke Wiggers, an AI research scientist at Qualcomm. In our conversation with Auke, we discuss his team’s recent research on data compression using generative models. We discuss the relationship between historical compression research and the current trend of neural compression, and the benefit of neural codecs, which learn to compress data from examples. We also explore the performance evaluation process and the recent developments that show that these models can operate in real-time on a mobile device. Finally, we discuss another ICLR paper, “Transformer-based transform coding”, that proposes a vision transformer-based architecture for image and video coding, and some of his team’s other accepted works at the conference.  The complete show notes for this episode can be found at twimlai.com/go/570
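As a rough illustration of the neural codec idea (not Qualcomm’s actual architecture), the sketch below pairs a convolutional encoder with integer quantization at the bottleneck and a decoder. A real codec would also learn an entropy model over the quantized symbols to estimate and minimize bitrate; this toy model and its shapes are invented for illustration.

```python
# Minimal sketch of a learned codec: encode to a compact latent, quantize
# to integers (the symbols that would be entropy-coded), then reconstruct.
import torch
import torch.nn as nn

class TinyCodec(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 8, 5, stride=2, padding=2),  # 8-channel latent
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 32, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 5, stride=2, padding=2,
                               output_padding=1),
        )

    def forward(self, x):
        latent = self.encoder(x)
        # Straight-through rounding: quantize on the forward pass but let
        # gradients flow as if it were the identity, so training still works.
        quantized = latent + (torch.round(latent) - latent).detach()
        return self.decoder(quantized), quantized

codec = TinyCodec()
image = torch.rand(1, 3, 64, 64)
reconstruction, symbols = codec(image)
print(reconstruction.shape, symbols.shape)  # (1, 3, 64, 64), (1, 8, 16, 16)
```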

2 May 2022 · 37 min

Mixture-of-Experts and Trends in Large-Scale Language Modeling with Irwan Bello - #569

Today we’re joined by Irwan Bello, formerly a research scientist at Google Brain and now on the founding team at a stealth AI startup. We begin our conversation with an exploration of Irwan’s recent paper, Designing Effective Sparse Expert Models, which acts as a design guide for building sparse large language model architectures. We discuss mixture-of-experts as a technique, the scalability of the method, its applicability beyond NLP tasks, and the datasets the experiments were benchmarked against. We also explore Irwan’s interest in the research areas of alignment and retrieval, talking through interesting lines of work in each, including instruction tuning and direct alignment. The complete show notes for this episode can be found at twimlai.com/go/569
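For context, the sketch below shows the core mixture-of-experts mechanic with a simplified top-1 (Switch-style) router: per-token compute stays constant while total parameter count grows with the expert count. It illustrates the general technique, not code from the paper, and omits refinements like load-balancing losses and capacity factors that a real design guide addresses.

```python
# Minimal sparse mixture-of-experts layer with top-1 routing: a learned
# router sends each token to exactly one expert MLP.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top1MoE(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, num_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                       # x: (tokens, d_model)
        gate_probs = F.softmax(self.router(x), dim=-1)
        top_prob, top_idx = gate_probs.max(dim=-1)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                 # tokens routed to expert i
            if mask.any():
                # Scale by the gate probability so routing stays
                # differentiable through the router weights.
                out[mask] = top_prob[mask, None] * expert(x[mask])
        return out

moe = Top1MoE()
tokens = torch.randn(10, 64)
print(moe(tokens).shape)  # torch.Size([10, 64])
```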

25 April 2022 · 46 min

Daring to DAIR: Distributed AI Research with Timnit Gebru - #568

Today we’re joined by friend of the show Timnit Gebru, the founder and executive director of DAIR, the Distributed Artificial Intelligence Research Institute. In our conversation with Timnit, we discuss her journey to create DAIR, the institute’s goals, and some of the challenges she’s faced along the way. We start in the obvious place: Timnit being “resignated” from Google after writing and publishing a paper detailing the dangers of large language models, the fallout from that paper and her firing, and the eventual founding of DAIR. We discuss the importance of the “distributed” nature of the institute, how they’re figuring out what is in and out of scope for the institute’s research charter, and what building an institution means to her. We also explore the importance of independent alternatives to traditional research structures, whether we should be pessimistic about the impact of internal ethics and responsible AI teams in industry due to the overwhelming power they wield, examples she looks to of what not to do when building out the institute, and much much more! The complete show notes for this episode can be found at twimlai.com/go/568

18 April 2022 · 51 min

Hierarchical and Continual RL with Doina Precup - #567

Today we’re joined by Doina Precup, a research team lead at DeepMind Montreal and a professor at McGill University. In our conversation with Doina, we discuss her recent research interests, including her work in hierarchical reinforcement learning, where the goal is agents that learn abstract representations, especially over time. We also explore her work on reward specification for RL agents, where she hypothesizes that a reward signal in a complex environment could lead an agent to develop attributes of intuitive intelligence. We also dig into quite a few of her papers, including On the Expressivity of Markov Reward, which won a NeurIPS 2021 Outstanding Paper Award. Finally, we discuss the analogy between hierarchical RL and CNNs, her work in continual RL, her thoughts on the evolution of RL in the recent past and present, and the biggest challenges facing the field going forward. The complete show notes for this episode can be found at twimlai.com/go/567
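Doina is a co-author of the original options framework, the formal backbone of much hierarchical RL. The toy sketch below illustrates the three ingredients of an option; the one-dimensional corridor environment and the "go right" option are invented purely for illustration.

```python
# Sketch of the options framework: an option bundles an initiation set,
# an intra-option policy, and a termination condition, and the agent acts
# at the level of options rather than primitive actions.
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    can_start: Callable[[int], bool]     # initiation set I
    policy: Callable[[int], int]         # intra-option policy pi(s)
    terminates: Callable[[int], float]   # termination probability beta(s)

def run_option(env_step, state, option):
    """Execute an option until it terminates; return final state, reward."""
    assert option.can_start(state)
    total = 0.0
    while True:
        state, reward = env_step(state, option.policy(state))
        total += reward
        if random.random() < option.terminates(state):
            return state, total

# Toy 1-D corridor: states 0..10, actions +1/-1, reward 1 at state 10.
def env_step(state, action):
    nxt = max(0, min(10, state + action))
    return nxt, 1.0 if nxt == 10 else 0.0

go_right = Option(can_start=lambda s: s < 10,
                  policy=lambda s: +1,
                  terminates=lambda s: 1.0 if s == 10 else 0.1)

print(run_option(env_step, 0, go_right))
```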

11 April 2022 · 50 min

Open-Source Drug Discovery with DeepChem with Bharath Ramsundar - #566

Today we’re joined by Bharath Ramsundar, founder and CEO of Deep Forest Sciences. In our conversation with Bharath, we explore his work on DeepChem, an open-source library of tools for drug discovery, materials science, quantum chemistry, and biology. We discuss the challenges that biotech and pharmaceutical companies face as they attempt to incorporate AI into the drug discovery process, where the innovation frontier is, and what the promise of AI in this field is in the near term. We also dig into the origins of DeepChem and the problems it’s solving for practitioners, the capabilities that this library enables relative to others, and MoleculeNet, a dataset collection and benchmark focused on molecular design that lives within the DeepChem suite. The complete show notes for this episode can be found at twimlai.com/go/566
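To give a flavor of the library, here is a hedged sketch of a typical DeepChem workflow based on its documented MoleculeNet loaders; exact signatures can differ between releases, and the model hyperparameters here are arbitrary.

```python
# Sketch: load a MoleculeNet benchmark, fit a model, evaluate the test split.
import deepchem as dc

# Delaney (ESOL) solubility is one of the MoleculeNet regression benchmarks;
# the ECFP featurizer produces 1024-bit molecular fingerprints.
tasks, (train, valid, test), transformers = dc.molnet.load_delaney(
    featurizer="ECFP", splitter="random")

model = dc.models.MultitaskRegressor(
    n_tasks=len(tasks), n_features=1024, layer_sizes=[256, 128])
model.fit(train, nb_epoch=20)

metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print(model.evaluate(test, [metric], transformers))
```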

4 April 2022 · 29 min

Advancing Hands-On Machine Learning Education with Sebastian Raschka - #565

Today we’re joined by Sebastian Raschka, an assistant professor at the University of Wisconsin-Madison and lead AI educator at Grid.ai. In our conversation with Sebastian, we explore his work around AI education, including the “hands-on” philosophy he takes when building his courses, his recent book Machine Learning with PyTorch and Scikit-Learn, his advice to beginners in the field when choosing tools and frameworks, and more. We also discuss his work on PyTorch Lightning, a platform that allows users to organize their code and integrate it with other technologies, before switching gears to discuss his recent research efforts around ordinal regression, including a ton of great references that we’ll link on the show notes page below! The complete show notes for this episode can be found at twimlai.com/go/565
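As a taste of the ordinal regression topic, the sketch below shows the extended-binary-label trick used in this line of work (including Raschka’s CORAL papers): an ordinal label becomes a set of "is the rank greater than k?" targets, and the predicted rank is recovered by counting cleared thresholds. This is a general illustration, not code from his papers.

```python
# Ordinal regression via extended binary classification: a rank
# y in {0..K-1} maps to K-1 binary targets "y > k".
import torch

def ordinal_targets(y, num_classes):
    """y: (n,) integer ranks -> (n, K-1) binary 'y > k' targets."""
    thresholds = torch.arange(num_classes - 1)          # k = 0..K-2
    return (y[:, None] > thresholds[None, :]).float()

def decode_rank(probs):
    """probs: (n, K-1) P(y > k) -> predicted rank by threshold counting."""
    return (probs > 0.5).sum(dim=-1)

y = torch.tensor([0, 2, 4])
targets = ordinal_targets(y, num_classes=5)
print(targets)               # rows: [0,0,0,0], [1,1,0,0], [1,1,1,1]
print(decode_rank(targets))  # tensor([0, 2, 4])
```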

28 March 2022 · 40 min

Big Science and Embodied Learning at Hugging Face 🤗 with Thomas Wolf - #564

Today we’re joined by Thomas Wolf, co-founder and chief science officer at Hugging Face 🤗. We cover a ton of ground in our conversation, starting with Thomas’ interesting backstory as a quantum physicist and patent lawyer, and how that led him to a career in machine learning. We explore how Hugging Face began, the company’s current direction, and how much of its focus is NLP and language models versus other disciplines. We also discuss the BigScience project, a year-long research workshop in which 1000+ researchers of all backgrounds and disciplines have come together to create an 800GB multilingual dataset and model. We talk through their approach to curating the dataset, model evaluation at this scale, and how they differentiate their work from projects like EleutherAI. Finally, we dig into Thomas’ work on multimodality, his thoughts on the metaverse, his new book NLP with Transformers, and much more! The complete show notes for this episode can be found at twimlai.com/go/564
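For listeners who haven’t used it, the snippet below shows the smallest entry point to Hugging Face’s transformers library; the checkpoint is whatever default the installed version selects, so exact outputs will vary.

```python
# The pipeline API downloads a pretrained model from the Hugging Face Hub
# and wraps tokenization, inference, and output decoding in one call.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # uses a default checkpoint
print(classifier("The BigScience workshop shipped a multilingual model."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```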

21 March 2022 · 47 min
