LSTMs, Plus a Deep Learning History Lesson with Jürgen Schmidhuber - TWiML Talk #44

This week we have a very special interview to share with you! Those of you who’ve been receiving my newsletter for a while might remember that while in Switzerland last month, I had the pleasure of interviewing Jürgen Schmidhuber at IDSIA, the Dalle Molle Institute for Artificial Intelligence Research in Lugano, Switzerland, where he serves as Scientific Director. In addition to his role at IDSIA, Jürgen is also Co-Founder and Chief Scientist of NNaisense, a company that is using AI to build large-scale neural network solutions for “superhuman perception and intelligent automation.” Jürgen is an interesting, accomplished, and, in some circles, controversial figure in the AI community, and we covered so much interesting ground in our discussion that I couldn’t truly unpack it all until I had a chance to sit with it after the fact. We talked a great deal about his work on neural networks, especially LSTMs, or Long Short-Term Memory networks, which are a key innovation behind many of the advances we’ve seen in deep learning and its applications over the past few years. Along the way, Jürgen walks us through a deep learning history lesson that spans 50+ years. It was like walking back in time with the three-eyed raven. I know you’re really going to enjoy this one, and by the way, this is definitely a nerd alert show! For the show notes, visit twimlai.com/talk/44
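
For those who'd like to see what the episode's subject looks like in code, here is a minimal LSTM sketch in PyTorch. This is purely illustrative and not from the episode; the layer sizes and input shapes are arbitrary examples.

```python
# Minimal, illustrative LSTM sketch in PyTorch (not from the episode;
# all shapes and hyperparameters here are arbitrary examples).
import torch
import torch.nn as nn

# 10-dimensional inputs, 20-dimensional hidden state, 2 stacked layers
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)

x = torch.randn(4, 7, 10)        # batch of 4 sequences, 7 time steps each
output, (h_n, c_n) = lstm(x)     # output holds the hidden state at every step

print(output.shape)  # torch.Size([4, 7, 20])
print(h_n.shape)     # torch.Size([2, 4, 20]), final hidden state per layer
print(c_n.shape)     # torch.Size([2, 4, 20]), the cell state that carries
                     # long-range information across time steps
```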

Episodes (764)

AI Access and Inclusivity as a Technical Challenge with Prem Natarajan - #658

Today we’re joined by Prem Natarajan, chief scientist and head of enterprise AI at Capital One. In our conversation, we discuss AI access and inclusivity as technical challenges and explore some of Prem and his team’s multidisciplinary approaches to tackling these complexities. We dive into the issues of bias, dealing with class imbalances, and the integration of various research initiatives to achieve additive results. Prem also shares his team’s work on foundation models for financial data curation, highlighting the importance of data quality and the use of federated learning, and emphasizing the impact these factors have on model performance and reliability in critical applications like fraud detection. Lastly, Prem shares his overall approach to tackling AI research in the context of a banking enterprise, which includes prioritizing mission-inspired research aimed at delivering tangible benefits to customers and the broader community, investing in diverse talent and the best infrastructure, and forging strategic partnerships with a variety of academic labs. The complete show notes for this episode can be found at twimlai.com/go/658.
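
As a concrete illustration of the class-imbalance problem mentioned above, here is a minimal sketch using class weighting on synthetic data. This is a generic technique, not a description of Capital One's actual pipeline.

```python
# Hedged sketch: one common way to handle class imbalance in a fraud-style
# binary classification task, class weighting. All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic, heavily imbalanced data: roughly 1% "fraud" positives
X, y = make_classification(n_samples=10_000, weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights the loss inversely to class frequency,
# so rare positives are not drowned out by the majority class.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```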

4 Dec 2023 · 41min

Building LLM-Based Applications with Azure OpenAI with Jay Emery - #657

Today we’re joined by Jay Emery, director of technical sales & architecture at Microsoft Azure. In our conversation with Jay, we discuss the challenges organizations face when building LLM-based applications, and we explore some of the techniques they are using to overcome them. We dive into the concerns around security, data privacy, cost management, and performance, as well as the effectiveness of prompting versus fine-tuning in achieving the desired results, and when each approach should be applied. We cover methods such as prompt tuning, prompt chaining, prompt variance, fine-tuning, and RAG to enhance LLM output, along with ways to speed up inference performance, such as choosing the right model, parallelization, and provisioned throughput units (PTUs). In addition, Jay shares several intriguing use cases describing how businesses use tools like Azure Machine Learning prompt flow and Azure ML AI Studio to tailor LLMs to their unique needs and processes. The complete show notes for this episode can be found at twimlai.com/go/657.
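
To make the prompt-chaining idea concrete, here is a hedged sketch against an Azure OpenAI deployment using the official openai Python client. The endpoint, key, deployment name, and API version are placeholders, and the two-step chain is an invented example rather than one from the episode.

```python
# Hedged sketch of "prompt chaining" against an Azure OpenAI deployment:
# the output of one call is fed into the next prompt. Endpoint, key,
# deployment name, and API version below are placeholders.
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR_KEY",
    api_version="2024-02-01",
)

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="YOUR-DEPLOYMENT",  # Azure uses the *deployment* name here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Chain step 1: extract facts; step 2: summarize using only those facts.
facts = ask("List the key facts in this support ticket: <ticket text>")
summary = ask(f"Write a two-sentence summary using only these facts:\n{facts}")
print(summary)
```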

28 Nov 2023 · 43min

Visual Generative AI Ecosystem Challenges with Richard Zhang - #656

Today we’re joined by Richard Zhang, senior research scientist at Adobe Research. In our conversation with Richard, we explore the research challenges that arise when viewing visual generative AI from an ecosystem perspective, considering the disparate needs of creators, consumers, and contributors. We start with his work on perceptual metrics and the LPIPS paper, which allows us to better align computer vision with human perception and which remains in use in contemporary generative AI applications such as Stable Diffusion, GANs, and latent diffusion. We look at his work creating detection tools for fake visual content, highlighting the importance of generalizing these detection methods to new, unseen models. Lastly, we dig into his work on data attribution and concept ablation, which aims to address the challenging open problem of allowing artists and others to manage their contributions to generative AI training data sets. The complete show notes for this episode can be found at twimlai.com/go/656.
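
For readers who want to try the perceptual metric discussed above, here is a minimal sketch using the reference lpips package; the random tensors stand in for real images.

```python
# Hedged sketch of computing the LPIPS perceptual distance using the
# reference `lpips` package (pip install lpips). Inputs are random
# tensors here purely for illustration.
import torch
import lpips

loss_fn = lpips.LPIPS(net="alex")  # AlexNet-based variant of the metric

# LPIPS expects NCHW image tensors scaled to [-1, 1]
img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1

d = loss_fn(img0, img1)  # higher value = more perceptually different
print(d.item())
```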

20 Nov 2023 · 40min

Deploying Edge and Embedded AI Systems with Heather Gorr - #655

Today we’re joined by Heather Gorr, principal MATLAB product marketing manager at MathWorks. In our conversation with Heather, we discuss the deployment of AI models to hardware devices and embedded AI systems. We explore the factors to consider during data preparation, model development, and ultimately deployment to ensure a successful project: device constraints and latency requirements, which dictate the amount and frequency of data flowing onto the device; modeling needs such as explainability, robustness, and quantization; the use of simulation throughout the modeling process; the need to apply robust verification and validation methodologies to ensure safety and reliability; and the need to adapt and apply MLOps techniques for speed and consistency. Heather also shares noteworthy anecdotes about embedded AI deployments in industries including automotive and oil & gas. The complete show notes for this episode can be found at twimlai.com/go/655.
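
The episode centers on MATLAB tooling, but as a rough, framework-neutral illustration of the quantization step mentioned above, here is a sketch of post-training dynamic quantization in PyTorch. It is not the MathWorks workflow; the model and sizes are arbitrary.

```python
# Illustrative sketch, not the MathWorks workflow: PyTorch post-training
# dynamic quantization applied to a tiny model to shrink it for deployment.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Convert Linear weights to int8; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # same interface, smaller memory footprint
```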

13 Nov 2023 · 38min

AI Sentience, Agency and Catastrophic Risk with Yoshua Bengio - #654

Today we’re joined by Yoshua Bengio, professor at Université de Montréal. In our conversation with Yoshua, we discuss AI safety and the potentially catastrophic risks of AI misuse. Yoshua highlights various risks and the dangers of AI being used to manipulate people, spread disinformation, cause harm, and further concentrate power in society. We dive deep into the risks associated with AI achieving human-level competence in enough areas, and tackle the challenges of defining and understanding concepts like agency and sentience. Additionally, our conversation touches on solutions to AI safety, such as the need for robust safety guardrails, investments in national security protections and countermeasures, bans on systems with uncertain safety, and the development of governance-driven AI systems. The complete show notes for this episode can be found at twimlai.com/go/654.

6 Nov 2023 · 48min

Delivering AI Systems in Highly Regulated Environments with Miriam Friedel - #653

Today we’re joined by Miriam Friedel, senior director of ML engineering at Capital One. In our conversation with Miriam, we discuss some of the challenges faced when delivering machine learning tools and systems in highly regulated enterprise environments, and some of the practices her teams have adopted to help them operate with greater speed and agility. We also explore how to create a culture of collaboration, the value of standardized tooling and processes, leveraging open source software, and incentivizing model reuse. Miriam also shares her thoughts on building a ‘unicorn’ team and what this means for the team she’s built at Capital One, as well as her take on build vs. buy decisions for MLOps, and the future of MLOps and enterprise AI more broadly. Throughout, Miriam shares examples of these ideas at work in some of the tools her team has built, such as Rubicon, an open source experiment management tool, and Kubeflow pipeline components that enable Capital One data scientists to efficiently leverage and scale models. The complete show notes for this episode can be found at twimlai.com/go/653.

30 Oct 2023 · 44min

Mental Models for Advanced ChatGPT Prompting with Riley Goodside - #652

Today we’re joined by Riley Goodside, staff prompt engineer at Scale AI. In our conversation with Riley, we explore LLM capabilities and limitations, prompt engineering, and the mental models required to apply advanced prompting techniques. We dive deep into understanding LLM behavior, discussing the mechanism of autoregressive inference, comparing k-shot and zero-shot prompting, and dissecting the impact of RLHF. We also discuss the idea of prompting as a scaffolding structure that leverages the model's context to elicit the desired behavior and response, rather than as an exercise that relies solely on writing ability. The complete show notes for this episode can be found at twimlai.com/go/652.
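
A small sketch of the zero-shot vs. k-shot distinction discussed in the episode: a k-shot prompt prepends worked examples as scaffolding so that autoregressive inference continues the established pattern. The sentiment task and examples here are invented for illustration.

```python
# Illustrative prompt construction: zero-shot vs. k-shot scaffolding.
def zero_shot(review: str) -> str:
    # No examples; the model must infer the task format from instructions alone.
    return (
        "Classify the sentiment of this review as positive or negative.\n"
        f"Review: {review}\nSentiment:"
    )

def k_shot(review: str, examples: list[tuple[str, str]]) -> str:
    # Worked examples establish a pattern for the model to continue.
    shots = "\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return (
        "Classify the sentiment of each review as positive or negative.\n"
        f"{shots}\nReview: {review}\nSentiment:"
    )

examples = [("Loved it!", "positive"), ("Total waste of money.", "negative")]
print(k_shot("The battery died in an hour.", examples))
```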

23 Oct 2023 · 39min

Multilingual LLMs and the Values Divide in AI with Sara Hooker - #651

Today we’re joined by Sara Hooker, director at Cohere and head of Cohere For AI, Cohere’s research lab. In our conversation with Sara, we explore some of the challenges with multilingual models, such as poor data quality and tokenization, and how the team relies on data augmentation and preference training to address these bottlenecks. We also discuss the motivations behind the Mixture of Experts technique and its disadvantages, and the importance of a common language between ML researchers and hardware architects to address pain points in frameworks and create better cohesion between these distinct communities. Sara also highlights the impact of language models and the emotional connection they have created in society, the benefits and current safety concerns of universal models, and the significance of having grounded conversations to characterize and mitigate the risks of developing AI models. Along the way, we also dive deep into Cohere and Cohere For AI, along with their Aya project, an open science project that aims to build a state-of-the-art multilingual generative language model, as well as some of their recent research papers. The complete show notes for this episode can be found at twimlai.com/go/651.
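
A quick sketch of the tokenization bottleneck mentioned above: tokenizers trained predominantly on English text tend to split other languages into many more pieces, inflating cost and degrading quality. GPT-2's tokenizer is used here only as a well-known example, and the sentences are invented.

```python
# Illustrative comparison of token counts across languages using a
# predominantly English-trained tokenizer (GPT-2, via Hugging Face).
from transformers import AutoTokenizer  # pip install transformers

tok = AutoTokenizer.from_pretrained("gpt2")

for text in ["The weather is nice today.", "Vädret är fint idag."]:
    pieces = tok.tokenize(text)
    print(f"{len(pieces):2d} tokens: {text}")
# The non-English sentence typically splits into noticeably more pieces,
# which is one source of the data-quality and cost gaps discussed above.
```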

16 Oct 2023 · 1h 18min
