Predictive Maintenance Using Deep Learning and Reliability Engineering with Shayan Mortazavi - #540

Today we’re joined by Shayan Mortazavi, a data science manager at Accenture. In our conversation with Shayan, we discuss his talk from the recent SigOpt HPC & AI Summit, titled A Novel Framework for Predictive Maintenance Using DL and Reliability Engineering. In the talk, Shayan proposes a novel deep learning-based approach for prognosis prediction of oil and gas plant equipment in an effort to prevent critical damage or failure. We explore the evolution of reliability engineering, the decision to use a residual-based approach, rather than traditional anomaly detection, to determine when an anomaly is occurring, the challenges of using LSTMs when building these models, the amount of human labeling required to build the models, and much more! The complete show notes for this episode can be found at twimlai.com/go/540.

Episodes (777)

Generative AI at the Edge with Vinesh Sukumar - #623

Today we’re joined by Vinesh Sukumar, a senior director and head of AI/ML product management at Qualcomm Technologies. In our conversation with Vinesh, we explore how mobile and automotive devices have different requirements for AI models and how their AI stack helps developers create complex models on both platforms. We also discuss the growing interest in text-based input and the shift towards transformers, generative content, and recommendation engines. Additionally, we explore the challenges and opportunities for ML Ops investments on the edge, including the use of synthetic data and evolving models based on user data. Finally, we delve into the latest advancements in large language models, including Prometheus-style models and GPT-4. The complete show notes for this episode can be found at twimlai.com/go/623.

3 Apr 2023, 39min

Runway Gen-2: Generative AI for Video Creation with Anastasis Germanidis - #622

Today we’re joined by Anastasis Germanidis, Co-Founder and CTO of RunwayML. Amongst all the product and model releases over the past few months, Runway threw its hat into the ring with Gen-1, a model that can take still images or video and transform them into completely stylized videos. They followed that up just a few weeks later with the release of Gen-2, a multimodal model that can produce a video from text prompts. We had the pleasure of chatting with Anastasis about both models, exploring the challenges of generating video, the importance of alignment in model deployment, the potential use of RLHF, the deployment of models as APIs, and much more! The complete show notes for this episode can be found at twimlai.com/go/622.

27 Mar 2023, 49min

Watermarking Large Language Models to Fight Plagiarism with Tom Goldstein - #621

Today we’re joined by Tom Goldstein, an associate professor at the University of Maryland. Tom’s research sits at the intersection of ML and optimization and has previously been featured in the New Yorker for his work on invisibility cloaks, clothing that can evade object detection. In our conversation, we focus on his more recent research on watermarking LLM output. We explore the motivations behind adding these watermarks, how they work, and different ways a watermark could be deployed, as well as political and economic incentive structures around the adoption of watermarking and future directions for that line of work. We also discuss Tom’s research into data leakage, particularly in stable diffusion models, work that is analogous to recent guest Nicholas Carlini’s research into LLM data extraction.

20 Mar 2023, 51min

Does ChatGPT “Think”? A Cognitive Neuroscience Perspective with Anna Ivanova - #620

Today we’re joined by Anna Ivanova, a postdoctoral researcher at MIT Quest for Intelligence. In our conversation with Anna, we discuss her recent paper Dissociating language and thought in large language models: a cognitive perspective. In the paper, Anna reviews the capabilities of LLMs by considering their performance on two different aspects of language use: 'formal linguistic competence', which includes knowledge of rules and patterns of a given language, and 'functional linguistic competence', a host of cognitive abilities required for language understanding and use in the real world. We explore parallels between linguistic competence and AGI, the need to identify new benchmarks for these models, whether an end-to-end trained LLM can address various aspects of functional competence, and much more!  The complete show notes for this episode can be found at twimlai.com/go/620.

13 Mar 2023, 45min

Robotic Dexterity and Collaboration with Monroe Kennedy III - #619

Today we’re joined by Monroe Kennedy III, an assistant professor at Stanford, director of the Assistive Robotics and Manipulation Lab, and a national director of Black in Robotics. In our conversation with Monroe, we spend some time exploring the robotics landscape, getting Monroe’s thoughts on the current challenges in the field, as well as his opinion on choreographed demonstrations like the dancing Boston Dynamics machines. We also dig into his work around two distinct threads: Robotic Dexterity (what does it take to make robots capable of doing useful manipulation tasks with and for humans?) and Collaborative Robotics (how do we go beyond advanced autonomy in robots towards making effective robotic teammates capable of working with human counterparts?). Finally, we discuss DenseTact, an optical-tactile sensor capable of visualizing the deformed surface of a soft fingertip and using that image in a neural network to perform calibrated shape reconstruction and 6-axis wrench estimation. The complete show notes for this episode can be found at twimlai.com/go/619.

6 Mar 2023, 52min

Privacy and Security for Stable Diffusion and LLMs with Nicholas Carlini - #618

Today we’re joined by Nicholas Carlini, a research scientist at Google Brain. Nicholas works at the intersection of machine learning and computer security, and his recent paper “Extracting Training Data from LLMs” has generated quite a buzz within the ML community. In our conversation, we discuss the current state of adversarial machine learning research, the dynamic of dealing with privacy issues in black box vs accessible models, what privacy attacks in vision models like diffusion models look like, and the scale of “memorization” within these models. We also explore Nicholas’ work on data poisoning, which looks to understand what happens if a bad actor can take control of a small fraction of the data that an ML model is trained on. The complete show notes for this episode can be found at twimlai.com/go/618.

27 Feb 2023, 43min

Understanding AI’s Impact on Social Disparities with Vinodkumar Prabhakaran - #617

Today we’re joined by Vinodkumar Prabhakaran, a Senior Research Scientist at Google Research. In our conversation with Vinod, we discuss his two main areas of research: using ML, specifically NLP, to explore social disparities, and understanding how those same disparities are captured and propagated within machine learning tools. We explore a few specific projects, the first using NLP to analyze interactions between police officers and community members, determining factors like level of respect or politeness and how they play out across a spectrum of community members. We also discuss his work on understanding how bias creeps into the pipeline of building ML models, whether it comes from the data or the person building the model. Finally, for those working with human annotators, Vinod shares his thoughts on how to incorporate principles of fairness to help build more robust models. The complete show notes for this episode can be found at https://twimlai.com/go/617.

20 Feb 2023, 31min

AI Trends 2023: Causality and the Impact on Large Language Models with Robert Osazuwa Ness - #616

Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, to break down the latest trends in the world of causal modeling. In our conversation with Robert, we explore advances in areas like causal discovery, causal representation learning, and causal judgements. We also discuss the impact causality could have on large language models, especially in some of the recent use cases we’ve seen like Bing Search and ChatGPT. Finally, we discuss the benchmarks for causal modeling, the top causality use cases, and the most exciting opportunities in the field. The complete show notes for this episode can be found at twimlai.com/go/616.

14 Feb 2023, 1h 22min
