The Future of Education and AI with Salman Khan - #423

In the final #TWIMLfest Keynote Interview, we’re joined by Salman Khan, Founder of Khan Academy. In our conversation with Sal, we explore the amazing origin story of the academy and how the coronavirus pandemic is shaping the future of education and remote and distance learning, for better and for worse. We also explore Sal’s perspective on the broad use of machine learning and AI in education, the potential of injecting a platform like Khan Academy with ML and AI for course recommendations, and whether they plan to implement these features in the future. Finally, Sal shares some great stories about the impact of community and opportunity, and offers advice for learners within the TWIML community and beyond! The complete show notes for this episode can be found at twimlai.com/go/423.

Episodes (775)

Privacy vs Fairness in Computer Vision with Alice Xiang - #637

Today we’re joined by Alice Xiang, Lead Research Scientist at Sony AI and Global Head of AI Ethics at Sony Group Corporation. In our conversation with Alice, we discuss the ongoing debate between privacy and fairness in computer vision, diving into the impact of data privacy laws on the AI space and highlighting concerns about unauthorized use and lack of transparency in data usage. We explore the potential harm of inaccurate AI model outputs and the need for legal protection against biased AI products, and Alice suggests solutions to these challenges, such as working through third parties for data collection and establishing closer relationships with communities. Finally, we talk through the history of unethical data collection practices in CV and the emergence of generative AI technologies that exacerbate the problem, along with the importance of operationalizing ethical data collection, including appropriate consent, representation, diversity, and compensation. We also discuss the need for interdisciplinary collaboration in AI ethics and the growing interest in AI regulation, including the EU AI Act and regulatory activity in the US. The complete show notes for this episode can be found at twimlai.com/go/637.

10 July 2023 · 37 min

Unifying Vision and Language Models with Mohit Bansal - #636

Today we're joined by Mohit Bansal, Parker Professor and Director of the MURGe-Lab at UNC Chapel Hill. In our conversation with Mohit, we explore the concept of unification in AI models, highlighting the advantages of shared knowledge and efficiency. He addresses the challenges of evaluation in generative AI, including biases and spurious correlations. Mohit introduces groundbreaking models such as UDOP and VL-T5, which achieved state-of-the-art results on a variety of vision and language tasks while using fewer parameters. Finally, we discuss the importance of data efficiency, evaluating bias in models, and the future of multimodal models and explainability. The complete show notes for this episode can be found at twimlai.com/go/636.

3 July 2023 · 48 min

Data Augmentation and Optimized Architectures for Computer Vision with Fatih Porikli - #635

Today we kick off our coverage of the 2023 CVPR conference, joined by Fatih Porikli, Senior Director of Technology at Qualcomm. In our conversation with Fatih, we cover quite a bit of ground, touching on a total of 12 papers and demos focused on topics like data augmentation and optimized architectures for computer vision. We explore advances in optical flow estimation networks, cross-modal and cross-stage knowledge distillation for efficient 3D object detection, and zero-shot learning via language models for fine-grained labeling. We also discuss advances in generative AI and computer vision optimizations for running large models on edge devices. Finally, we discuss objective functions, architecture design choices for neural networks, and the efficiency and accuracy improvements in AI models enabled by the techniques introduced in the papers.

26 June 2023 · 52 min
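
For readers who want a concrete picture of the data augmentation theme in this episode, here is a small, generic torchvision sketch; the transforms and parameters are illustrative defaults, not settings from any of the papers discussed.

```python
from torchvision import transforms

# A typical training-time augmentation pipeline for image classification.
# The specific transforms and magnitudes are illustrative choices only.
train_augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),    # random crop + resize
    transforms.RandomHorizontalFlip(p=0.5),                  # mirror half the images
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],          # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Usage (assuming a PIL image named img):
# x = train_augment(img)   # tensor of shape (3, 224, 224)
```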

Mojo: A Supercharged Python for AI with Chris Lattner - #634

Today we’re joined by Chris Lattner, Co-Founder and CEO of Modular. In our conversation with Chris, we discuss Mojo, a new programming language for AI developers. Mojo is unique in this space in that it makes the entire stack accessible and understandable to people who are not compiler engineers. It also gives Python programmers a path to high performance and to targeting accelerators, opening the stack up to more developers and researchers. We discuss the relationship between the Modular Engine and Mojo, the challenge of packaging Python, particularly when incorporating C code, and how Mojo aims to solve these problems to make the AI stack more dependable. The complete show notes for this episode can be found at twimlai.com/go/634.

19 June 2023 · 57 min

Stable Diffusion and LLMs at the Edge with Jilei Hou - #633

Today we’re joined by Jilei Hou, VP of Engineering at Qualcomm Technologies. In our conversation with Jilei, we focus on the emergence of generative AI and how his team has worked toward making these models available on edge devices. We explore how distributing models across devices can help amortize the costs of large models while improving reliability and performance, and the challenges of running machine learning workloads on devices, including model size and inference latency. Finally, we explore how these emerging technologies fit into the existing AI Model Efficiency Toolkit (AIMET) framework. The complete show notes for this episode can be found at twimlai.com/go/633.

12 June 2023 · 40 min
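
For a hands-on flavor of the model-efficiency work needed to run large models on edge devices, here is a minimal sketch using PyTorch's built-in dynamic quantization. This is a generic illustration, not Qualcomm's AIMET toolkit referenced in the episode, which offers more advanced techniques such as cross-layer equalization and AdaRound.

```python
import torch
import torch.nn as nn

# A toy model standing in for a much larger network.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 256),
).eval()

# Post-training dynamic quantization: Linear weights are stored in int8 and
# dequantized on the fly, shrinking the model and speeding up CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 256])
```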

Modeling Human Behavior with Generative Agents with Joon Sung Park - #632

Today we’re joined by Joon Sung Park, a PhD student at Stanford University. Joon shares his passion for creating AI systems that can solve human problems and his work on the recent paper Generative Agents: Interactive Simulacra of Human Behavior, which showcases generative agents that exhibit believable human behavior. We discuss using empirical methods to study these systems and the conflicting papers on whether AI models have a worldview and common sense. Joon talks about the importance of context and environment in creating believable agent behavior and shares his team's work on scaling emergent community behaviors. He also dives into the importance of a long-term memory module in agents and the use of knowledge graphs to retrieve associative information. The goal, Joon explains, is to create something people can enjoy and that empowers them, solving existing problems and challenges in the traditional HCI and AI fields.

5 June 2023 · 46 min
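
For readers curious about the long-term memory module Joon mentions, the Generative Agents paper scores candidate memories by combining recency, importance, and relevance before placing the top results in the agent's context. The Python sketch below illustrates that idea; the equal weighting, decay rate, and data structures are simplifying assumptions, not the paper's exact implementation.

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float          # e.g. 1-10, rated by a language model in the paper
    embedding: list[float]     # embedding of the memory text
    last_access: float = field(default_factory=time.time)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(memories, query_embedding, k=3, decay=0.995):
    """Rank memories by recency + importance + relevance and return the top k."""
    now = time.time()

    def score(m: Memory) -> float:
        hours = (now - m.last_access) / 3600.0
        recency = decay ** hours                     # exponential decay over time
        importance = m.importance / 10.0             # normalize to [0, 1]
        relevance = cosine(m.embedding, query_embedding)
        return recency + importance + relevance      # equal weights assumed here

    return sorted(memories, key=score, reverse=True)[:k]
```

In the paper, the retrieved memories then feed into the agent's prompt for its next action or reflection, which is part of what produces the believable behavior described above.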

Towards Improved Transfer Learning with Hugo Larochelle - #631

Today we’re joined by Hugo Larochelle, a research scientist at Google DeepMind. In our conversation with Hugo, we discuss his work on transfer learning, understanding the capabilities of deep learning models, and creating the Transactions on Machine Learning Research journal. We explore the use of large language models in NLP, prompting, and zero-shot learning. Hugo also shares insights from his research on neural knowledge mobilization for code completion and discusses the adaptive prompts used in his team's system. The complete show notes for this episode can be found at twimlai.com/go/631.

29 May 2023 · 38 min

Language Modeling With State Space Models with Dan Fu - #630

Today we’re joined by Dan Fu, a PhD student at Stanford University. In our conversation with Dan, we discuss the limitations of state space models in language modeling and the search for alternative building blocks that can help increase context length without becoming computationally infeasible. Dan walks us through the H3 architecture and the FlashAttention technique, which can reduce a model's memory footprint and make fine-tuning feasible. We also explore his work on improving language models using synthetic languages, the issue of long sequence lengths affecting both training and inference, and the hope of finding something sub-quadratic that can perform language processing more effectively than the brute-force approach of attention. The complete show notes for this episode can be found at https://twimlai.com/go/630.

22 May 2023 · 28 min
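
To make the memory issue concrete (this is an illustration, not code from the episode): standard attention materializes an n × n score matrix, so memory grows quadratically with sequence length. The sketch below, assuming PyTorch 2.x, contrasts that naive computation with torch.nn.functional.scaled_dot_product_attention, which can dispatch to a FlashAttention-style fused kernel that avoids building the full matrix.

```python
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 1, 8, 1024, 64
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# Naive attention: explicitly builds a (seq_len x seq_len) score matrix per head,
# so memory scales as O(seq_len ** 2).
scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
naive_out = torch.softmax(scores, dim=-1) @ v

# Fused attention (PyTorch 2.x): same math, but the backend may use a
# memory-efficient or FlashAttention kernel that never materializes the matrix.
fused_out = F.scaled_dot_product_attention(q, k, v)

print(torch.allclose(naive_out, fused_out, atol=1e-5))  # expected: True
```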
