Constraint Active Search for Human-in-the-Loop Optimization with Gustavo Malkomes - #505

Today we continue our ICML series joined by Gustavo Malkomes, a research engineer at Intel via their recent acquisition of SigOpt. In our conversation with Gustavo, we explore his paper Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design, which focuses on a novel algorithmic solution for the iterative model search process. This new algorithm empowers teams to run experiments where they are not optimizing particular metrics, but instead identifying parameter configurations that satisfy constraints in the metric space. This allows users to explore multiple metrics at once in an efficient, informed, and intelligent way that lends itself to real-world, human-in-the-loop scenarios. The complete show notes for this episode can be found at twimlai.com/go/505.
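To make the constraint-satisfaction framing concrete, here is a minimal, hypothetical sketch of the idea described above. It uses naive random sampling rather than the paper’s actual acquisition strategy, and the metric functions, thresholds, and names are made up purely for illustration.

```python
import numpy as np

# Sketch of the constraint-satisfaction idea: rather than optimizing a single
# metric, keep every parameter configuration whose metrics fall inside
# user-specified bounds. The random-sampling "search" is a placeholder; the
# paper's method uses an informed acquisition function instead.

def evaluate_metrics(config):
    """Hypothetical experiment: returns (accuracy, latency_ms) for a config."""
    accuracy = 0.7 + 0.3 * np.exp(-np.sum((config - 0.5) ** 2))
    latency_ms = 10 + 100 * np.sum(config ** 2)
    return accuracy, latency_ms

def satisfies_constraints(metrics, min_accuracy=0.85, max_latency_ms=50.0):
    accuracy, latency_ms = metrics
    return accuracy >= min_accuracy and latency_ms <= max_latency_ms

def constraint_search(n_iterations=200, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    feasible = []
    for _ in range(n_iterations):
        config = rng.uniform(0.0, 1.0, size=dim)   # candidate configuration
        metrics = evaluate_metrics(config)
        if satisfies_constraints(metrics):
            feasible.append((config, metrics))      # collect, don't optimize
    return feasible

if __name__ == "__main__":
    solutions = constraint_search()
    print(f"Found {len(solutions)} feasible configurations")
```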

Episodes (777)

Assessing Data Quality at Shopify with Wendy Foster - #592

Today we’re back with another installment of our Data-Centric AI series, joined by Wendy Foster, a director of engineering & data science at Shopify. In our conversation with Wendy, we explore the differences between data-centric and model-centric approaches and how they manifest at Shopify, including on her team, which is responsible for using merchant and product data to assist individual vendors on the platform. We discuss how they address, maintain, and improve data quality, emphasizing the importance of data coverage and “freshness” when solving constantly evolving use cases. Finally, we discuss how data is taxonomized at the company and the challenges that arise when producing large-scale ML models, the future use cases that Wendy expects her team to tackle, and Merlin, Shopify’s new ML platform (which you can hear more about at TWIMLcon!), including how it fits into the broader scope of ML at the company. The complete show notes for this episode can be found at twimlai.com/go/592.

19 Sep 2022 · 36min

Transformers for Tabular Data at Capital One with Bayan Bruss - #591

Today we’re joined by Bayan Bruss, a senior director of applied ML research at Capital One. In our conversation with Bayan, we dig into his work applying various deep learning techniques to tabular data, including adapting advancements from other areas, like graph CNNs and traditional graph mining algorithms, to financial services applications. We discuss why, despite a “flood” of innovation in the field, work on tabular data doesn’t elicit as much fanfare even though tabular data is used broadly across businesses, Bayan’s experience with the difficulty of making deep learning work on tabular data, and the opportunities that the emergence of multimodality and transformer models presents for the field. We also explore a pair of papers from Bayan’s team, focused on transformers and transfer learning for tabular data. The complete show notes for this episode can be found at twimlai.com/go/591.
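For readers unfamiliar with the general approach, here is a minimal, illustrative sketch (not Capital One’s model or the papers’ architectures) of one common way to apply a transformer to tabular data: each numeric column is embedded as its own token so self-attention can model interactions between columns. The class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative tabular transformer: each column becomes a token by scaling a
# learned per-column embedding with the cell value, then a standard
# transformer encoder attends across the column tokens.

class TabularTransformer(nn.Module):
    def __init__(self, n_features, d_model=32, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.feature_embed = nn.Parameter(torch.randn(n_features, d_model))
        self.feature_bias = nn.Parameter(torch.zeros(n_features, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                       # x: (batch, n_features)
        tokens = x.unsqueeze(-1) * self.feature_embed + self.feature_bias
        encoded = self.encoder(tokens)          # (batch, n_features, d_model)
        pooled = encoded.mean(dim=1)            # average over column tokens
        return self.head(pooled)

model = TabularTransformer(n_features=10)
logits = model(torch.randn(8, 10))              # batch of 8 rows, 10 columns
```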

12 Sep 2022 · 46min

Understanding Collective Insect Communication with ML, w/ Orit Peleg - #590

Today we’re joined by Orit Peleg, an assistant professor at the University of Colorado, Boulder. Orit’s work focuses on understanding the behavior of disordered living systems by merging tools from physics, biology, engineering, and computer science. In our conversation, we discuss how Orit found herself exploring problems of swarming behaviors and their relationship to distributed computing system architecture and spiking neurons. We look at two specific areas of research, the first focused on the patterns observed in firefly species, how the data is collected, and the types of algorithms used for optimization. Finally, we look at how Orit’s research with fireflies translates to a completely different insect, the honeybee, and what the next steps are for investigating these and other insect families. The complete show notes for this episode can be found at twimlai.com/go/590.

5 Sep 2022 · 37min

Multimodal, Multi-Lingual NLP at Hugging Face with John Bohannon and Douwe Kiela - #589

In this extra special episode of the TWIML AI Podcast, friend of the show John Bohannon leads a jam-packed conversation with Hugging Face’s recently appointed head of research, Douwe Kiela. In our conversation with Douwe, we explore his role at the company, how his perception of Hugging Face has changed since joining, and what research entails at the company. We discuss the emergence of the transformer model and the rise of BERT-ology, the recent shift toward solving more multimodal problems, the importance of this subfield as one of the “Grand Directions” of Hugging Face’s research agenda, and the significance of BLOOM, the open-access multilingual language model that was the output of the BigScience project. Finally, we get into how Douwe’s background in philosophy shapes his view of current projects, as well as his projections for the future of NLP and multimodal ML. The complete show notes for this episode can be found at twimlai.com/go/589.

29 Aug 2022 · 53min

Synthetic Data Generation for Robotics with Bill Vass - #588

Today we’re joined by Bill Vass, a VP of engineering at Amazon Web Services. Bill spoke at the most recent AWS re:MARS conference, where he delivered an engineering keynote focused on some recent updates to Amazon SageMaker, including its support for synthetic data generation. In our conversation, we discuss all things synthetic data, including the importance of data quality when creating synthetic data and some of the use cases this data is being created for, including warehouses and, in the case of one of Amazon’s more recent acquisitions, iRobot, synthetic house generation. We also explore Astro, the household robot for home monitoring, including the types of models it is running, the on-device sensor suite it carries, the relationship between the robot and the cloud, and the role of simulation. The complete show notes for this episode can be found at twimlai.com/go/588.

22 Aug 2022 · 36min

Multi-Device, Multi-Use-Case Optimization with Jeff Gehlhaar - #587

Today we’re joined by Jeff Gehlhaar, vice president of technology at Qualcomm Technologies. In our annual conversation with Jeff, we dig into the relationship between Jeff’s team on the product side and the research team, many of whom we’ve had on the podcast over the last few years. We discuss the challenges of real-world neural network deployment and doing quantization on-device, as well as the tools that power their AI Stack. We also explore a few interesting automotive use cases, including automated driver assistance, and what advancements Jeff is looking forward to seeing in the next year. The complete show notes for this episode can be found at twimlai.com/go/587.
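As a rough illustration of what on-device quantization involves, the following sketch (a generic post-training quantization example, not Qualcomm’s AI Stack or tooling) maps a float32 weight tensor to int8 with a single symmetric scale and measures the reconstruction error.

```python
import numpy as np

# Generic post-training quantization sketch: pick a scale from the observed
# weight range, round to int8 for on-device storage/compute, then dequantize
# to check how much precision was lost.

def quantize_int8(weights):
    scale = np.max(np.abs(weights)) / 127.0                    # symmetric scale
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
print("max abs error:", np.max(np.abs(weights - recovered)))
```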

15 Aug 2022 · 43min

Causal Conceptions of Fairness and their Consequences with Sharad Goel - #586

Today we close out our ICML 2022 coverage joined by Sharad Goel, a professor of public policy at Harvard University. In our conversation with Sharad, we discuss his Outstanding Paper award winner, Causal Conceptions of Fairness and their Consequences, which seeks to understand what it means to apply causality to the idea of fairness in ML. We explore the two broad classes of intent that have been conceptualized under the subfield of causal fairness and how they differ, the distinct ways causality is treated in economic and statistical contexts versus computer science and algorithmic contexts, and why policies designed to satisfy causal definitions of fairness tend to be broadly suboptimal. The complete show notes for this episode can be found at twimlai.com/go/586.

8 Aug 2022 · 37min

Brain-Inspired Hardware and Algorithm Co-Design with Melika Payvand - #585

Today we continue our ICML coverage joined by Melika Payvand, a research scientist at the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. Melika spoke at the Hardware Aware Efficient Training (HAET) Workshop, delivering a keynote on brain-inspired hardware and algorithm co-design for low-power online training on the edge. In our conversation with Melika, we explore her work at the intersection of ML and neuroinformatics, what makes the proposed architecture “brain-inspired”, and how techniques like online learning fit into the picture. We also discuss the characteristics of the devices that run the algorithms she’s creating, and the challenges of adapting online learning-style algorithms to this hardware. The complete show notes for this episode can be found at twimlai.com/go/585.

1 Aug 2022 · 44min
