Classical Machine Learning for Infant Medical Diagnosis with Charles Onu - TWiML Talk #112

In this episode, part 4 in our Black in AI series, I'm joined by Charles Onu, PhD student at McGill University in Montreal and founder of Ubenwa, a startup tackling the problem of infant mortality due to asphyxia. Using SVMs and other techniques from the field of automatic speech recognition, Charles and his team have built a model that detects asphyxia based on the audible noises a child makes at birth. We go into the process he used to collect his training data, including the specific methods they used to record samples and how those samples will be used to maximize accuracy in the field. We also take a deep dive into some of the challenges of building and deploying the platform and mobile application. This is a really interesting use case, which I think you'll enjoy.

Join the #MyAI discussion! As a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause and jump over to twimlai.com/myai right now to share or learn more.

Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join leading minds in AI including Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype from what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018.

The notes for this show can be found at twimlai.com/talk/112. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018.
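
For the curious, here's a minimal sketch of what a pipeline like the one Charles describes might look like: MFCC features (a staple of automatic speech recognition) summarized per recording and fed to an SVM classifier. Everything here, from the feature choices to the synthetic stand-in data, is an illustrative assumption, not Ubenwa's actual implementation.

```python
# A minimal sketch (not Ubenwa's actual system) of the approach described
# above: represent each cry recording with MFCC features, then classify
# with an SVM. The data below is synthetic noise standing in for real,
# labeled recordings.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

SR = 16000  # assumed sample rate

def cry_features(audio, sr=SR, n_mfcc=13):
    """Summarize a variable-length recording as a fixed-length vector of
    per-coefficient MFCC means and standard deviations."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic stand-ins: 40 one-second "recordings" with hypothetical
# binary labels (1 = asphyxia). Real work would load labeled WAV files.
rng = np.random.default_rng(0)
recordings = [rng.standard_normal(SR).astype(np.float32) for _ in range(40)]
y = rng.integers(0, 2, size=40)

X = np.stack([cry_features(a) for a in recordings])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = SVC(kernel="rbf", class_weight="balanced")  # classes may be imbalanced
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Summarizing MFCCs with means and standard deviations is just one simple way to get a fixed-length vector from variable-length audio; the episode discusses how the team's actual feature and recording choices were driven by field conditions.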

Episodes (777)

Building Public Interest Technology with Meredith Broussard - #552

Today we’re joined by Meredith Broussard, an associate professor at NYU and research director at the NYU Alliance for Public Interest Technology. Meredith was a keynote speaker at the recent NeurIPS conference, and we had the pleasure of speaking with her to discuss her talk from the event and her upcoming book, tentatively titled More Than A Glitch: What Everyone Needs To Know About Making Technology Anti-Racist, Accessible, And Otherwise Useful To All. In our conversation, we explore Meredith’s work in the field of public interest technology and her view of the relationship between technology and artificial intelligence. Meredith and Sam talk through real-world scenarios where an emphasis on monitoring bias and responsibility would positively impact outcomes, and how this type of monitoring parallels the infrastructure that many organizations are already building out. Finally, we talk through the main takeaways from Meredith’s NeurIPS talk and how practitioners can get involved in the work of building and deploying public interest technology. The complete show notes for this episode can be found at twimlai.com/go/552

13 Jan 2022 · 30 min

A Universal Law of Robustness via Isoperimetry with Sebastien Bubeck - #551

Today we’re joined by Sébastien Bubeck, a senior principal research manager at Microsoft and author of the paper A Universal Law of Robustness via Isoperimetry, a NeurIPS 2021 Outstanding Paper Award recipient. We begin our conversation with Sébastien with a bit of a primer on convex optimization, a topic that hasn’t come up much in previous interviews. We explore the problem that convex optimization is trying to solve, and the application of convex optimization to multi-armed bandit problems, metrical task systems, and the k-server problem. We then dig into Sébastien’s paper, which proves that for a broad class of data distributions and model classes, overparameterization is necessary if one wants to interpolate the data smoothly. Finally, we discuss the relationship between the paper and the work being done in the adversarial robustness community. The complete show notes for this episode can be found at twimlai.com/go/551
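
To give a rough sense of the result (our paraphrase, with constants and technical conditions omitted): for n training points in dimension d, the paper shows that any p-parameter model that fits the data must trade off smoothness against parameter count roughly as follows.

```latex
% Hedged paraphrase of the headline bound, not the authors' exact statement:
% with high probability, any p-parameter model f that interpolates
% n noisy samples in dimension d must satisfy
\[
  \operatorname{Lip}(f) \;\gtrsim\; \sqrt{\frac{n d}{p}},
\]
% so achieving O(1) Lipschitzness (smoothness/robustness) while
% interpolating requires p \gtrsim n d, i.e., overparameterization
% by roughly a factor of the input dimension d.
```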

10 Jan 2022 · 39 min

Trends in NLP with John Bohannon - #550

Today we’re joined by friend of the show John Bohannon, the director of science at Primer AI, to help us showcase all of the great achievements and accomplishments in NLP in 2021! In our conversation, John shares his two major takeaways from last year: 1) NLP as we know it has changed, and we’re back into the incremental phase of the science, and 2) NLP is “eating” the rest of machine learning. We explore the implications of these two major themes across the discipline, as well as best papers, up-and-coming startups, great things that did happen, and even a few bad things that didn’t. Finally, we explore what 2022 and beyond will look like for NLP, from multilingual NLP to use cases for the influx of large autoregressive language models like GPT-3, as well as the ethical implications reverberating across domains and the changes they have ushered in. The complete show notes for this episode can be found at twimlai.com/go/550

6 Jan 2022 · 1h 18min

Trends in Computer Vision with Georgia Gkioxari - #549

Happy New Year! We’re excited to kick off 2022 joined by Georgia Gkioxari, a research scientist at Meta AI, to showcase the best advances in the field of computer vision over the past 12 months, and what the future holds for this domain. Welcome back to AI Rewind! In our conversation, Georgia highlights the emergence of the transformer model in CV research, the kind of performance results we’re seeing versus CNNs, and the immediate impact of NeRF, among a host of other great research. We also explore ImageNet’s place in the current landscape, and whether it's time to make big changes to push the boundaries of what is possible with image, video, and even 3D data, with challenges like the metaverse on the horizon. Finally, we touch on the startups to keep an eye on, the collaborative efforts of software and hardware researchers, and the sense that an “ImageNet moment” is upon us once again. The complete show notes for this episode can be found at twimlai.com/go/549

3 Jan 2022 · 58 min

Kids Run the Darndest Experiments: Causal Learning in Children with Alison Gopnik - #548

Today we close out the 2021 NeurIPS series joined by Alison Gopnik, a professor at UC Berkeley and an invited speaker at the Causal Inference & Machine Learning: Why Now? workshop. In our conversation with Alison, we explore the question “How is it that we can know so much about the world around us from so little information?” and how her background in psychology, philosophy, and epistemology has guided her toward finding the answer in the actions of children. We discuss the role of causality as a means of extracting representations of the world, how the “theory theory” came about, and how it was demonstrated to have merit. We also explore the complexity of the causal relationships children are able to handle and what that can tell us about our current ML models, how the training and inference stages of the ML lifecycle are akin to childhood and adulthood, and much more! The complete show notes for this episode can be found at twimlai.com/go/548

27 Dec 2021 · 36 min

Hypergraphs, Simplicial Complexes and Graph Representations of Complex Systems with Tina Eliassi-Rad - #547

Today we continue our NeurIPS coverage joined by Tina Eliassi-Rad, a professor at Northeastern University and an invited speaker at the I Still Can't Believe It's Not Better! workshop. In our conversation with Tina, we explore her research at the intersection of network science, complex networks, and machine learning, how graphs are used in her work, and how that use differs from typical graph machine learning use cases. We also discuss her talk from the workshop, “The Why, How, and When of Representations for Complex Systems,” in which Tina argues that one reason practitioners have struggled to model complex systems is the lack of connection to the data sourcing and generation process. This is definitely a NERD ALERT approved interview! The complete show notes for this episode can be found at twimlai.com/go/547

23 Dec 2021 · 35 min

Deep Learning, Transformers, and the Consequences of Scale with Oriol Vinyals - #546

Today we’re excited to kick off our annual NeurIPS series, joined by Oriol Vinyals, the lead of the deep learning team at DeepMind. We cover a lot of ground in our conversation with Oriol, beginning with a look at his research agenda and why its scope has remained wide even as the field has matured, and his thoughts on transformer models and whether they will get us beyond the current state of DL or some other model architecture would be more advantageous. We also touch on his thoughts on the large language model craze before jumping into his recent paper StarCraft II Unplugged: Large Scale Offline Reinforcement Learning, a follow-up to the popular AlphaStar work from a few years ago. Finally, we discuss the degree to which the work that DeepMind and others are doing around games actually translates into real-world, non-game scenarios, recent work on multimodal few-shot learning, and the consequences of the level of scale that we’ve achieved thus far. The complete show notes for this episode can be found at twimlai.com/go/546

20 Dec 2021 · 52 min

Optimization, Machine Learning and Intelligent Experimentation with Michael McCourt - #545

Today we’re joined by Michael McCourt, the head of engineering at SigOpt. In our conversation with Michael, we explore the vast space around the topic of optimization, including the technical differences between ML and optimization and where each is applied, what the path to increasing complexity looks like for a practitioner, and the relationship between optimization and active learning. We also discuss the research frontier for optimization and how folks think about the interesting challenges and open questions in this field, how optimization approaches showed up at the latest NeurIPS conference, and Mike’s excitement about the emergence of interdisciplinary work between the machine learning community and fields like the natural sciences. The complete show notes for this episode can be found at twimlai.com/go/545

16 Dec 2021 · 45 min
