AI at the NASA Frontier Development Lab with Sara Jennings, Timothy Seabrook and Andres Rodriguez

This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I'm joined by Sara Jennings, Timothy Seabrook and Andres Rodriguez to discuss NASA’s Frontier Development Lab, or FDL. The FDL is an intense 8-week applied AI research accelerator, focused on tackling knowledge gaps useful to the space program. In our discussion, Sara, producer at the FDL, provides some insight into its goals and structure. Timothy, a researcher at FDL, describes his involvement with the program, including some of the projects he worked on while on-site. He also provides a look into some of this year’s FDL projects, including Planetary Defense, Solar Storm Prediction, and Lunar Water Location. Last but not least, Andres, Sr. Principal Engineer at Intel's AIPG, joins us to detail Intel’s support of the FDL, and how the various elements of the Intel AI stack supported the FDL research. This is a jam-packed conversation, so be sure to check the show notes page at twimlai.com/talk/89 for all the links and tidbits from this episode.

Episodes (766)

Using AI to Map the Human Immune System w/ Jabran Zahid - #485

Today we’re joined by Jabran Zahid, a Senior Researcher at Microsoft Research. In our conversation with Jabran, we explore his recent endeavor into the complete mapping of which T-cells bind to which antigens through the Antigen Map Project. We discuss how Jabran’s background in astrophysics and cosmology has translated to his current work in immunology and biology, the origins of the antigen map, and how its focus was changed by the emergence of the coronavirus pandemic. We talk through the biological advancements and the challenges of using machine learning in this setting, some of the more advanced ML techniques that they’ve tried that have not panned out (as of yet), the path forward for the antigen map to make a broader impact, and much more. The complete show notes for this episode can be found at twimlai.com/go/485.

20 May 2021 · 41min

Learning Long-Time Dependencies with RNNs w/ Konstantin Rusch - #484

Today we conclude our 2021 ICLR coverage joined by Konstantin Rusch, a PhD student at ETH Zurich. In our conversation with Konstantin, we explore his recent papers, titled coRNN and uniCORNN respectively, which focus on a novel recurrent neural network architecture for learning long-time dependencies. We explore the inspiration he drew from neuroscience when tackling this problem, how the performance results compared to networks like LSTMs and others that have been proven to work on this problem, and Konstantin’s future research goals. The complete show notes for this episode can be found at twimlai.com/go/484.

17 May 2021 · 37min

What the Human Brain Can Tell Us About NLP Models with Allyson Ettinger - #483

Today we continue our ICLR ‘21 series joined by Allyson Ettinger, an Assistant Professor at the University of Chicago. One of our favorite recurring conversations on the podcast is the two-way street that lies between machine learning and neuroscience, which Allyson explores through the modeling of cognitive processes that pertain to language. In our conversation, we discuss how she approaches assessing the competencies of AI, the value of controlling confounding variables in AI research, and how the pattern matching traits of ML/DL models are not necessarily exclusive to these systems. Allyson also participated in a recent panel discussion at the ICLR workshop “How Can Findings About The Brain Improve AI Systems?”, centered around the utility of brain inspiration for developing AI models. We discuss ways in which we can try to more closely simulate the functioning of a brain, where her work fits into the analysis and interpretability area of NLP, and much more! The complete show notes for this episode can be found at twimlai.com/go/483.

13 May 2021 · 38min

Probabilistic Numeric CNNs with Roberto Bondesan - #482

Today we kick off our ICLR 2021 coverage joined by Roberto Bondesan, an AI Researcher at Qualcomm. In our conversation with Roberto, we explore his paper Probabilistic Numeric Convolutional Neural Networks, which represents features as Gaussian processes, providing a probabilistic description of discretization error. We discuss some of the other work the team at Qualcomm presented at the conference, including a paper called Adaptive Neural Compression, as well as work on Gauge Equivariant Mesh CNNs. Finally, we briefly discuss quantum deep learning, and what excites Roberto and his team about the future of their research in combinatorial optimization. The complete show notes for this episode can be found at https://twimlai.com/go/482.

10 May 2021 · 41min

Building a Unified NLP Framework at LinkedIn with Huiji Gao - #481

Today we’re joined by Huiji Gao, a Senior Engineering Manager of Machine Learning and AI at LinkedIn. In our conversation with Huiji, we dig into his interest in building NLP tools and systems, including a recent open-source project called DeText, a framework for generating models for ranking, classification, and language generation. We explore the motivation behind DeText, the landscape at LinkedIn before and after it was put into use broadly, and the various contexts it’s being used in at the company. We also discuss the relationship between BERT and DeText via LiBERT, a version of BERT that is trained and calibrated on LinkedIn data, the practical use of these tools from an engineering perspective, the approach they’ve taken to optimization, and much more! The complete show notes for this episode can be found at https://twimlai.com/go/481.

6 May 2021 · 34min

Dask + Data Science Careers with Jacqueline Nolis - #480

Today we’re joined by Jacqueline Nolis, Head of Data Science at Saturn Cloud, and co-host of the Build a Career in Data Science Podcast. You might remember Jacqueline from our Advancing Your Data Science Career During the Pandemic panel, where she shared her experience trying to navigate the suddenly hectic data science job market. Now, a year removed from that panel, we explore her book on data science careers, top insights for folks just getting into the field, ways that job seekers should be signaling that they have the required background, and how to approach and navigate failure as a data scientist. We also spend quite a bit of time discussing Dask, an open-source library for parallel computing in Python, as well as use cases for the tool, the relationship between Dask, Kubernetes, and Docker containers, where data scientists are with regard to the software development toolchain, and much more! The complete show notes for this episode can be found at https://twimlai.com/go/480.

3 May 2021 · 34min

Machine Learning for Equitable Healthcare Outcomes with Irene Chen - #479

Today we’re joined by Irene Chen, a Ph.D. student at MIT. Irene’s research is focused on developing new machine learning methods specifically for healthcare, through the lens of questions of equity and inclusion. In our conversation, we explore some of the various projects that Irene has worked on, including an early detection program for intimate partner violence. We also discuss how she thinks about the long-term implications of predictions in the healthcare domain, how she’s learned to communicate across the interface between ML researchers and clinicians, probabilistic approaches to machine learning for healthcare, and finally, key takeaways for those of you interested in this area of research. The complete show notes for this episode can be found at https://twimlai.com/go/479.

29 Apr 2021 · 36min

AI Storytelling Systems with Mark Riedl - #478

Today we’re joined by Mark Riedl, a Professor in the School of Interactive Computing at Georgia Tech. In our conversation with Mark, we explore his work building AI storytelling systems, mainly those that try to predict what listeners think will happen next in a story, and how he brings together many different threads of ML/AI to solve these problems. We discuss how theory of mind is layered into his research, the use of large language models like GPT-3, and his push towards being able to generate suspenseful stories with these systems. We also discuss the concept of intentional creativity and the lack of good theory on the subject, the adjacent areas in ML that he’s most excited about for their potential contribution to his research, his recent focus on model explainability, how he approaches problems of common sense, and much more! The complete show notes for this episode can be found at https://twimlai.com/go/478.

26 Apr 2021 · 41min
