Differential Privacy at Bluecore with Zahi Karam - TWiML Talk #133

In this episode of our Differential Privacy series, I'm joined by Zahi Karam, Director of Data Science at Bluecore, whose retail marketing platform specializes in personalized email marketing. I sat down with Zahi at the Georgian Partners portfolio conference last year, where he gave me my initial exposure to the field of differential privacy, ultimately leading to this series. Zahi shared his insights into how differential privacy can be deployed in the real world and some of the technical and cultural challenges to doing so. We discuss the Bluecore use case in depth, including why and for whom they build differentially private machine learning models. The notes for this show can be found at twimlai.com/talk/133.
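The core idea behind differential privacy is easy to sketch: a query's result is perturbed with random noise calibrated to the query's sensitivity and a privacy budget epsilon. The toy count query below illustrates the standard Laplace mechanism; it is a generic textbook sketch, not Bluecore's actual implementation.

```python
import numpy as np

def dp_count(records, epsilon):
    """Epsilon-differentially-private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one record
    changes the true answer by at most 1), so Laplace noise with
    scale 1/epsilon is enough to satisfy epsilon-DP.
    """
    return len(records) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)
```

A smaller epsilon means a larger noise scale and therefore stronger privacy at the cost of accuracy, which is exactly the utility/privacy trade-off practitioners have to tune.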

Episodes (774)

Deep Learning is Eating 5G. Here’s How, w/ Joseph Soriaga - #525

Today we’re joined by Joseph Soriaga, a senior director of technology at Qualcomm. In our conversation with Joseph, we focus on a pair of papers that he and his team will be presenting at Globecom later this year. The first, Neural Augmentation of Kalman Filter with Hypernetwork for Channel Tracking, details the use of deep learning to augment an algorithm to address model mismatches, allowing for more efficient training and making models more interpretable and predictable. The second paper, WiCluster: Passive Indoor 2D/3D Positioning using WiFi without Precise Labels, explores the use of RF signals to infer what an environment looks like, allowing for estimation of a person’s movement. We also discuss the ability of machine learning and AI to help enable 5G and make it more efficient for these applications, the scenarios in which ML would allow for more effective delivery of connected services, and what might be possible in the near future. The complete show notes for this episode can be found at twimlai.com/go/525.

7 Oct 2021 · 39min

Modeling Human Cognition with RNNs and Curriculum Learning, w/ Kanaka Rajan - #524

Today we’re joined by Kanaka Rajan, an assistant professor at the Icahn School of Medicine at Mount Sinai. Kanaka, who is a recent recipient of the NSF CAREER Award, bridges the gap between the worlds of biology and artificial intelligence with her work in computer science. In our conversation, we explore how she builds “lego models” of the brain that mimic biological brain functions, then reverse engineers those models to answer the question “do these follow the same operating principles that the biological brain uses?” We also discuss the relationship between memory and dynamically evolving system states, how close we are to understanding how memory actually works, how she uses RNNs for modeling these processes, and what training and data collection looks like. Finally, we touch on her use of curriculum learning (where the task you want a system to learn increases in complexity slowly), and of course, we look ahead at future directions for Kanaka’s research. The complete show notes for this episode can be found at twimlai.com/go/524.
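The curriculum learning idea mentioned above can be sketched in a few lines: training examples are presented in stages of increasing difficulty rather than all at once. The staging scheme and the `train_step` placeholder below are hypothetical illustrations, not details from Kanaka's work.

```python
def curriculum(stages, steps_per_stage, train_step):
    """Run training in stages of increasing task complexity.

    `train_step(difficulty)` stands in for one optimization step on a
    task of the given difficulty level; here we just record what ran.
    """
    log = []
    for difficulty in range(1, stages + 1):
        for _ in range(steps_per_stage):
            log.append((difficulty, train_step(difficulty)))
    return log

# Toy usage: the "model" simply computes a score from the difficulty.
history = curriculum(stages=3, steps_per_stage=2, train_step=lambda d: d * d)
```

In practice the difficulty schedule (when to advance, how to define "harder") is the interesting design choice, and is task-specific.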

4 Oct 2021 · 47min

Do You Dare Run Your ML Experiments in Production? with Ville Tuulos - #523

Today we’re joined by a friend of the show and return guest Ville Tuulos, CEO and co-founder of Outerbounds. In our previous conversations with Ville, we explored his experience building and deploying the open-source framework, Metaflow, while working at Netflix. Since our last chat, Ville has embarked on a few new journeys, including writing the upcoming book Effective Data Science Infrastructure, and commercializing Metaflow, both of which we dig into quite a bit in this conversation. We reintroduce the problem that Metaflow was built to solve and discuss some of the unique use cases that Ville has seen since its release, the relationship between Metaflow and Kubernetes, and the maturity of services like batch and lambdas allowing a complete production ML system to be delivered. Finally, we discuss the degree to which Ville is focusing Outerbounds’ efforts on building tools for the MLOps community, and what the future looks like for him and Metaflow. The complete show notes for this episode can be found at twimlai.com/go/523.

30 Sep 2021 · 40min

Delivering Neural Speech Services at Scale with Li Jiang - #522

Today we’re joined by Li Jiang, a distinguished engineer at Microsoft working on Azure Speech. In our conversation with Li, we discuss his journey across 27 years at Microsoft, where he’s worked on, among other things, audio and speech recognition technologies. We explore his thoughts on the advancements in speech recognition over the past few years, and the challenges and advantages of using either end-to-end or hybrid models. We also discuss the trade-offs between delivering accuracy or quality and the kind of runtime characteristics that you require as a service provider, in the context of engineering and delivering a service at the scale of Azure Speech. Finally, we walk through the data collection process for customizing a voice for TTS, what languages are currently supported, managing threats like deepfakes, the future for services like these, and much more! The complete show notes for this episode can be found at twimlai.com/go/522.

27 Sep 2021 · 49min

AI’s Legal and Ethical Implications with Sandra Wachter - #521

Today we’re joined by Sandra Wachter, an associate professor and senior research fellow at the University of Oxford. Sandra’s work lies at the intersection of law and AI, focused on what she likes to call “algorithmic accountability”. In our conversation, we explore algorithmic accountability in three segments: explainability/transparency, data protection, and bias, fairness and discrimination. We discuss how the thinking around black boxes changes when discussing applying regulation and law, as well as a breakdown of counterfactual explanations and how they’re created. We also explore why factors like the lack of oversight lead to poor self-regulation, and the conditional demographic disparity test that she helped develop to test bias in models, which was recently adopted by Amazon. The complete show notes for this episode can be found at twimlai.com/go/521.
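A counterfactual explanation answers the question "what is the smallest change to the input that would flip the model's decision?" The snippet below is a deliberately minimal one-feature search that conveys the idea; the loan scenario, feature names, and step-wise search are illustrative simplifications, not the general optimization-based formulation.

```python
def counterfactual(predict, x, feature, step=1.0, max_iter=100):
    """Find the smallest single-feature change (in `step` increments)
    that flips the model's decision -- the core idea behind
    counterfactual explanations, as a naive one-dimensional search."""
    base = predict(x)
    for direction in (+1.0, -1.0):
        x_cf = dict(x)
        for _ in range(max_iter):
            x_cf[feature] += direction * step
            if predict(x_cf) != base:
                return x_cf
    return None  # no flip found within the search budget

# Toy model: approve a loan when income >= 50 (units are arbitrary).
approve = lambda applicant: applicant["income"] >= 50.0
explanation = counterfactual(approve, {"income": 45.0}, "income")
```

The appeal for regulation is that such an explanation ("you would have been approved with an income of 50") is actionable for the affected person without requiring the model's internals to be disclosed.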

23 Sep 2021 · 49min

Compositional ML and the Future of Software Development with Dillon Erb - #520

Today we’re joined by Dillon Erb, CEO of Paperspace.  If you’re not familiar with Dillon, he joined us about a year ago to discuss Machine Learning as a Software Engineering Discipline; we strongly encourage you to check out that interview as well. In our conversation, we explore the idea of compositional AI, and if it is the next frontier in a string of recent game-changing machine learning developments. We also discuss a source of constant back and forth in the community around the role of notebooks, and why Paperspace made the choice to pivot towards a more traditional engineering code artifact model after building a popular notebook service. Finally, we talk through their newest release Workflows, an automation and build system for ML applications, which Dillon calls their “most ambitious and comprehensive project yet.” The complete show notes for this episode can be found at twimlai.com/go/520.

20 Sep 2021 · 41min

Generating SQL Database Queries from Natural Language with Yanshuai Cao - #519

Today we’re joined by Yanshuai Cao, a senior research team lead at Borealis AI. In our conversation with Yanshuai, we explore his work on Turing, their natural language to SQL engine that allows users to get insights from relational databases without having to write code. We do a bit of compare and contrast with the recently released Codex Model from OpenAI, the role that reasoning plays in solving this problem, and how it is implemented in the model. We also talk through various challenges like data augmentation, the complexity of the queries that Turing can produce, and a paper that explores the explainability of this model. The complete show notes for this episode can be found at twimlai.com/go/519.
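For a sense of the task Turing tackles, here is a deliberately naive pattern-matching baseline. It bears no resemblance to Turing's actual learned architecture; the two hard-coded patterns and the `sales` table name are made-up examples that only illustrate the input/output contract of natural-language-to-SQL.

```python
import re

def nl_to_sql(question, table="sales"):
    """Toy rule-based NL-to-SQL translation.

    Real systems like Turing learn this mapping and handle far more
    complex queries; these patterns just show the shape of the task.
    """
    q = question.lower().rstrip("?")
    m = re.match(r"how many (\w+)", q)
    if m:
        return f"SELECT COUNT(*) FROM {table} WHERE product = '{m.group(1)}'"
    m = re.match(r"total (\w+) by (\w+)", q)
    if m:
        return f"SELECT {m.group(2)}, SUM({m.group(1)}) FROM {table} GROUP BY {m.group(2)}"
    return None  # question falls outside the toy grammar
```

The gap between this and a usable system — schema linking, compositional queries, robustness to paraphrase — is exactly where the reasoning and data-augmentation challenges discussed in the episode come in.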

16 Sep 2021 · 38min

Social Commonsense Reasoning with Yejin Choi - #518

Today we’re joined by Yejin Choi, a professor at the University of Washington. We had the pleasure of catching up with Yejin after her keynote interview at the recent Stanford HAI “Foundational Models” workshop. In our conversation, we explore her work at the intersection of natural language generation and common sense reasoning, including how she defines common sense, and what the current state of the world is for that research. We discuss how this could be used for creative storytelling, how transformers could be applied to these tasks, and we dig into the subfields of physical and social common sense reasoning. Finally, we talk through the future of Yejin’s research and the areas that she sees as most promising going forward.  If you enjoyed this episode, check out our conversation on AI Storytelling Systems with Mark Riedl. The complete show notes for today’s episode can be found at twimlai.com/go/518.

13 Sep 2021 · 51min
