NLP for Equity Investing with Frank Zhao - #424
Today we’re joined by Frank Zhao, Senior Director of Quantamental Research at S&P Global Market Intelligence. In our conversation with Frank, we explore how he came to work at the intersection of ML and finance, and how he navigates the relationship between data science and domain expertise. We also discuss the rise of data science in the investment management space, examining the largely under-explored technique of using unstructured data to gain insights into equity investing, and the edge it can provide for investors. Finally, Frank gives us a look at how he uses natural language processing with textual data of earnings call transcripts and walks us through the entire pipeline. The complete show notes for this episode can be found at twimlai.com/go/424.

Episodes (775)

Applications of Variational Autoencoders and Bayesian Optimization with José Miguel Hernández Lobato - #510

Today we’re joined by José Miguel Hernández-Lobato, a university lecturer in machine learning at the University of Cambridge. In our conversation with Miguel, we explore his work at the intersection of Bayesian learning and deep learning. We discuss how he’s been applying this to the field of molecular design and discovery via two different methods, with one paper searching for possible chemical reactions and the other doing the same in 3D space. We also discuss the challenges of sample efficiency and of creating objective functions, how those manifest themselves in these experiments, and how he integrated the Bayesian approach into RL problems. We also talk through a handful of other papers that Miguel has presented at recent conferences, which are all linked at twimlai.com/go/510.

16 Aug 2021 · 42min

Codex, OpenAI’s Automated Code Generation API with Greg Brockman - #509

Today we’re joined by return guest Greg Brockman, co-founder and CTO of OpenAI. We had the pleasure of reconnecting with Greg on the heels of the announcement of Codex, OpenAI’s most recent release. Codex is a direct descendant of GPT-3 that allows users to do autocomplete tasks based on all of the publicly available text and code on the internet. In our conversation with Greg, we explore the distinct results Codex sees in comparison to GPT-3, relative to the prompts it's being given, how it could evolve given different types of training data, and how users and practitioners should think about interacting with the API to get the most out of it. We also discuss Copilot, their recent collaboration with Github that is built on Codex, as well as the implications of Codex on coding education, explainability, and broader societal issues like fairness and bias, copyrighting, and jobs.  The complete show notes for this episode can be found at twimlai.com/go/509.

12 Aug 2021 · 47min

Spatiotemporal Data Analysis with Rose Yu - #508

Today we’re joined by Rose Yu, an assistant professor at the Jacobs School of Engineering at UC San Diego. Rose’s research focuses on advancing machine learning algorithms and methods for analyzing large-scale time-series and spatiotemporal data, then applying those developments to climate, transportation, and other physical sciences. We discuss how Rose incorporates physical knowledge and partial differential equations in these use cases, and how symmetries are being exploited. We also explore her novel neural network design, which is built on non-traditional convolution operators and allows for general symmetry, how we get from these representations to the network architectures she has developed, and another recent paper on deep spatiotemporal models. The complete show notes for this episode can be found at twimlai.com/go/508.

9 Aug 2021 · 32min

Parallelism and Acceleration for Large Language Models with Bryan Catanzaro - #507

Today we’re joined by Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. Most folks know Bryan as one of the founders/creators of cuDNN, the accelerated library for deep neural networks. In our conversation, we explore his interest in high-performance computing and its recent overlap with AI, his current work on Megatron, a framework for training giant language models, and the basic approach for distributing a large language model on DGX infrastructure. We also discuss the three kinds of parallelism that Megatron provides when training models, namely tensor parallelism, pipeline parallelism, and data parallelism, as well as his work on the Deep Learning Super Sampling project and the role it's playing in the present and future of game development via ray tracing. The complete show notes for this episode can be found at twimlai.com/go/507.

5 Aug 2021 · 50min

Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya - #506

Today we close out our 2021 ICML series joined by Lina Montoya, a postdoctoral researcher at UNC Chapel Hill. In our conversation with Lina, who was an invited speaker at the Neglected Assumptions in Causal Inference Workshop, we explore her work applying Optimal Dynamic Treatment (ODT) rules to understand which kinds of individuals respond best to specific interventions in the US criminal justice system. We discuss the concept of neglected assumptions and how it connects to ODT rule estimation, and break down the causal roadmap coined by researchers at UC Berkeley. Finally, Lina walks us through the roadmap as applied to the ODT rule problem, how she applied a “superlearner” algorithm to it, how that algorithm was trained, and what the future of this research looks like. The complete show notes for this episode can be found at twimlai.com/go/506.

2 Aug 2021 · 54min

Constraint Active Search for Human-in-the-Loop Optimization with Gustavo Malkomes - #505

Today we continue our ICML series joined by Gustavo Malkomes, a research engineer at Intel via its recent acquisition of SigOpt. In our conversation with Gustavo, we explore his paper Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design, which focuses on a novel algorithmic solution for the iterative model search process. This new algorithm empowers teams to run experiments where they are not optimizing particular metrics but instead identifying parameter configurations that satisfy constraints in the metric space. This allows users to explore multiple metrics at once in an efficient, informed, and intelligent way that lends itself to real-world, human-in-the-loop scenarios. The complete show notes for this episode can be found at twimlai.com/go/505.

29 Jul 2021 · 50min

Fairness and Robustness in Federated Learning with Virginia Smith - #504

Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University. In our conversation with Virginia, we explore her work on cross-device federated learning applications, including how the distributed learning aspects of FL relate to the privacy techniques. We dig into her ICML paper, Ditto: Fair and Robust Federated Learning Through Personalization, what fairness means in contrast to AI ethics, the particulars of the failure modes, the relationship between the models and the objectives being optimized across devices, and the tradeoffs between fairness and robustness. We also discuss a second paper, Heterogeneity for the Win: One-Shot Federated Clustering, how the proposed method makes heterogeneity in data beneficial, how the heterogeneity of data is classified, and some applications of FL in an unsupervised setting. The complete show notes for this episode can be found at twimlai.com/go/504.

26 Jul 2021 · 36min

Scaling AI at H&M Group with Errol Koolmeister - #503

Today we’re joined by Errol Koolmeister, the head of AI foundation at H&M Group. In our conversation with Errol, we explore H&M’s AI journey, including its wide adoption across the company beginning in 2016 and the various use cases in which it's deployed, like fashion forecasting and pricing algorithms. We discuss Errol’s first steps in taking on the challenge of scaling AI broadly at the company, the value of learning from proofs of concept, and how to align efforts in a sustainable, long-term way. Of course, we dig into the infrastructure and models being used, the biggest challenges faced, and the importance of managing the project portfolio, while Errol shares his approach to building infrastructure for a specific product with many products in mind.

22 Jul 2021 · 41min
