
Towards Improved Transfer Learning with Hugo Larochelle - #631
Today we’re joined by Hugo Larochelle, a research scientist at Google DeepMind. In our conversation with Hugo, we discuss his work on transfer learning, understanding the capabilities of deep learning models, and creating the Transactions on Machine Learning Research journal. We explore the use of large language models in NLP, prompting, and zero-shot learning. Hugo also shares insights from his research on neural knowledge mobilization for code completion and discusses the adaptive prompts used in that system. The complete show notes for this episode can be found at twimlai.com/go/631.
29 May 2023 · 38 min

Language Modeling With State Space Models with Dan Fu - #630
Today we’re joined by Dan Fu, a PhD student at Stanford University. In our conversation with Dan, we discuss the limitations of state space models in language modeling and the search for alternative building blocks that can increase context length without becoming computationally infeasible. Dan walks us through the H3 architecture and the FlashAttention technique, which can reduce a model’s memory footprint and make it feasible to fine-tune. We also explore his work on improving language models using synthetic languages, the issue of long sequence lengths affecting both training and inference, and the hope of finding a sub-quadratic approach that can process language more effectively than the brute-force approach of attention. The complete show notes for this episode can be found at https://twimlai.com/go/630.
22 May 2023 · 28 min

Building Maps and Spatial Awareness in Blind AI Agents with Dhruv Batra - #629
Today we continue our coverage of ICLR 2023 joined by Dhruv Batra, an associate professor at Georgia Tech and research director of the Fundamental AI Research (FAIR) team at Meta. In our conversation, we discuss Dhruv’s work on the paper Emergence of Maps in the Memories of Blind Navigation Agents, which won an Outstanding Paper Award at the event. We explore navigation with multilayer LSTMs and the question of whether embodiment is necessary for intelligence. We delve into the Embodiment Hypothesis, the progress being made in language models, and the need for caution in using these models responsibly. We also discuss the history of AI and the importance of using the right datasets in training. The conversation explores the different meanings of "maps" across the AI and cognitive science fields, Dhruv’s experience with mapless navigation systems, and the early discovery stages of memory representation and neural mechanisms. The complete show notes for this episode can be found at https://twimlai.com/go/629.
15 May 2023 · 43 min

AI Agents and Data Integration with GPT and LLaMA with Jerry Liu - #628
Today we’re joined by Jerry Liu, co-founder and CEO of LlamaIndex. In our conversation with Jerry, we explore the creation of LlamaIndex, a centralized interface for connecting your external data with the latest large language models. We discuss the challenges of adding private data to language models and how LlamaIndex connects the two for better decision-making. We also explore the role of agents in automation, the evolution of the agent abstraction space, and the difficulties of optimizing queries over large amounts of complex data. Finally, we cover a range of topics, from combining summarization with semantic search, to automating reasoning, to improving language model results by exploiting relationships between nodes in data. The complete show notes for this episode can be found at twimlai.com/go/628.
8 May 2023 · 41 min

Hyperparameter Optimization through Neural Network Partitioning with Christos Louizos - #627
Today we kick off our coverage of the 2023 ICLR conference joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper Hyperparameter Optimization through Neural Network Partitioning and a few of his colleagues’ works from the conference. We discuss methods for speeding up attention mechanisms in transformers, scheduling operations for computation graphs, estimating channels in indoor environments, and adapting to distribution shifts at test time with neural network modules. We also talk through the benefits and limitations of federated learning, exploring sparse models, optimizing communication between servers and devices, and much more. The complete show notes for this episode can be found at https://twimlai.com/go/627.
1 May 2023 · 33 min

Are LLMs Overhyped or Underappreciated? with Marti Hearst - #626
Today we’re joined by Marti Hearst, Professor at UC Berkeley. In our conversation with Marti, we explore the intricacies of AI language models, their usefulness in improving efficiency, and their potential for spreading misinformation. Marti expresses skepticism about whether these models truly have cognition comparable to the nuance of the human brain. We discuss the intersection of language and visualization and the need for specialized research to ensure safety and appropriateness for specific uses. We also delve into the latest tools and algorithms, such as Copilot and ChatGPT, which enhance programming and help identify comparisons, respectively. Finally, we discuss Marti’s long research history in search and her breakthrough in developing a now-standard interaction for finding items on websites and in library catalogs. The complete show notes for this episode can be found at https://twimlai.com/go/626.
24 Apr 2023 · 37 min

Are Large Language Models a Path to AGI? with Ben Goertzel - #625
Today we’re joined by Ben Goertzel, CEO of SingularityNET. In our conversation with Ben, we explore all things AGI, including the potential scenarios that could arise with the advent of AGI and his preference for a decentralized rollout comparable to the internet or Linux. Ben shares his research on bridging neural nets, symbolic logic engines, and evolutionary programming engines to develop a common mathematical framework for AI paradigms. We also discuss the limitations of large language models and the potential of hybridizing LLMs with other AGI approaches. Additionally, we chat about his work using LLMs for music generation and the limitations of formalizing creativity. Finally, Ben discusses his team’s work with the OpenCog Hyperon framework and Simuli to achieve AGI, and the potential implications of their research in the future. The complete show notes for this episode can be found at https://twimlai.com/go/625.
17 Apr 2023 · 59 min

Open Source Generative AI at Hugging Face with Jeff Boudier - #624
Today we’re joined by Jeff Boudier, head of product at Hugging Face 🤗. In our conversation with Jeff, we explore the current landscape of open-source machine learning tools and models, the recent shift towards consumer-focused releases, and the importance of making ML tools accessible. We also discuss the growth of the Hugging Face Hub, which currently hosts over 150k models, and how formalizing their collaboration with AWS will help drive the adoption of open-source models in the enterprise. The complete show notes for this episode can be found at twimlai.com/go/624.
11 Apr 2023 · 33 min
