
Machine Learning and Molecular Simulation – Intel on AI Season 3, Episode 10
In this episode of Intel on AI host Amir Khosrowshahi talks with Ron Dror about breakthroughs in computational biology and molecular simulation. Ron is an Associate Professor of Computer Science in the Stanford Artificial Intelligence Lab, leading a research group that uses machine learning and molecular simulation to elucidate biomolecular structure, dynamics, and function, and to guide the development of more effective medicines. Previously, Ron worked on the Anton supercomputer at D. E. Shaw Research after earning degrees in electrical engineering, computer science, biological sciences, and mathematics from MIT, Cambridge, and Rice. His groundbreaking research has been published in journals such as Science and Nature, presented at conferences like Neural Information Processing Systems (NeurIPS), and recognized with awards from the Association for Computing Machinery (ACM) and other organizations. In the podcast episode, Ron talks about his work with several important collaborators, his interdisciplinary approach to research, and how molecular modeling has improved over the years. He goes into detail about the generation-over-generation advancements made in the Anton supercomputer, including its software, and his recent work at Stanford with molecular dynamics simulations and machine learning. The podcast closes with Amir asking detailed questions about Ron and his team’s recent paper on RNA structure prediction, which was featured on the cover of Science.
Academic research discussed in the podcast episode:
- Statistics of real-world illumination
- The Role of Natural Image Statistics in Biological Motion Estimation
- Surface reflectance recognition and real-world illumination statistics
- Accuracy of velocity estimation by Reichardt correlators
- Principles of Neural Design
- Levinthal's paradox
- Potassium channels
- Structural and Thermodynamic Properties of Selective Ion Binding in a K+ Channel
- Scalable Algorithms for Molecular Dynamics Simulations on Commodity Clusters
- Long-timescale molecular dynamics simulations of protein structure and function
- Parallel random numbers: as easy as 1, 2, 3
- Biomolecular Simulation: A Computational Microscope for Molecular Biology
- Anton 2: Raising the Bar for Performance and Programmability in a Special-Purpose Molecular Dynamics Supercomputer
- Molecular Dynamics Simulation for All
- Structural basis for nucleotide exchange in heterotrimeric G proteins
- How GPCR Phosphorylation Patterns Orchestrate Arrestin-Mediated Signaling
- Highly accurate protein structure prediction with AlphaFold
- ATOM3D: Tasks on Molecules in Three Dimensions
- Geometric deep learning of RNA structure
4 May 2022 · 59 min

AI and Nanocomputing – Intel on AI Season 3, Episode 9
In this episode of Intel on AI host Amir Khosrowshahi, assisted by Dmitri Nikonov, talks with Jean Anne Incorvia about the use of new physics in nanocomputing, specifically with spintronic logic and 2D materials. Jean is an Assistant Professor and holds the Fellow of Advanced Micro Devices Chair in Computer Engineering in the Department of Electrical and Computer Engineering at The University of Texas at Austin, where she directs the Integrated Nano Computing Lab. Dmitri is a Principal Engineer in Components Research at Intel. He holds a Master of Science in Aeromechanical Engineering from the Moscow Institute of Physics and Technology and a Ph.D. from Texas A&M. Dmitri works on the discovery and simulation of nanoscale logic devices and manages joint research projects with multiple universities. He has authored dozens of research papers in the areas of quantum nanoelectronics, spintronics, and non-Boolean architectures. In the episode, Jean talks about her background in condensed matter physics and solid-state electronics. She explains how magnetic properties and atomically thin materials, like graphene, can be leveraged at the nanoscale for beyond-CMOS computing. Jean goes into detail about domain wall magnetic tunnel junctions and why such devices might have a lower energy cost than the modern process of encoding information in charge. She sees these new types of devices as compatible with CMOS computing and as part of a larger journey toward beyond-von Neumann architectures that will advance the evolution of artificial intelligence, neural networks, deep learning, machine learning, and neuromorphic computing. The episode closes with Jean, Amir, and Dmitri talking about the broadening definition of quantum computing, existential philosophy, and AI ethics.
Academic research discussed in the podcast episode:
- Being and Time
- Cosmic microwave background radiation anisotropies: Their discovery and utilization
- Nanotube Molecular Wires as Chemical Sensors
- Visualization of exciton transport in ordered and disordered molecular solids
- Nanoscale Magnetic Materials for Energy-Efficient Spin Based Transistors
- Lateral Inhibition Pyramidal Neural Network for Image Classification
- Magnetic domain wall neuron with lateral inhibition
- Maximized Lateral Inhibition in Paired Magnetic Domain Wall Racetracks for Neuromorphic Computing
- Domain wall-magnetic tunnel junction spin–orbit torque devices and circuits for in-memory computing
- High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural Network
30 Mar 2022 · 46 min

Designing Molecules with AI – Intel on AI Season 3, Episode 8
In this episode of Intel on AI hosts Amir Khosrowshahi and Santiago Miret talk with Alán Aspuru-Guzik about the chemistry of computing and the future of materials discovery. Alán is a professor of chemistry and computer science at the University of Toronto, a Canada 150 Research Chair in theoretical chemistry, a CIFAR AI Chair at the Vector Institute, and a CIFAR Lebovic Fellow in the biology-inspired Solar Energy Program. Alán also holds a Google Industrial Research Chair in quantum computing and is the co-founder of two startups, Zapata Computing and Kebotix. Santiago Miret is an AI researcher in Intel Labs who has an active research collaboration with Alán. Santiago works at the intersection of AI and the sciences, as well as on the algorithmic development of AI for real-world problems. In the first half of the episode, the three discuss accelerating molecular design and building next-generation functional materials. Alán talks about his academic background in high performance computing (HPC) that led him into the field of molecular design. He goes into detail about building a “self-driving lab” for scientific experimentation, which, coupled with advanced automation and robotics, he believes will help propel society beyond the era of plastics and into the era of materials by demand. Alán and Santiago talk about their research collaboration with Intel to build sophisticated model-based molecular design platforms that can scale to real-world challenges. Alán talks about the Acceleration Consortium and the need for standardization research to drive greater academic and industry collaboration on self-driving laboratories. In the second half of the episode, the three talk about quantum computing, including developing algorithms for quantum dynamics, molecular electronic structure, molecular properties, and more.
Alán talks about how a simple algorithm, based on thinking of the quantum computer like a musical instrument, is behind the concept of the variational quantum eigensolver, which could hold promising advancements alongside classical computers. Amir and Santiago close the episode by talking about the future of research, including projects at DARPA, oscillatory computing, quantum machine learning, quantum autoencoders, and how young technologists entering the field can advance a more equitable society.
Academic research discussed in the podcast episode:
- The Hot Topic: What We Can Do About Global Warming
- Energy, Transport, & the Environment
- Scalable Quantum Simulation of Molecular Energies
- The Harvard Clean Energy Project: Large-Scale Computational Screening and Design of Organic Photovoltaics on the World Community Grid
- Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning
- Neuroevolution-Enhanced Multi-Objective Optimization for Mixed-Precision Quantization
- Organic molecules with inverted gaps between first excited singlet and triplet states and appreciable fluorescence rates
- Simulated Quantum Computation of Molecular Energies
- Towards quantum chemistry on a quantum computer
- Work by Jarrod McClean, Man-Hong Yung, and others on the concept of the variational quantum eigensolver
- Experimental investigation of performance differences between coherent Ising machines and a quantum annealer
- Quantum autoencoders for efficient compression of quantum data
16 Feb 2022 · 56 min

Learning with AI – Intel on AI Season 3, Episode 7
In this episode of Intel on AI host Amir Khosrowshahi and Milena Marinova talk about using artificial intelligence for professional learning. Milena is currently the Vice President of Data and AI Solutions at Microsoft. At the time of recording this podcast (April 2021), Milena was the visionary and driving force behind the award-winning AI calculus tutoring application Aida and its capabilities platform in the AI Products & Solutions Group, which she founded and led at Pearson. With over 15 years of experience in machine learning, neural networks, computer vision, and the commercialization of new technologies, Milena holds an MBA from IMD in Lausanne, Switzerland and a B.Sc. with Honors in Computer Science from Caltech. She is a passionate advocate for innovation and has been a Venture Partner with Atlantic Bridge Capital, helping with AI investments and portfolio companies. Milena is also a co-founder of and advisor to several startups in Europe and the US and has previously held management positions at the startup incubator Idealab, as well as executive roles at Intel. In the podcast episode Amir and Milena discuss some of the challenges of developing artificial intelligence products, going from academic research into commercial deployment, and the importance of data policy by design. Milena describes some of the lessons she’s learned over the years.
Academic research discussed in the podcast episode:
- Learning from Data
- The Multi-Armed Bandit Problem: Decomposition and Computation
- Programmable Neural Logic
- Bubble Blinders: The Untold Story of the Search Business Model
- Regulating Innovation (conference panel)
- Intel RealSense Stereoscopic Depth Cameras
- Smart Robots: From the Lab to the World (podcast)
- Artificial Intelligence: A Modern Approach
- Self-supervised learning: The dark matter of intelligence
26 Jan 2022 · 34 min

Computing with DNA – Intel on AI Season 3, Episode 6
In this episode of Intel on AI host Amir Khosrowshahi and Luis Ceze talk about building better computer architectures, molecular biology, and synthetic DNA. Luis Ceze is the Lazowska Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, Co-founder and CEO at OctoML, and Venture Partner at Madrona Venture Group. His research focuses on the intersection of computer architecture, programming languages, machine learning, and biology. His current research focus is on approximate computing for efficient machine learning and DNA-based data storage. He co-directs the Molecular Information Systems Lab (misl.bio) and the Systems and Architectures for Machine Learning lab (sampl.ai). He has co-authored over 100 papers in these areas and has had several papers selected as IEEE Micro Top Picks and CACM Research Highlights. His research has been featured prominently in media outlets including The New York Times, Popular Science, MIT Technology Review, and The Wall Street Journal. He is a recipient of an NSF CAREER Award, a Sloan Research Fellowship, a Microsoft Research Faculty Fellowship, the 2013 IEEE TCCA Young Computer Architect Award, the 2020 ACM SIGARCH Maurice Wilkes Award, and the UIUC Distinguished Alumni Award. In the episode, Amir and Luis talk about DNA storage, which has the potential to be a million times denser than today's solid state storage. Luis goes into detail about the process he and fellow researchers at the University of Washington, along with a team from Microsoft, went through in order to store the high-definition music video “This Too Shall Pass” by the band OK Go onto DNA. Luis also discusses why enzymatic synthesis of DNA might potentially be environmentally sustainable, the advancements being made in similarity searches, and his role in creating the open source Apache TVM project, which aims to use machine learning to find the most efficient hardware and software combination optimizations.
Amir and Luis end the episode talking about why multi-technology systems with electronics, photonics, molecular systems, and even quantum components could be the future of compute.
Academic research discussed in the podcast episode:
- The biologic synthesis of deoxyribonucleic acid
- Towards practical, high-capacity, low-maintenance information storage in synthesized DNA
- DNA Hybridization Catalysts and Catalyst Circuits
- A simple DNA gate motif for synthesizing large-scale circuits
- A DNA-Based Archival Storage System
- Random access in large-scale DNA data storage
- Landscape of Next-Generation Sequencing Technologies
- Clustering Billions of Reads for DNA Data Storage
- Demonstration of End-to-End Automation of DNA Data Storage
- High density DNA data storage library via dehydration with digital microfluidic retrieval
- Probing the physical limits of reliable DNA data retrieval
- Stabilizing synthetic DNA for long-term data storage with earth alkaline salts
- Molecular-level similarity search brings computing to DNA data storage
- DNA Data Storage and Near-Molecule Processing for the Yottabyte Era
22 Dec 2021 · 38 min

Stephen Wolfram on the Current State of Artificial Intelligence – Intel on AI Season 3, Episode 5
In this episode of Intel on AI host Amir Khosrowshahi talks with Stephen Wolfram about the current state of artificial intelligence. Stephen is the founder and CEO of Wolfram Research, maker of the Wolfram Mathematica software system and the WolframAlpha computational knowledge engine, author of A New Kind of Science, and most recently originator of the Wolfram Physics Project, a collaborative effort to find the fundamental theory of physics. In the podcast episode, Stephen talks about the computational universe and the idea that, under the Principle of Computational Equivalence, even simple programs can have sophisticated abilities, but that these abilities are perceived as useless to humans and therefore remain underexplored. He discusses the need for shared computational languages that will allow people and machines to mine the wealth of available historic data so that it can be translated into usable knowledge. Amir and Stephen talk about a number of subjects during their two-hour conversation, including Immanuel Kant, Noam Chomsky, whether aliens might perceive a completely different part of physical reality than humans, encoding values for AI content ranking, and why Stephen left academia to develop his own research institute. Stephen discusses his predictions about the limitations of quantum computing, the potential of computing at the molecular scale, and what comes after semiconductor processing. He also explains why Einstein’s theory of relativity and spacetime is misunderstood. Amir asks Stephen to explain how multiway systems and the biology of neuroscience can be viewed in harmony.
Academic research discussed in the podcast episode:
- Critique of Pure Reason
- A Review of B. F. Skinner’s Verbal Behavior
- Perceptrons
- Workshop on Environments for Computational Mathematics
- A programming language
- Modern Cellular Automata: Theory and Applications
- Space and Time
- Gravitation
- My Time with Richard Feynman
- Some Relativistic and Gravitational Properties of the Wolfram Model
- The Wolfram Physics Project: A One-Year Update
- Multicomputation with Numbers: The Case of Simple Multiway Systems
- Algorithms for Inverse Reinforcement Learning
- Spiders are much smarter than you think
- Molecular Computation of Solutions to Combinatorial Problems
- A Learning Algorithm for Boltzmann Machines
- The Computational Brain
15 Dec 2021 · 2 h 14 min

Moving Beyond CMOS – Intel on AI Season 3, Episode 4
In this episode of Intel on AI host Amir Khosrowshahi, assisted by Dmitri Nikonov, talks with Ian Young about Intel’s long-term research to develop more energy-efficient computing based on exploratory materials and devices as well as non-traditional architectures. Ian is a Senior Fellow at Intel and the Director of Exploratory Integrated Circuits in Components Research. Ian was one of the key players in the advancement of dynamic and static random-access memory (DRAM, SRAM) and the integration of the bipolar junction transistor and the complementary metal-oxide-semiconductor (CMOS) gate into a single integrated circuit (BiCMOS). He developed the original Phase-Locked Loop (PLL) based clocking circuit in a microprocessor while working at Intel, contributing to massive improvements in computing power. Dmitri is a Principal Engineer in Components Research at Intel. He works on the discovery and simulation of nanoscale logic devices and manages joint research projects with multiple universities. Both Ian and Dmitri have authored dozens of research papers, many together, in the areas of quantum nanoelectronics, spintronics, and non-Boolean architectures. In the podcast episode, the three talk about moving beyond CMOS architecture, which is limited by current density and heat. By exploring new materials, the hope is to make significant improvements in energy efficiency that could greatly expand the performance of deep neural networks and other types of computing. The three discuss the possible applications of ferroelectric materials, quantum tunneling, spintronics, non-volatile memory and computing, and silicon photonics. Ian talks about some of the current materials challenges he and others are trying to solve, such as meeting operational performance targets and creating pristine interfaces, which mirror some of the same hurdles Intel executives Gordon Moore, Robert Noyce, and Andrew Grove faced in the past.
He describes why he believes low-voltage, magneto-electric spin-orbit (MESO) devices with quantum multiferroics (materials with coupled magnetic and ferroelectric order) have the most potential for improvement and widespread industry adoption.
Academic research discussed in the podcast episode:
- A PLL clock generator with 5 to 110 MHz of lock range for microprocessors
- Clock generation and distribution for the first IA-64 microprocessor
- CMOS scaling trends and beyond
- Overview of beyond-CMOS devices and a uniform methodology for their benchmarking
- Benchmarking of beyond-CMOS exploratory devices for logic integrated circuits
- Tunnel field-effect transistors: Prospects and challenges
- Scalable energy-efficient magnetoelectric spin–orbit logic
- Beyond CMOS computing with spin and polarization
- Optical I/O technology for tera-scale computing
- Device scaling considerations for nanophotonic CMOS global interconnects
- Coupled-oscillator associative memory array operation for pattern recognition
- Convolution inference via synchronization of a coupled CMOS oscillator array
- Benchmarking delay and energy of neural inference circuits
8 Dec 2021 · 1 h 5 min

The Need for New Deep Learning Architectures – Intel on AI Season 3, Episode 3
In this episode of Intel on AI host Amir Khosrowshahi and Yoshua Bengio talk about structuring future computers on the underlying physics and biology of human intelligence. Yoshua is a professor in the Department of Computer Science and Operations Research at the Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (Mila). In 2018 Yoshua received the ACM A.M. Turing Award with Geoffrey Hinton and Yann LeCun. In the episode, Yoshua and Amir discuss causal representation learning and out-of-distribution generalization, the limitations of modern hardware, and why current models consume exponentially increasing amounts of data and compute only to find slight improvements. Yoshua also goes into detail about equilibrium propagation, a learning algorithm that bridges machine learning and neuroscience by computing gradients closely matching those of backpropagation. Yoshua and Amir close the episode by talking about academic publishing, sharing information, and the responsibility to make sure artificial intelligence (AI) will not be misused in society, before touching briefly on some of the projects Intel and Mila are collaborating on, such as using parallel computing for the discovery of synthesizable molecules.
Academic research discussed in the podcast episode:
- Computing machinery and intelligence
- A quantitative description of membrane current and its application to conduction and excitation in nerve
- From System 1 Deep Learning to System 2 Deep Learning
- The Consciousness Prior
- BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning
- Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation
- A deep learning theory for neural networks grounded in physics
1 Dec 2021 · 37 min