The Startup Powering The Data Behind AGI

In this episode of Gradient Dissent, Lukas Biewald talks with Edwin Chen, CEO and founder of Surge AI, the billion-dollar company quietly powering the next generation of frontier LLMs. They discuss Surge's origin story, why traditional data labeling is broken, and how the company's research-focused approach is reshaping how models are trained.

You’ll hear why inter-annotator agreement fails in high-complexity tasks like poetry and math, why synthetic data is often overrated, and how Surge builds rich RL environments to stress-test agentic reasoning. They also go deep on what kinds of data will be critical to future progress in AI—from scientific discovery to multimodal reasoning and personalized alignment.


It’s a rare, behind-the-scenes look into the world of high-quality data generation at scale—straight from the team most frontier labs trust to get it right.


Timestamps:

00:00 – Intro: Who is Edwin Chen?

03:40 – The problem with early data labeling systems

06:20 – Search ranking, clickbait, and product principles

10:05 – Why Surge focused on high-skill, high-quality labeling

13:50 – From Craigslist workers to a billion-dollar business

16:40 – Scaling without funding and avoiding Silicon Valley status games

21:15 – Why most human data platforms lack real tech

25:05 – Detecting cheaters, liars, and low-quality labelers

28:30 – Why inter-annotator agreement is a flawed metric

32:15 – What makes a great poem? Not checkboxes

36:40 – Measuring subjective quality rigorously

40:00 – What types of data are becoming more important

44:15 – Scientific collaboration and frontier research data

47:00 – Multimodal data, Argentinian coding, and hyper-specificity

50:10 – What's wrong with LMSYS and benchmark hacking

53:20 – Personalization and taste in model behavior

56:00 – Synthetic data vs. high-quality human data


Follow Weights & Biases:

https://twitter.com/weights_biases

https://www.linkedin.com/company/wandb

Episodes (128)

Advanced AI Accelerators and Processors with Andrew Feldman of Cerebras Systems


On this episode, we’re joined by Andrew Feldman, Founder and CEO of Cerebras Systems. Andrew and the Cerebras team are responsible for building the largest-ever computer chip and the fastest AI-specific processor in the industry.

We discuss:
- The advantages of using large chips for AI work.
- Cerebras Systems’ process for building chips optimized for AI.
- Why traditional GPUs aren’t the optimal machines for AI work.
- Why efficiently distributing computing resources is a significant challenge for AI work.
- How much faster Cerebras Systems’ machines are than other processors on the market.
- Reasons why some ML-specific chip companies fail and what Cerebras does differently.
- Unique challenges for chip makers and hardware companies.
- Cooling and heat-transfer techniques for Cerebras machines.
- How Cerebras approaches building chips that will fit the needs of customers for years to come.
- Why the strategic vision for what data to collect for ML needs more discussion.

Resources:
Andrew Feldman - https://www.linkedin.com/in/andrewdfeldman/
Cerebras Systems - https://www.linkedin.com/company/cerebras-systems/
Cerebras Systems | Website - https://www.cerebras.net/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

22 June 2023 · 1h

Enabling LLM-Powered Applications with Harrison Chase of LangChain


On this episode, we’re joined by Harrison Chase, Co-Founder and CEO of LangChain. Harrison and his team at LangChain are on a mission to make the process of creating applications powered by LLMs as easy as possible.

We discuss:
- What LangChain is and examples of how it works.
- Why LangChain has gained so much attention.
- When LangChain started and what sparked its growth.
- Harrison’s approach to community-building around LangChain.
- Real-world use cases for LangChain.
- What parts of LangChain Harrison is proud of and which parts can be improved.
- Details around evaluating effectiveness in the ML space.
- Harrison's opinion on fine-tuning LLMs.
- The importance of detailed prompt engineering.
- Predictions for the future of LLM providers.

Resources:
Harrison Chase - https://www.linkedin.com/in/harrison-chase-961287118/
LangChain | LinkedIn - https://www.linkedin.com/company/langchain/
LangChain | Website - https://docs.langchain.com/docs/

1 June 2023 · 51 min

Deploying Autonomous Mobile Robots with Jean Marc Alkazzi at idealworks


On this episode, we’re joined by Jean Marc Alkazzi, Applied AI at idealworks. Jean Marc focuses on applied AI, leveraging autonomous mobile robots (AMRs) to improve efficiency within factories and more.

We discuss:
- Use cases for autonomous mobile robots (AMRs) and how to manage a fleet of them.
- How AMRs interact with humans working in warehouses.
- The challenges of building and deploying autonomous robots.
- Computer vision vs. other types of localization technology for robots.
- The purpose and types of simulation environments for robotic testing.
- The importance of aligning a robotic fleet’s workflow with concrete business objectives.
- What the update process looks like for robots.
- The importance of avoiding your own biases when developing and testing AMRs.
- The challenges associated with troubleshooting ML systems.

Resources:
Jean Marc Alkazzi - https://www.linkedin.com/in/jeanmarcjeanazzi/
idealworks | LinkedIn - https://www.linkedin.com/company/idealworks-gmbh/
idealworks | Website - https://idealworks.com/

18 May 2023 · 58 min

How EleutherAI Trains and Releases LLMs: Interview with Stella Biderman


On this episode, we’re joined by Stella Biderman, Executive Director at EleutherAI and Lead Scientist - Mathematician at Booz Allen Hamilton. EleutherAI is a grassroots collective that enables open-source AI research and focuses on the development and interpretability of large language models (LLMs).

We discuss:
- How EleutherAI got its start and where it's headed.
- The similarities and differences between various LLMs.
- How to decide which model to use for your desired outcome.
- The benefits and challenges of reinforcement learning from human feedback.
- Details around pre-training and fine-tuning LLMs.
- Which types of GPUs are best when training LLMs.
- What separates EleutherAI from other companies training LLMs.
- Details around mechanistic interpretability.
- Why understanding what and how LLMs memorize is important.
- The importance of giving researchers and the public access to LLMs.

Resources:
Stella Biderman - https://www.linkedin.com/in/stellabiderman/
EleutherAI | LinkedIn - https://www.linkedin.com/company/eleutherai/
EleutherAI | Website - https://www.eleuther.ai/

4 May 2023 · 57 min

Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere


On this episode, we’re joined by Aidan Gomez, Co-Founder and CEO at Cohere. Cohere develops and releases a range of innovative AI-powered tools and solutions for a variety of NLP use cases.

We discuss:
- What “attention” means in the context of ML.
- Aidan’s role in the “Attention Is All You Need” paper.
- What state-space models (SSMs) are, and how they could be an alternative to transformers.
- What it means for an ML architecture to saturate compute.
- Details around data constraints for when LLMs scale.
- Challenges of measuring LLM performance.
- How Cohere is positioned within the LLM development space.
- Insights around scaling down an LLM into a more domain-specific one.
- Concerns around synthetic content and AI changing public discourse.
- The importance of raising money at healthy milestones for AI development.

Resources:
Aidan Gomez - https://www.linkedin.com/in/aidangomez/
Cohere | LinkedIn - https://www.linkedin.com/company/cohere-ai/
Cohere | Website - https://cohere.ai/
“Attention Is All You Need”

20 April 2023 · 51 min

Neural Network Pruning and Training with Jonathan Frankle at MosaicML


Jonathan Frankle, Chief Scientist at MosaicML and Assistant Professor of Computer Science at Harvard University, joins us on this episode. With comprehensive infrastructure and software tools, MosaicML aims to help businesses train complex machine-learning models using their own proprietary data.

We discuss:
- Details of Jonathan’s Ph.D. dissertation, which explores his “Lottery Ticket Hypothesis.”
- The role of neural network pruning and how it impacts the performance of ML models.
- Why transformers will be the go-to way to train NLP models for the foreseeable future.
- Why the process of speeding up neural net learning is both scientific and artisanal.
- What MosaicML does, and how it approaches working with clients.
- The challenges of developing AGI.
- Details around ML training policy and ethics.
- Why data brings the magic to customized ML models.
- The many use cases for companies looking to build customized AI models.

Resources:
Jonathan Frankle - https://www.linkedin.com/in/jfrankle/
MosaicML | Website - https://mosaicml.com/
“The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks”

4 April 2023 · 1h 2 min

Shreya Shankar — Operationalizing Machine Learning


Shreya Shankar is a computer scientist, PhD student in databases at UC Berkeley, and co-author of "Operationalizing Machine Learning: An Interview Study", an ethnographic interview study with 18 machine learning engineers across a variety of industries on their experience deploying and maintaining ML pipelines in production.

Shreya explains the high-level findings of "Operationalizing Machine Learning": variables that indicate a successful deployment (velocity, validation, and versioning), common pain points, and a grouping of the MLOps tool stack into four layers. Shreya and Lukas also discuss examples of data challenges in production, Jupyter Notebooks, and reproducibility.

Show notes (transcript and links): http://wandb.me/gd-shreya

Host: Lukas Biewald

Subscribe and listen to Gradient Dissent today!
Apple Podcasts: http://wandb.me/apple-podcasts
Google Podcasts: http://wandb.me/google-podcasts
Spotify: http://wandb.me/spotify

3 March 2023 · 54 min

Sarah Catanzaro — Remembering the Lessons of the Last AI Renaissance


Sarah Catanzaro is a General Partner at Amplify Partners and one of the leading investors in AI and ML. Her investments include RunwayML, OctoML, and Gantry.

Sarah and Lukas discuss lessons learned from the "AI renaissance" of the mid-2010s and compare the general perception of ML back then to now. Sarah also provides insights from her perspective as an investor, from selling into tech-forward companies vs. traditional enterprises, to the current state of MLOps/developer tools, to large language models and hype bubbles.

Show notes (transcript and links): http://wandb.me/gd-sarah-catanzaro

Timestamps:
0:00 Intro
1:10 Lessons learned from previous AI hype cycles
11:46 Maintaining technical knowledge as an investor
19:05 Selling into tech-forward companies vs. traditional enterprises
25:09 Building point solutions vs. end-to-end platforms
36:27 LLMs, new tooling, and commoditization
44:39 Failing fast and how startups can compete with large cloud vendors
52:31 The gap between research and industry, and vice versa
1:00:01 Advice for ML practitioners during hype bubbles
1:03:17 Sarah's thoughts on Rust and bottlenecks in deployment
1:11:23 The importance of aligning technology with people
1:15:58 Outro

Links:
"Operationalizing Machine Learning: An Interview Study" (Shankar et al., 2022), an interview study on deploying and maintaining ML production pipelines: https://arxiv.org/abs/2209.09125

Connect with Sarah:
Sarah on Twitter: https://twitter.com/sarahcat21
Sarah's Amplify Partners profile: https://www.amplifypartners.com/investment-team/sarah-catanzaro

Host: Lukas Biewald
Producers: Riley Fields, Angelica Pan

2 February 2023 · 1h 16 min
