AI for SOC Automation: A Blueprint for the New World of Incident Response

The nature of Security Operations is changing. As cloud environments grow in complexity and data volumes explode, traditional approaches to detection and response are proving insufficient. This episode features an in-depth conversation with Kyle Polley, who leads the AI security team at Perplexity, about a modern blueprint for the Security Operations Center (SOC).

The discussion centers on a necessary architectural shift away from traditional SIEMs, which were not built for today's scale, toward a "data lake infrastructure built for detection and response". Kyle explains how this model provides the scalability needed to handle modern data loads and enables a more effective incident response process.

A cornerstone of this new model is the use of centralized AI agents. The conversation explores how these agents can be tasked with performing in-depth alert investigations, helping to reduce analyst burnout and allowing security teams to focus on more proactive, high-impact work. This approach moves beyond simple automation to create a system where AI augments and enhances the capabilities of the human team.
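As a rough illustration of the pattern discussed, alerts enriched with context pulled from a central security data lake and then triaged by an agent, here is a minimal Python sketch. The log data, function names (`gather_context`, `investigate`), and the simple rule-based verdict standing in for an LLM's reasoning step are all hypothetical.

```python
from dataclasses import dataclass

# Toy "security data lake": in practice this would be a warehouse table
# of normalized auth logs queried with SQL, not an in-memory list.
AUTH_LOGS = [
    {"user": "alice", "source_ip": "203.0.113.7", "outcome": "failure"},
    {"user": "alice", "source_ip": "203.0.113.7", "outcome": "failure"},
    {"user": "alice", "source_ip": "203.0.113.7", "outcome": "success"},
    {"user": "bob",   "source_ip": "198.51.100.2", "outcome": "success"},
]

@dataclass
class Alert:
    rule: str
    user: str

def gather_context(alert: Alert) -> list[dict]:
    """Pull related events for the alerted user from the data lake."""
    return [e for e in AUTH_LOGS if e["user"] == alert.user]

def investigate(alert: Alert) -> dict:
    """Rule-based stand-in for the agent's reasoning step. In the
    architecture described, an LLM agent would read this context
    and decide whether to escalate to a human analyst."""
    events = gather_context(alert)
    failures = sum(1 for e in events if e["outcome"] == "failure")
    succeeded = any(e["outcome"] == "success" for e in events)
    verdict = "escalate" if failures >= 2 and succeeded else "close"
    return {"alert": alert.rule, "events": len(events), "verdict": verdict}

print(investigate(Alert(rule="multiple-failed-logins", user="alice")))
# → {'alert': 'multiple-failed-logins', 'events': 3, 'verdict': 'escalate'}
```

The design point is the separation: cheap, scalable context gathering from the data lake, with the (expensive) agent reasoning applied only once the context is assembled.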


Guest Socials - Kyle's LinkedIn

Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes - check out our other Cloud Security Social Channels:

- Cloud Security Podcast - YouTube

- Cloud Security Newsletter

- Cloud Security BootCamp

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity Podcast


Questions asked:

(00:00) Introduction to Kyle Polley & The Future of SOCs
(01:03) The Core Argument: Why You Must Build Your SOC Before Compliance
(03:34) Beyond the Certificate: The Difference Between Being Compliant vs. Secure
(04:20) Today's #1 AI Threat: The Challenge of Prompt Injection
(06:00) The Architectural Flaw: Handling Untrusted Data in AI Systems
(08:20) The "Security Data Lake": Moving Beyond the Traditional SIEM
(15:00) The Future is Now: A Centralized AI Agent for Automated Investigations
(20:06) Will AI Take My Job? How AI Elevates, Not Replaces, the Security Analyst
(25:20) Redefining "Shifting Left" with Personal AI Security Agents
(31:00) Can AI Reason? How Modern AI Agents Intelligently Query Logs
(37:05) Rethinking Incident Response Playbooks in the Age of AI
(41:00) The MVP SOC: A Practical Roadmap for Small & Medium Companies
(46:08) Final Questions: Maintaining Optimism, Woodworking, and Tex-Mex
(50:08) Where to Connect with Kyle Polley


Resources spoken about during the episode:

Easy Agents: an open-source framework

How to give every department their own AI Agent

Episodes (343)

Browser Security Explained: Consent Phishing, "Click Fix" Attacks & The Limits of EDR

Is your security team treating your Identity Provider (IDP) like a firewall? In this episode, Adam Bateman (CEO & Co-founder of Push Security) explains why that's a dangerous mistake and how modern at...

10 March, 46 min

Are AI Hallucinations a Myth and the Real Threat from AI

Are attackers really using AI to run end-to-end cyber campaigns? In this episode, Edward Wu (Founder and CEO, DropzoneAI) joins Ashish to separate the hype from reality when it comes to AI-driven atta...

6 March, 40 min

Why AI Infrastructure is Harder to Secure Than Cloud

Is AI security just "Cloud Security 2.0"? Toni De La Fuente, creator of the open-source tool Prowler, joins Ashish to explain why securing AI workloads requires a fundamentally different approach than...

20 February, 34 min

How Attackers Bypass AI Guardrails with Natural Language

In the world of Generative AI, natural language has become the new executable. Attackers no longer need complex code to breach your systems; sometimes, asking for a "poem" is enough to steal your pass...

10 February, 46 min

Vulnerability Management vs. Exposure Management

In this episode, Brad Hibbert (COO & Chief Strategy Officer at Brinqa) joins Ashish to explain why traditional risk-based vulnerability management (RBVM) is no longer enough in a cloud-first world. We...

6 February, 39 min

Is Developer Friendly AI Security Possible with MCP & Shadow AI

Is "developer-friendly" AI security actually possible? In this episode, Bryan Woolgar-O'Neil (CTO & Co-founder of Harmonic Security) joins Ashish to dismantle the traditional "block everything" approa...

5 February, 1 h 3 min

Why AI Can't Replace Detection Engineers: Build vs. Buy & The Future of SOC

Is the AI SOC a reality, or just vendor hype? In this episode, Antoinette Stevens (Principal Security Engineer at Ramp) joins Ashish to dissect the true state of AI in detection engineering. Antoinette...

21 January, 52 min

AI Vulnerability Management: Why You Can't Patch a Neural Network

Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong? In this e...

13 January, 41 min