How to Build Trust in an AI SOC for Regulated Environments


How do you establish trust in an AI SOC, especially in a regulated environment? Grant Oviatt, Head of SOC at Prophet Security and a former SOC leader at Mandiant and Red Canary, tackles this head-on as a self-proclaimed "AI skeptic." Grant shares that after 15 years of being "scared to death" by high-false-positive AI, modern LLMs have changed the game.

The key to trust lies in two pillars: explainability (is the decision reasonable?) and traceability (can you audit the entire data trail, including all 40-50 queries?). Grant discusses the critical architectural components for regulated industries, including single-tenancy, bring-your-own-cloud (BYOC) for data sovereignty, and model portability.

In this episode, we compare AI SOCs to traditional MDRs and dig into real-world "bake-off" results, where an AI SOC reached 99.3% agreement with a human team across 12,000 alerts while being 11x faster, with an average investigation time of just four minutes.


Guest Socials - Grant's Linkedin

Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:

- Cloud Security Podcast - YouTube

- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast


(00:00) Introduction
(02:00) Who is Grant Oviatt?
(02:30) How to Establish Trust in an AI SOC for Regulated Environments
(03:45) Explainability vs. Traceability: The Two Pillars of Trust
(06:00) The "Hard SOC Life": Pre-AI vs. AI SOC
(09:00) From AI Skeptic to AI SOC Founder: What Changed?
(10:50) The "Aha!" Moment: Breaking Problems into Bite-Sized Pieces
(12:30) What Regulated Bodies Expect from an AI SOC
(13:30) Data Management: The Key for Regulated Industries (PII/PHI)
(14:40) Why Point-in-Time Queries are Safer than a SIEM
(15:10) Bring-Your-Own-Cloud (BYOC) for Financial Services
(16:20) Single-Tenant Architecture & No Training on Customer Data
(17:40) Bring-Your-Own-Model: The Rise of Model Portability
(19:20) AI SOC vs. MDR: Can it Replace Your Provider?
(19:50) The 4-Minute Investigation: Speed & Custom Detections
(21:20) The Reality of Building Your Own AI SOC (Build vs. Buy)
(23:10) Managing Model Drift & Updates
(24:30) Why Prophet Avoids MCPs: The Lack of Auditability
(26:10) How Far Can AI SOC Go? (Analysis vs. Threat Hunting)
(27:40) The Future: From "Human in the Loop" to "Manager in the Loop"
(28:20) Do We Still Need a Human in the Loop? (95% Auto-Closed)
(29:20) The Red Lines: What AI Shouldn't Automate (Yet)
(30:20) The Problem with "Creative" AI Remediation
(33:10) What AI SOC is Not Ready For (Risk Appetite)
(35:00) Gaining Confidence: The 12,000 Alert Bake-Off (99.3% Agreement)
(37:40) Fun Questions: Iron Mans, Texas BBQ & Seafood


Thank you to Prophet Security for sponsoring this episode.

Episodes (343)

Browser Security Explained: Consent Phishing, "Click Fix" Attacks & The Limits of EDR


Is your security team treating your Identity Provider (IDP) like a firewall? In this episode, Adam Bateman (CEO & Co-founder of Push Security) explains why that's a dangerous mistake and how modern at...

10 Mar 46min

Is AI Hallucinations a Myth and the Real Threat from AI


Are attackers really using AI to run end-to-end cyber campaigns? In this episode, Edward Wu (Founder and CEO, DropzoneAI) joins Ashish to separate the hype from reality when it comes to AI-driven atta...

6 Mar 40min

Why AI Infrastructure is Harder to Secure Than Cloud


Is AI security just "Cloud Security 2.0"? Toni De La Fuente, creator of the open-source tool Prowler, joins Ashish to explain why securing AI workloads requires a fundamentally different approach than...

20 Feb 34min

How Attackers Bypass AI Guardrails with Natural Language


In the world of Generative AI, natural language has become the new executable. Attackers no longer need complex code to breach your systems, sometimes, asking for a "poem" is enough to steal your pass...

10 Feb 46min

Vulnerability Management vs. Exposure Management


In this episode, Brad Hibbert (COO & Chief Strategy Officer at Brinqa) joins Ashish to explain why traditional risk-based vulnerability management (RBVM) is no longer enough in a cloud-first world. We...

6 Feb 39min

Is Developer Friendly AI Security Possible with MCP & Shadow AI


Is "developer-friendly" AI security actually possible? In this episode, Bryan Woolgar-O'Neil (CTO & Co-founder of Harmonic Security) joins Ashish to dismantle the traditional "block everything" approa...

5 Feb 1h 3min

Why AI Can't Replace Detection Engineers: Build vs. Buy & The Future of SOC


Is the AI SOC a reality, or just vendor hype? In this episode, Antoinette Stevens (Principal Security Engineer at Ramp) joins Ashish to dissect the true state of AI in detection engineering.Antoinette...

21 Jan 52min

AI Vulnerability Management: Why You Can't Patch a Neural Network


Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong? In this e...

13 Jan 41min