How Lovable Manages 100+ Daily Changes, Vibe Coding & Shadow AI

What does it actually look like to run security inside one of Europe's fastest-growing AI companies? In this episode, recorded live at the Munich Cybersecurity Conference (MCSC), Ashish Rajan sat down with Igor Andriushchenko, Head of Security at Lovable, the AI-native platform that lets anyone build and ship full applications without writing a line of code.

Igor joined Lovable as employee #40. Six months later, the team had grown to 150+. Developers were running multi-agent workflows overnight, PMs were pushing pull requests, and the volume of code changes was hitting numbers that challenged every traditional security process they had. This is the security story nobody talks about in AI-native scale-ups, and Igor lived it.

In this episode, they cover:

- Why your CI/CD pipeline is being load-tested to destruction by AI-generated churn
- How to use PAM (Privileged Access Management) as a practical guardrail for AI agents so they can't escalate to production secrets
- Why the allow-list vs deny-list logic is reversed for AI agents compared to traditional security
- The overlooked SCA supply chain risk when AI recommends unmaintained or hallucinated packages
- Why old SAST tools are failing and what the new generation of agentic code scanners does differently
- How to identify and manage advanced, intermediate, and basic AI users in your org without killing their productivity
- The practical "crawl, walk, run" approach to building internal AI security tooling that actually sticks

Igor also shares how Lovable's security team built an incident response AI skill, how it uses reachability analysis agents to triage SCA findings for enterprise customers, and why the real investment isn't in the AI model but in the skills ecosystem and data connections underneath it.


Questions asked:

(00:00) Introduction: Securing the AI Workforce
(03:50) Who is Igor Andriushchenko? (Head of Security, Lovable)
(06:10) The Churn of Change: Why AI Will Break Your CI/CD
(10:40) The FOMO Problem: Don't Force AI Adoption
(11:50) The "Air Pocket" Strategy for Safe AI Experimentation
(14:00) The Context Paradox: More Access = Dumber AI
(17:40) Managing Agent Sprawl and "Advanced" Users
(19:40) Why You Must Treat AI Agents Like Human Developers (PAM Controls)
(22:30) The Need for AI Telemetry & Visibility
(27:50) Blurring Roles: When PMs Become Developers
(31:30) Why You Must Use "Deny Lists" Instead of "Allow Lists" for AI
(34:30) AI SAST vs. Traditional SAST: Finding Business Logic Flaws
(39:40) Supply Chain Risks: When AI Recommends Dead Libraries
(45:40) Building Custom AI Skills for Incident Response
(52:50) Fun Questions: Battlefield, Team Culture, and Comfort Food

Episodes (51)

The Zero-Click AI Hack: How to Contain the Blast Radius of Autonomous Agents

Is an AI agent's identity a workload or an action? Ashish spoke to Elie Bursztein, Distinguished Research Scientist and co-author of Google SAIF (Secure AI Framework) about how it is neither and that ...

29 Apr 47min

Buy vs. Build AI Security: Why Box.com CISO is Creating their Own Agentic SOC

If your AI solution is just helping humans process the same amount of alerts a little faster, you haven't transformed anything, you've just created a faster hamster wheel. In this episode, Ashish and C...

22 Apr 46min

Anthropic's Project Mythos: Why the "Zero-Day Machine" is Terrifying the Security Industry

In this episode, Ashish and Caleb discuss the internet-breaking preview of Project Mythos, an unreleased AI model from Anthropic that has shown an unprecedented, terrifying ability to reason through c...

18 Apr 1h 3min

Are AI Security Startups Faking It? How to Separate Signal from Noise

With over 70 startups claiming to have built the perfect "AI SOC Analyst" or "AI Threat Hunter," how do you separate the real products from the vaporware? Recorded live at Decibel RSAC Founder Festiva...

15 Apr 47min

Questions Every CISO Must Ask AI Security Vendors

RSA Conference 2026 is here and the AI agent hype machine is louder than ever. In this episode, Ashish and Caleb cut through the noise and arm CISOs, practitioners, and security teams with a clear-eye...

18 Mar 50min

Will Foundation Models Kill Security Startups?

Did Anthropic just kill the AppSec industry? Following the announcement of Claude Code Security, a tool that finds, reasons about, and fixes code vulnerabilities, major security stocks dropped by 8%...

5 Mar 59min

How to Build Your Own AI Chief of Staff with Claude Code

What if you could automate your entire work life with a personal AI Chief of Staff? In this episode, Caleb Sima reveals "Pepper," his custom-built AI agent to Ashish that manages emails, schedules mee...

11 Feb 47min
