AI Security - Can LLM be Attacked?

AI Security Podcast - ChatGPT and other generative AI tools are built on Large Language Models (LLMs), but can these AI systems be attacked? ☠ 🤔 In this 3-part AI Security series from Cloud Security Podcast, we talk about the importance of AI security and how to protect your Large Language Model (LLM) program from attack, and how LLMs can be attacked by malicious threat actors beyond the phishing emails everyone has been talking about. Who is this episode for? If you work with LLMs used by an AI system, or are working on securing an internal LLM being built, you will find this episode helpful in understanding the types of attacks that can be used against an LLM.

Useful Resources are listed here:

- NIST AI Risk Management Framework - https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
- MITRE ATLAS (Adversarial Threat Landscape for AI Systems) - https://atlas.mitre.org/
- OWASP Top 10 for LLM Applications - https://owasp.org/www-project-top-10-for-large-language-model-applications/descriptions/
- The AI Attack Surface Map v1.0 - Daniel Miessler, Unsupervised Learning - https://danielmiessler.com/blog/the-ai-attack-surface-map-v1-0/


YouTube Link to the Episode - https://youtu.be/Yl9qqt9C5lE

Episode Show Notes, Links and Transcript on Cloud Security Podcast: www.cloudsecuritypodcast.tv


FREE CLOUD BOOTCAMPs on www.cloudsecuritybootcamp.com


Host Twitter: Ashish Rajan (@hashishrajan)

Podcast Twitter - @CloudSecPod, @CloudSecureNews


If you want to watch videos of this live-streamed episode and past episodes, check out our other Cloud Security social channels:

- Cloud Security News
- Cloud Security BootCamp


Spotify Timestamps for the Episode

(00:00) Intro
(00:49) LLM Explained
(01:40) LLM Application Input Prompts
(03:01) Data used by LLM Applications
(04:58) LLM Applications Themselves
(08:15) Infrastructure used to host LLM Applications
(11:11) What about Responsible AI?
(12:05) Ways to protect LLM Applications against these attacks
(13:00) Useful Resources for AI Security
(13:30) How do you defend against AI Attacks?
(13:38) Outro - Thank you for watching & subscribing


See you at the next episode!

Episodes (344)

AI Vulnerability Management: Why You Can't Patch a Neural Network

Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong? In this e...

13 Jan, 41 min

Why Backups Aren't Enough & Identity Recovery is Key against Ransomware

Think your cloud backups will save you from a ransomware attack? Think again. In this episode, Matt Castriotta (Field CTO at Rubrik) explains why the traditional "I have backups" mindset is dangerous....

16 Dec 2025, 37 min

How to secure your AI Agents: A CISO's Journey

Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditi...

9 Dec 2025, 54 min

AI-First Vulnerability Management: Should CISOs Build or Buy?

Thinking of building your own AI security tool? In this episode, Santiago Castiñeira, CTO of Maze, breaks down the realities of the "Build vs. Buy" debate for AI-first vulnerability management.While b...

4 Dec 2025, 1 h 1 min

SIEM vs. Data Lake: Why We Ditched Traditional Logging?

In this episode, Cliff Crosland, CEO & co-founder of Scanner.dev, shares his candid journey of trying (and initially failing) to build an in-house security data lake to replace an expensive traditiona...

2 Dec 2025, 46 min

How to Build Trust in an AI SOC for Regulated Environments

How do you establish trust in an AI SOC, especially in a regulated environment? Grant Oviatt, Head of SOC at Prophet Security and a former SOC leader at Mandiant and Red Canary, tackles this head-on a...

18 Nov 2025, 42 min

Threat Modeling the AI Agent: Architecture, Threats & Monitoring

Are we underestimating how the agentic world is impacting cybersecurity? We spoke to Mohan Kumar, who worked on production security at Box, for a deep dive into the threats of true autonomous AI agents. The c...

11 Nov 2025, 47 min

AI is already breaking the Silos Between AppSec & CloudSec

The silos between Application Security and Cloud Security are officially breaking down, and AI is the primary catalyst. In this episode, Tejas Dakve, Senior Manager, Application Security, Bloomberg In...

4 Nov 2025, 1 h 11 min