Unsupervised Learning: No. 187

Lots of people in the security community went silly over the FaceApp application last week, basically saying you shouldn't use it because the company will steal your face and then be able to impersonate you. Oh, and then it turned out to be a Russian company that put out the application, which made it 100x worse.

The problem here is the lack of Threat Model Thinking. When it comes to election security, propaganda discussions, etc., I am quite concerned about Putin's willingness and ability to harm our country's cohesion through memes and social media. But that does not extend to some random company stealing faces. Why? Because before you can get legitimately concerned about something, you have to be able to describe a threat scenario in which that thing becomes dangerous.

As I talked about in this piece, pictures of your face are not the same as your face when it comes to biometric authentication. There's a reason companies need a specific device, combined with their custom algorithm, to enroll you in a facial identification system. They scan you in a very specific way and then store your data (which is just a representation, not your actual face) in a very specific way. Then they need to use that same exact system to scan you again, so they can compare the two representations to each other. That isn't happening with random apps that have pictures of you.

And even if it were, they could just pull your face off social media, where those same worried people are more than happy to take selfies, use their photos as profile pictures, and make sure as many people as possible see them. There are actual negative things that can be done with images (like making Deepfakes of you), and that will get easier over time, but the defense against that is to have zero pictures of you…anywhere. And once again you have to ask who would be doing that to you, and why.

Bottom line: authentication systems take special effort to ensure that the input given matches the enrollment item (face, fingerprint, etc.), so it will not be easy any time soon to go from a random picture to something that can fool a face scanner or fingerprint reader at the airport. People reading this probably already know this, but spread the word: threat modeling is one of our best tools for removing emotion from risk management.
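The enrollment-and-compare flow above can be sketched in a few lines. This is a toy illustration only (random vectors standing in for sensor captures, cosine similarity standing in for a vendor's proprietary matching algorithm — no real scanner works this simply): the point is that what gets stored is a representation produced by a specific pipeline, and verification only succeeds when the same pipeline re-scans you.

```python
import numpy as np

def enroll(scan: np.ndarray) -> np.ndarray:
    """Turn a raw sensor capture into a stored template: a compact
    numeric representation, not the face itself."""
    v = scan.astype(float).ravel()
    return v / np.linalg.norm(v)  # unit-length embedding

def verify(template: np.ndarray, new_scan: np.ndarray, threshold: float = 0.9) -> bool:
    """Re-capture with the SAME pipeline, then compare the two representations."""
    candidate = enroll(new_scan)
    similarity = float(template @ candidate)  # cosine similarity of unit vectors
    return similarity >= threshold

rng = np.random.default_rng(0)
scan_at_enrollment = rng.normal(size=128)                              # stand-in for a sensor capture
scan_at_gate = scan_at_enrollment + rng.normal(scale=0.05, size=128)   # same person, slight sensor noise
random_photo = rng.normal(size=128)                                    # unrelated input, e.g. a scraped picture

template = enroll(scan_at_enrollment)
print(verify(template, scan_at_gate))   # same pipeline, same person: accepted
print(verify(template, random_photo))   # arbitrary picture: rejected
```

A scraped photo never went through the enrollment pipeline, so it produces a representation that doesn't match — which is the whole argument against the "they'll steal your face" panic.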

A contractor named SyTech that does work for the Russian FSB has been breached, resulting in the release of 7.5TB of data on the FSB's various projects. This is obviously embarrassing for SyTech and the FSB, but the leaked projects focused on de-anonymization, spying on Russian businesses, and the effort to break Russia away from the Internet, all of which are known and expected efforts. So there don't seem to be any big reveals as a result of the leak. More

Someone discovered that a bunch of browser extensions were reading things they shouldn't be, and sending them to places they shouldn't be. This is not surprising to me. Chrome extensions are like Android apps, which should tell you all you need to know about installing random ones that seem interesting. My policy on browser extensions is extremely strict for this reason.

People need to understand how insane the entire idea of the modern web is. We're visiting URLs that execute code on our machines. And not just code from that website, but code from thousands of other websites in an average browsing session. It's a garbage fire. And the only real defense is to question how much you trust your browser, your operating system, and the original site you're visiting. But even then you're still exposing yourself to significant and continuously evolving risk when you run around clicking things online. And the worst possible thing you can do in this situation is install more functionality, which gives more parties more access to the giant stack of assumptions you're making just by using a web browser. The best possible stance is to have as few parties as possible with access to your particular dumpster. And that means installing as few highly vetted add-ons as possible. More

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.

Episodes (532)

Why I Think Karpathy is Wrong on the AGI Timeline

Karpathy is confusing LLM limitations with AI system limitations, and that makes all the difference.

Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.

20 Oct 9min

Novelty Exploration vs. Pattern Exploitation

How going from exploration to exploitation can help you as both a consumer and creator of everything.

Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.

15 Oct 3min

Magnifying Time

Some thoughts on how novelty and attention magnify the time that we have.

Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.

14 Oct 6min

A Conversation With Harry Wetherald, Co-Founder & CEO at Maze

➡ Stay Ahead of Cyber Threats with AI-Driven Vulnerability Management with Maze: https://mazehq.com/

In this conversation, I speak with Harry about how AI is transforming vulnerability management and application security. We explore how modern approaches can move beyond endless reports and generic fixes, toward real context-aware workflows that actually empower developers and security teams.

We talk about:

The Real Problem in Vulnerability Management: Why remediation, not just prioritization, remains the toughest challenge, and how AI can help bridge the gap between vulnerabilities and the developers who need to fix them.

Context, Ownership, and Velocity: How linking vulnerabilities to the right applications and teams inside their daily tools (like GitHub) reduces friction, speeds up patching, and improves security without slowing developers down.

AI Agents and the Future of Security: Why we should think of AI agents as "extra eyes and hands," and how they're reshaping everything from threat detection to system design, phishing campaigns, and organizational defense models.

Attackers Move First: How attackers are already building unified world models of their targets using AI, and why defenders need to match (or exceed) this intelligence to stay ahead.

From Days to Minutes: Why the tolerance for vulnerability windows is shrinking fast, and how automation and AI are pushing us toward a future where hours, or even minutes, make the difference.

Subscribe to the newsletter at: https://danielmiessler.com/subscribe
Join the UL community at: https://danielmiessler.com/upgrade
Follow on X: https://x.com/danielmiessler
Follow on LinkedIn: https://www.linkedin.com/in/danielmiessler

Chapters:
00:00 – Welcome and Harry's Background
01:07 – The Real Problem: Remediation vs. Prioritization
04:31 – Breaking Down Vulnerability Context and Threat Intel
05:46 – Connecting Vulnerabilities to Developers and Workflows
08:01 – Why Traditional Vulnerability Management Fails
10:29 – Startup Lessons and The State of AI Agents
13:26 – DARPA's AI Cybersecurity Competition
14:29 – System Design: Deterministic Code vs. AI
16:05 – How the Product Works and Data Sources
18:01 – AI as "Extra Eyes and Hands" in Security
20:20 – Breaking Barriers: Rethinking Scale with AI
23:22 – Building World Models for Defense (and Attack)
25:22 – Attackers Move Faster: Why Context Matters
27:04 – Phishing at Scale with AI Agents
31:24 – Shrinking Windows of Vulnerability: From Days to Minutes
32:47 – What's Next for Harry's Work
34:13 – Closing Thoughts

Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.

22 Sep 35min

A Conversation With Grant Lee, Co-Founder & CEO at Gamma

➡ Upgrade your presentations with Gamma, the best AI presentation maker: https://gamma.app

In this conversation, I speak with Grant, co-founder of Gamma, about how their platform is transforming presentations and idea-sharing. Instead of starting with slides, Gamma helps you focus on the story first, then builds the visuals, structure, and delivery around it using AI.

We talk about:

From Slides to Stories: Why presentations should begin with narrative flow and core ideas, not pre-existing slide templates. Gamma enables creators to design around the message rather than being trapped by the format.

AI as Your Presentation Partner: How Gamma acts like a personal design expert, adjusting layouts, visuals, and style in real time, similar to having a world-class presentation coach and designer by your side.

Idea Propagation Beyond Slides: Why Gamma isn't just about "slides," but about propagating ideas in the right medium: presentations, video overlays, mobile-first content, or even context-based imagery and clips.

The Future of Gamma: Where the platform is headed in the next few years, and how AI-driven storytelling will redefine the way we share ideas across industries.

Subscribe to the newsletter: https://danielmiessler.com/subscribe
Join the UL community: https://danielmiessler.com/upgrade
Follow on X: https://x.com/danielmiessler
Follow on LinkedIn: https://www.linkedin.com/in/danielmiessler

Chapters:
00:00 – Introduction to Unsupervised Learning
00:17 – Welcome Grant and Gamma's Background
01:31 – AI Trends Driving Presentation Innovation
03:20 – Story First: Rethinking Workflow Beyond Slides
04:29 – Building Narrative Flow Before Design
07:42 – Gamma as an AI Presentation Partner
09:43 – What Gamma Does Differently from Other Tools
12:27 – Idea Propagation: Matching Message, Medium, and Audience
13:23 – Enhancing Presentations with Images, Clips, and Context
15:15 – Current Graphics and Animation Options
17:03 – Most Popular and Favorite Features in Gamma
18:05 – What's Coming Soon in Gamma
19:08 – The Future of Idea Propagation with AI
20:46 – Where to Learn More About Gamma
21:21 – Closing Thoughts

Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.

18 Sep 21min

UL NO. 497: STANDARD EDITION | More NPM Shenanigans, I Open Sourced Kai, Blood Work Results, Finding Vulns in a 10-line Prompt, and more...

Read this episode online: https://newsletter.danielmiessler.com/p/ul-497

Subscribe to the newsletter at: https://danielmiessler.com/subscribe
Join the UL community at: https://danielmiessler.com/upgrade
Follow on X: https://x.com/danielmiessler
Follow on LinkedIn: https://www.linkedin.com/in/danielmiessler
Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.

10 Sep 37min

UL NO. 496: STANDARD EDITION | New Video on Building my Personal AI System, Anthropic Reveals One-person Hacking Company using Claude, Pentagon Says China Keeps Penetrating, and more...

Read this episode online: https://newsletter.danielmiessler.com/p/ul-496

Personal AI Video I'm so excited about

Subscribe to the newsletter at: https://danielmiessler.com/subscribe
Join the UL community at: https://danielmiessler.com/upgrade
Follow on X: https://x.com/danielmiessler
Follow on LinkedIn: https://www.linkedin.com/in/danielmiessler
Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.

5 Sep 1h 2min

A Conversation with Michael Brown About Designing AI Systems

In this episode of Unsupervised Learning, I sit down with Michael Brown, Principal Security Engineer at Trail of Bits, to dive deep into the design and lessons learned from the AI Cyber Challenge (AIxCC). Michael led the team behind Buttercup, an AI-driven system that secured 2nd place overall.

We discuss:
- The design philosophy behind Buttercup and how it blended deterministic systems with AI/ML
- Why modular architectures and "best of both worlds" approaches outperform pure LLM-heavy designs
- How large language models performed in patch generation and fuzzing support
- The risks of compounding errors in AI pipelines, and how to avoid them
- Broader lessons for applying AI in cybersecurity and beyond

If you're interested in AI, security engineering, or system design at scale, this conversation breaks down what worked, what didn't, and where the field is heading.

Subscribe to the newsletter at: https://danielmiessler.com/subscribe
Join the UL community at: https://danielmiessler.com/upgrade
Follow on X: https://x.com/danielmiessler
Follow on LinkedIn: https://www.linkedin.com/in/danielmiessler
Become a Member: https://danielmiessler.com/upgrade
See omnystudio.com/listener for privacy information.

22 Aug 50min
