
AI & the Fifth Domain of Warfare: A Talk with Eyal Balicer, Cybersecurity Innovator & Thought Leader
By Michael Matias, CEO of Clarity and Forbes 30 Under 30 alum

Cybersecurity has entered a new domain—literally. As Eyal Balicer put it in our recent conversation: “Cyberspace is now the fifth domain of warfare.” But in this domain, the battleground isn’t just code. It’s control.

Eyal brings a rare vantage point to the AI-cyber nexus—he’s held senior cybersecurity roles in the Israeli government, Fortune 100 companies like Citi, and top-tier venture capital. Our discussion centered on a growing truth: with the rise of agentic systems—AI entities that can act, decide, and evolve—the mission of cybersecurity is changing.

“We are shifting from a world where intelligence was a scarce resource to one where that is not necessarily the case; true agency is becoming the new elusive driver of prosperity and growth,” Eyal explained. “The systems we are now securing are not just automated—they are autonomous, highly competent, and opaque. Traditional defenses cannot keep up.”

It’s a shift I’ve seen firsthand at Clarity, where we build proactive defenses against deepfakes and AI-generated phishing. Just as Shahar Peled told me about agentic AI revolutionizing offensive testing, Eyal sees these agents redefining global threat models. The challenge isn’t identifying known threats—it’s safeguarding systems that learn, adapt, and act independently.

And that requires a new security architecture.

“An AI agent can have fluid permissions, context-based roles, and evolving identity,” he told me. “Conventional IAM just does not cut it anymore.” In that sense, Balicer echoes voices like Ron Nissim and Alon Jackson, who both called for a redefinition of identity management in the AI age.

But Eyal’s view is broader still: it is geopolitical. He sees cybersecurity not just as a business enabler, but as a pillar of national resilience. From financial systems to defense infrastructure, the stakes have never been higher. “The future will require autonomous cybersecurity—not just automated, pre-defined playbooks, but real-time, adaptive agents who reason and defend at the edge, with varying degrees of human intervention.”

We discussed how the regulatory map only adds to the complexity. “The fragmentation across jurisdictions makes cross-border cybersecurity brittle,” Eyal warned. His point was clear: the only viable strategy is proactive, agentic, adaptable security. Not static controls. Not red-alert dashboards.

The recent acquisition of Wiz by Google, he said, is just the beginning. “This will eventually lead to an entrepreneurial Cambrian explosion in cybersecurity,” Eyal predicted. And much like Dorit Dor told me, the organizations that survive will be those that move fast—and let AI lead the charge.

As we closed, his advice was blunt: “Ignoring this revolution is not an option. AI evolves daily. Your security safeguards and controls should not lag behind.”

My takeaway? Eyal isn’t just talking about new tools—he’s laying out a new doctrine.

Agentic AI isn’t coming. It’s here. And if we don’t secure it now, we risk losing control of the systems that already make decisions for us.

The future of cyber isn’t just proactive. It’s autonomous. And it’s already reshaping the balance of power.
14 Sep 42min

Unlocking GRC Potential with AI: A Conversation with Yair Kuznitsov, CEO of Anecdotes
By Michael Matias, CEO of Clarity and Forbes 30 Under 30 alum

The intersection of Governance, Risk, and Compliance (GRC) and artificial intelligence (AI) marks one of today’s most significant business transformations. In my recent conversation with Yair Kuznitsov, an expert in AI and GRC, it became clear that GRC’s role within enterprises has fundamentally shifted, driven by rapid AI adoption.

Kuznitsov, whose team spent the past year on rigorous AI research in the GRC domain, highlighted the critical role of proprietary data in achieving enterprise-grade accuracy. “It’s very difficult to create AI that addresses specific use cases with high accuracy without training it on highly specific and vertical datasets,” he explained. Proprietary data isn’t just helpful—it’s essential for the trust enterprises demand.

Historically, GRC was seen as a gatekeeper, slowing innovation with rigid compliance requirements. Today, however, modern GRC teams are becoming enablers of innovation. As Kuznitsov put it: “Historically, GRC was a gatekeeper slowing innovation. Today, modern GRC teams enable innovation, ensuring trust remains intact.” This shift reflects the rising complexity created by global expansion, cloud adoption, and the proliferation of SaaS tools.

The scale of risk is staggering. Gartner projects that by 2025, 85% of enterprises will operate mainly in the cloud, challenging traditional compliance frameworks. GRC functions must now assess regulations rapidly while supporting swift, secure market entry. AI is uniquely positioned to meet this demand—but only if accuracy reaches the 80–90% confidence enterprises require. That confidence, Kuznitsov emphasized, depends on training AI with proprietary, vertical datasets.

At Clarity, we’ve seen firsthand how tailored AI models dramatically strengthen cybersecurity. AI doesn’t just upgrade compliance workflows—it transforms GRC from a reactive bottleneck into a proactive driver of innovation.

Kuznitsov also underscored how traditional compliance, rooted in static documents, has become chaotic in the face of globalization and fast-paced tech adoption. AI addresses this chaos, automating assessments, policy checks, and risk monitoring at speeds previously unimaginable. Here again, the differentiator is proprietary data. By grounding AI in enterprise-specific datasets, organizations secure the accuracy needed to maintain trust. As Kuznitsov noted, “Vertical AI solutions achieve high value by providing tailored accuracy for specific enterprise use cases.”

The lesson is clear: enterprises that embrace AI-driven GRC today will not just adapt; they’ll thrive. The evolution from passive gatekeeping to active enabling is no longer optional—it’s essential. Those that ignore this transformation risk being left behind in an increasingly complex regulatory landscape.

Enterprises must urgently rethink their approach to GRC. The AI era demands dynamic, proactive, and precise compliance strategies, rooted in proprietary data and vertical AI solutions. The choice is stark: adopt AI-driven GRC to accelerate innovation and maintain trust, or remain stuck in outdated practices and growing risk.
14 Sep 26min

Cybersecurity’s AI Moment: A Conversation with Ron Peled, Founder of Sola
By Michael Matias, CEO of Clarity and Forbes 30 Under 30 alum

We’re not easing into the AI era—we’ve been thrown into it. That’s how Ron Peled, founder of Sola and former CISO at LivePerson, described the current moment: “The Big Bang already happened. Now we’re just trying to contain the blast.”

Ron’s clarity around AI’s impact on cybersecurity is jarring—and refreshing. He doesn’t talk about threat vectors or dashboards. He talks about control. About complexity. About what breaks when the world adopts generative AI faster than the security stack can respond.

Like many of the leaders I’ve spoken with—Dorit Dor at Check Point, Tsion Gonen at Protego, Elik Etzion at Elron—Ron sees AI as both threat and tool. But what makes his voice unique is his insistence on simplicity. “The era of bloated, ‘luxury’ cybersecurity products is over,” he said. “What teams need now is focus. Precision. Clean UX. Solutions that don’t overwhelm—just work.”

That perspective echoes something Tom Mes told me about CISOs today: they don’t have the luxury of managing theoretical risk. Their job is to enable the business without drowning in alerts or friction. In that sense, Sola’s approach reflects a growing pattern—cybersecurity as a minimalist, embedded experience.

But Ron takes it further. He doesn’t just want cleaner tools. He wants community-based security. “Everyone talks about shared defense,” he said. “Almost no one builds for it.” He sees this as the next evolution—not more tech, but more collective action. A model that looks more like a neighborhood watch and less like a gated compound.

It’s a mindset I’ve come to share at Clarity. Whether we’re tackling deepfakes or phishing automation, the organizations moving fastest are the ones who build for people, not just systems. The ones who remember that cybersecurity is, at its core, a coordination challenge.

Ron also spoke to the evolving role of the CISO. No longer just a gatekeeper, the modern security leader needs to be what he calls “a productively paranoid operator”—someone who sees what’s coming and builds guardrails without becoming the roadblock.

That term stuck with me: productively paranoid.

Because that’s exactly what AI requires. The attacks are faster. The tools are more accessible. And the stakes—for privacy, for resilience, for trust—have never been higher.

As our conversation wrapped, Ron returned to the urgency. “Organizations don’t want partial solutions,” he said. “They want defenses that move at the speed of the threats.”

In that sense, the mission is simple: Build security tools that scale with AI, not lag behind it. Architect for clarity, not complexity. And never forget—the Big Bang already happened.

The question now is: who’s ready to build the shield?

About Michael Matias:

Michael Matias is the CEO and Co-Founder of Clarity, an AI-powered cybersecurity startup backed by venture capital firms including Bessemer Venture Partners and Walden Catalyst. Clarity develops advanced AI technologies protecting organizations from sophisticated phishing attacks and AI-generated social engineering threats, including deepfakes. Before founding Clarity, Matias studied Computer Science with a specialization in AI at Stanford University and led cybersecurity teams in Unit 8200 of the Israel Defense Forces. Forbes Israel recognized him early on, naming him to the exclusive 18Under18 list in 2013 and the Forbes 30Under30 list thereafter. Matias authored the book Age is Only an Int and hosts the podcast 20MinuteLeaders.
4 Sep 1h

The New Identity Crisis: A Conversation with Avihay Nathan, SVP of AI at CyberArk
In the age of AI agents, cybersecurity is shifting from focusing on identity to addressing agency. Autonomous agents, which act and reason like humans but operate at machine speed, are being created on the fly. Traditional identity management tools—like user directories and group policies—are ill-equipped to handle these ephemeral, unpredictable entities that can take action and disappear before a human can even react.

CyberArk’s SVP and Head of AI, Data & Research, Avihay Nathan, describes this as an unprecedented challenge. His team is tackling it with a three-part framework:

- Secure from AI: defending against new AI-driven threats.
- Secure with AI: using AI to augment human defenders and reduce alert fatigue.
- Secure of AI: protecting organizations from the AI systems they themselves are deploying.

Many companies are overwhelmed by the rapid adoption of agents without understanding what data or systems these agents can access. This creates a trust crisis, and they are now looking to security vendors for solutions.

CyberArk is mapping out a new agent lifecycle, from discovery (how many agents have been spun up?) to observability, access control, behavior monitoring, and governance. The key insight is that securing these agents requires understanding their context: what data they touch, what tasks they perform, and why they act in a certain way. An agent's behavior is often "zero-shot," meaning it can act without a history, so context is the only way to anchor and secure its actions.

To build this new vision, CyberArk underwent its own transformation, shifting from a traditional company to an AI-native startup mindset. This involved creating a centralized AI and data group with full ownership and educating the entire organization on the importance of data.

Avihay believes the proliferation of agents will continue as companies prioritize productivity. The new security promise is not just to block threats, but to enable innovation—helping organizations adopt these powerful new technologies both confidently and safely.
3 Sep 37min

AI and the End of Alert Fatigue: A Conversation with Tsion (TJ) Gonen, Cybersecurity Expert
By Michael Matias, CEO of Clarity and Forbes 30 Under 30 alum

For decades, cybersecurity vendors have armed defenders with dashboards filled with red alerts. But they rarely delivered solutions. As Tsion (TJ) Gonen put it in our recent conversation: “97% of tools just showed you a red screen that said, basically, ‘you suck.’” No remediation. No action. Just fatigue.

That paradigm is ending—and AI is the catalyst.

Gonen, a veteran cybersecurity leader and founder of Protego Labs (acquired by Check Point), has spent three decades watching the industry wrestle with inefficiency. Today, he sees a profound shift. “Security teams don’t want more tools,” he told me. “They want outcomes. They want closed loops.”

This point echoes what Dorit Dor, CTO of Check Point, told me: AI isn’t just another layer of detection. It must act—autonomously and in real time. At Clarity, we see this daily as we intercept deepfakes and AI-generated phishing attacks that move too fast for human intervention.

But even before the shift to AI, Gonen argues, the security industry was already in trouble. Too much noise. Not enough signal. “Cybersecurity reached a tipping point even before AI,” he said. “People began to question if the whole model was working.”

AI, he believes, doesn’t just enhance existing tools—it reshapes the game. From reducing product development costs to enabling automated response, AI democratizes defense the same way it democratized attack. “The ability to close loops automatically is what makes AI transformational,” he said.

It’s a point that also came up in my conversation with Shahar Peled of Terra Security. Shahar talked about replacing manual pentesting with agentic AI. Gonen’s vision is broader: don’t just find issues—resolve them, immediately and autonomously.

Yet many founders, he warns, still fall into the same trap: mistaking incremental tech improvements for true breakthroughs. “Real differentiation isn’t just in the tech,” Gonen said. “It’s in how well you integrate into operational workflows.”

This distinction—between technology and execution—reminds me of a view Gonen shares with leaders like Tom Mes: the CISO’s job has become impossibly complex. What they need now isn’t another blinking dashboard. It’s simplicity. Precision. And trust.

Looking ahead, Gonen doesn’t expect a flood of brand-new threat categories. Instead, he sees opportunity in rethinking how we solve familiar ones. “You don’t have to invent new problems to build great companies,” he said. “You just have to solve the old ones better—with AI.”

The future, he believes, belongs to startups that deliver true operational integration and seamless user experience—not flashy tech for its own sake. His call to entrepreneurs: don’t bolt AI onto your platform. Build with it from day one.

My biggest takeaway? The next generation of cybersecurity leaders won’t be judged by how many alerts they generate. They’ll be judged by how many threats they prevent—automatically.

We’ve reached a breaking point. Organizations that embrace AI as a core strategy—not just a feature—will define the next era of cybersecurity.

About Michael Matias:

Michael Matias is the CEO and Co-Founder of Clarity, an AI-powered cybersecurity startup backed by venture capital firms including Bessemer Venture Partners and Walden Catalyst. Clarity develops advanced AI technologies protecting organizations from sophisticated phishing attacks and AI-generated social engineering threats, including deepfakes. Before founding Clarity, Matias studied Computer Science with a specialization in AI at Stanford University and led cybersecurity teams in Unit 8200 of the Israel Defense Forces. Forbes Israel recognized him early on, naming him to the exclusive 18Under18 list in 2013 and the Forbes 30Under30 list thereafter. Matias authored the book Age is Only an Int and hosts the podcast 20MinuteLeaders.
3 Sep 46min

The Human Zero Day Series | Ep1164: Shirin Anlen: Building Trust in AI Media
Shirin Anlen traces her path from interactive storytelling to safeguarding human‑rights evidence at WITNESS. She explains how today’s scale, personalization, and ease of manipulation fuel “reality apathy,” empowering leaders to dismiss inconvenient truths as “AI.” Beyond any single tool, she argues for equitable standards, privacy‑aware provenance, usable detection, and media literacy that fits real newsroom and fact‑checking workflows—especially outside well‑resourced markets. The episode challenges tech and policy teams to treat trust as infrastructure and to involve the global majority in designing it.
1 Sep 21min

The Human Zero Day Series | Ep1163: Benjamin Corll: Human Risk in Cybersecurity
With decades in cybersecurity, Benjamin Corll has seen threat landscapes evolve from simple antivirus battles to AI-driven social engineering. For Corll, every breach traces back to people—both as the strongest defense and the weakest link. In this conversation, he unpacks the persistence of ransomware and business email compromise, the rise of AI-assisted fraud, and why criminals increasingly “log in” instead of “hack in.” He explains why defense in depth must integrate technology with human training, process checkpoints, and zero trust principles. Most importantly, Corll shares how to make awareness programs relevant, engaging, and role-specific—turning employees into active sensors who strengthen security culture from within.
27 Aug 32min

The Human Zero Day Series | Ep1162: Tal Hassner: Rethinking Deepfake Defense
From making people disappear in videos in the late 1990s to shaping AI and vision systems at Intel, Tal Hassner, Chief Scientist for Computer Vision, has watched synthetic media evolve from lab curiosities to global challenges. In this conversation, he dismantles the idea that “is it fake?” is the central question for security. Instead, he lays out a three-pillar approach—provenance, attribution, and integrity—that can outlast the arms race between deepfake creators and detectors. Drawing on decades of research and product experience, Hassner explains why harmful content remains harmful regardless of its origin, and why enterprises must shift from chasing fakes to building trust into their systems from the ground up.
21 Aug 41min