The AppSec Model Was Built for a World That's Disappearing.
AI code generation broke the math. Clover Security is building for what comes next.
Welcome to The Cybersecurity Pulse (TCP)! I’m Darwin Salazar, Head of Growth at Monad and former detection engineer in big tech. Each week, I bring you the latest security innovation and industry news. Subscribe to receive weekly updates! 📧
A Personal Note
Every now and again I come across a security vendor doing genuinely innovative and differentiated work. A breath of fresh air in a space full of copy/paste products with different logos.
I vet every TCP sponsor carefully to ensure I only put the best in front of you, so if I’m writing a deep dive on a company, know that it’s passed a high bar. Clover Security brought me back to my Product Security days at Johnson & Johnson, where design-phase security reviews and threat modeling weren’t optional best practices; they were FDA and HITRUST requirements. This is a platform that would have made life meaningfully easier back then.
I’m excited to bring you this piece. Let’s get into it.
Executive Summary
The engineering world is undergoing a structural shift. AI coding agents now handle implementation at a pace and volume that human-led security processes were never designed to match. Innovation cycles are compressing. The response from the most forward-thinking engineering orgs isn’t to scan more code faster. It’s to invest in design and architecture rigor, the phase where most risk actually originates.
Clover Security made this bet two years ago. They’ve raised $36M and built a platform of AI agents that embed security into the design phase, where PRDs are written, architecture decisions are made, and threat models should live.
This deep dive covers why the design phase has become the critical gap in most security programs, how Clover’s platform actually works, what their early customer results look like, and where the product is headed as agentic development accelerates.
The Perfect Storm
Two years ago, a senior engineer would split their week between writing code, reviewing code, debugging code, and making architectural decisions. Today, AI coding agents handle the first three. Spotify’s co-CEO said it plainly on their Q4 earnings call. The company’s best developers haven’t written a single line of code since December. That trend is only accelerating.
What’s emerging is a fundamental reprioritization across engineering orgs. The smartest teams are investing in design and architecture rigor, the phase where risk actually enters the system. Innovation cycles are compressing. A feature that took a team two sprints now ships in days. The human role in software development is shifting from writing code to designing systems, choosing infrastructure, and deciding how services interact. The design decisions that determine whether a system is secure are now the most consequential part humans still control.
And this math was broken long before AI code generation entered the picture. Product security engineers are outnumbered by developers 300-to-1 at most technology companies. CrowdStrike’s 2024 State of Application Security Report quantifies the downstream impact. Over half of major code changes don’t undergo full security reviews (50% median, 54% mean). That was before AI-generated code tripled the volume. Jerry Gamblin’s 2025 CVE data review counted 48,185 published vulnerabilities, a 21% jump over 2024, with XSS still the most common class at over 8,000 entries despite decades of tooling investment. FIRST’s 2026 forecast projects that number climbing to roughly 59,000.
Clover Security saw this coming. Founded in 2023, right as the AI coding wave began to take shape, the company raised $36M on the thesis that the entire AppSec model was pointed at the wrong phase of development.
Their bet is that AI agents embedded in design and architecture workflows are the only way to close the gap. Two years in, the market is moving in their direction.
Now they’re building for what the next two years look like.
The Founding Story
Clover’s founding story starts with a big tech engineer and a product leader at two of the largest AppSec vendors in the world, both independently arriving at the same conclusion: scanning code after it’s written was never going to be enough.
Alon Kollmann (CEO) spent 15+ years as an engineer at Microsoft and Google before taking strategic roles at Hysolate and Dazz, the ASPM company acquired by Wiz for roughly $450M.
Or Chen (CPO) spent 8 years in Unit 8200 leading technical cyber operations, then founded a startup acquired by Checkmarx. He rose to VP and built their SCA and API Security offerings from the ground up.
The two found each other the way a lot of good things start in 2022: sliding into each other’s DMs on Twitter. They had arrived at the same conclusion from opposite sides of the security stack, and connected right as ChatGPT launched and the AI coding revolution started to take shape. The timing validated their shared conviction: code generation would be automated, making design the true security chokepoint. They co-founded Clover in 2023.
Clover raised $36M across a seed led by Team8 and a Series A led by Notable Capital, with Team8 and SVCI participating. The angel roster tells the real story: Wiz co-founders Assaf Rappaport and Yinon Costica, Shlomo Kramer (Check Point, Imperva, Cato Networks), Rene Bonvanie (former CMO, Palo Alto Networks), and senior executives from Snyk, CrowdStrike, Atlassian, and Google.
You Can’t Scan Your Way Out of Insecure Design Choices
Every AppSec tool on the market today works the same way, downstream of development: wait for code to exist, then scan it. SAST, SCA, DAST, ASPM, runtime scanning all follow that pattern.
Product Security (ProdSec) operates upstream. Architecture review, threat modeling, design-phase risk assessment. Security baked into how a system is designed, not bolted on after it’s built. If you’ve worked in automotive or medical devices, none of this is new. ProdSec is table stakes in those worlds.
I interned on Ford Motor’s Red Team and spent 2.5 years doing product security for a robotic surgical system at Johnson & Johnson. When you’re threat modeling software that controls a robot performing surgery on a human being, design-phase security reviews are a non-negotiable. The gap is that most pure software companies haven’t adopted this discipline yet, and the ones that have are doing it manually, expensively, and at a coverage rate that never keeps pace with engineering velocity.
In practice, ProdSec means a security engineer sits down with a PRD or architecture doc before a single line of code is written and asks: where are the trust boundaries? What are the data flows? Where could business logic be abused? How does this feature interact with existing services? They map the design against frameworks like OWASP ASVS or STRIDE, identify threats at the architecture level, and write security requirements into the ticket.
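To make the mechanics concrete, here is a toy sketch of generating STRIDE review prompts for a single design element. The STRIDE categories are standard; the feature, field names, and helper function are hypothetical illustrations, not a real review or any vendor's format.

```python
# Standard STRIDE threat categories.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
]

# Hypothetical design element pulled from a PRD or architecture doc.
design_element = {
    "name": "external bank account linking",
    "trust_boundary": "user <-> third-party bank API",
    "data_flows": ["bank credentials", "account metadata"],
}

def stride_prompts(element):
    """Generate one review question per STRIDE category for a design element."""
    return [
        f"{cat}: how could '{element['name']}' across the boundary "
        f"{element['trust_boundary']} be abused?"
        for cat in STRIDE
    ]

for question in stride_prompts(design_element):
    print(question)
```

A human reviewer answers these questions and turns the credible threats into security requirements on the ticket; the point of the sketch is only to show how small and structured the per-element exercise is, and why it still fails to scale when multiplied across hundreds of features.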
Here’s why this matters concretely. A team builds a payments integration where users can link external bank accounts. The code is solid: encrypted connections, proper auth tokens, passes every scan. But the design never accounted for what happens when a user links an account, initiates a transfer, then unlinks the account before settlement completes. The transaction goes through with no account to claw back from. That’s not a vulnerability. It’s a business logic gap that only exists at the design layer, and no scanner on the market catches it.
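The gap described above can be reduced to a few lines. This is a deliberately simplified sketch of the flawed flow; every function and data structure here is hypothetical and exists only to show why each individual step passes review while the sequence does not.

```python
# Hypothetical payments flow illustrating the design-level logic gap.
accounts = {"acct_1": {"linked": True}}
transfers = []

def initiate_transfer(account_id, amount):
    # Code-level checks pass: the account is linked at initiation time.
    assert accounts[account_id]["linked"], "account must be linked"
    transfers.append({"account": account_id, "amount": amount, "settled": False})

def unlink_account(account_id):
    # Unlinking has no awareness of pending transfers.
    accounts[account_id]["linked"] = False

def settle(transfer):
    # The design never required re-validating the link at settlement,
    # so the transfer completes with no account to claw back from.
    transfer["settled"] = True
    return transfer

initiate_transfer("acct_1", 100)
unlink_account("acct_1")      # user unlinks before settlement
done = settle(transfers[0])   # settles anyway: a design gap, not a code bug
```

No line of this code contains a vulnerability pattern. The fix is a design requirement, something like "re-verify account linkage at settlement, or block unlinking while transfers are pending," which is exactly the kind of constraint a design-phase review writes into the ticket.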
The problem is that ProdSec requires people who understand both software architecture and threat modeling deeply enough to review designs at speed. Those people are rare. Most orgs either can’t hire them or can’t hire enough of them, which is why even mature security teams end up triaging by risk and only reviewing the highest-priority features manually.
As AI handles more implementation, design is the last human-controlled artifact. If security isn’t embedded there, there’s no checkpoint before code ships. Clover’s bet is that as AI eliminates trivial code-level vulnerabilities, the remaining risk concentrates at the logic and architecture layer.
How It Works
Clover runs eight purpose-built AI agents, each handling a specific security function but all built on the same platform core. Design risk is the guiding principle, but risk can originate at any point. A review can start from a PRD for a brand new feature just as easily as from code drifting from its original design requirements. The result is a platform that tracks design risk from the first spec through production, and catches drift long after code ships.
The Context Layer
The most differentiated thing I saw in the demo was Clover’s context engine. Two components work together here.
The Memory Agent builds and maintains a living knowledge base of your organization, split across three dimensions. Technical context covers your tech stack, infrastructure components, APIs, and data points. Business memory captures how your org makes decisions, internal glossary, and product relationships. Inferred memory surfaces connections the platform identifies across your environment over time.
The Feature Context Graph maps how a single feature connects to requirements, framework standards, code, and infrastructure. You can drill into the specific standards a feature was reviewed against, see which code repos are linked, and trace from design doc to implementation.
These two components are what make everything below possible. Without organizational context, an AI agent reviewing a design doc is just guessing. With it, the agent knows your tech stack, your policies, your architecture patterns, and how this feature connects to everything else you’ve already built.
Scenario 1. A new feature lands in your project management tool.
Your product team writes a PRD in Notion for a new instant peer-to-peer payments feature. Clover’s Discovery Agent picks it up automatically, identifies it as high-priority based on the financial data flows and regulatory surface involved, and flags it for security review.
The Design Review Agent takes over and runs a security review against your configured frameworks and threat models, whether that’s OWASP ASVS, PCI, STRIDE, or your own internal standards. Because of the context layer, Clover knows this feature interacts with your existing account linking service, handles external bank credentials, and exposes a new transaction initiation path to end users. The review reflects that specificity rather than returning generic findings.
Business logic flaws are a major focus. Or walked me through examples like logic gaps that let attackers siphon funds from a gaming platform, and arbitrage attacks on a prediction market. These are the exact categories of risk that traditional scanners miss because they have no awareness of what the feature is supposed to do, only what the code does.
At the application level, security teams manage risk posture through custom-built security models around applications, architecture, data flows, and risks. Each application view lets security teams tune prioritization sensitivity across three dimensions: Risk, Business Impact, and Depth and Complexity. You can also feed it pentest reports, and it incorporates findings into the posture view.

Scenario 2. Code ships that drifts from the original design.
Your design spec requires encryption at rest for all candidate records. A developer (or an AI coding agent) implements the database layer but skips the encryption step.
The Developer Guidance Agent integrates with GitHub, GitLab, and Bitbucket, and compares implemented code against original design specifications and PRDs. It surfaces the drift between what was designed and what was built. Traditional SAST tools see code in isolation. They’d look at this implementation and find no vulnerability pattern, no dependency flaw, no known CVE. Clover sees the intent behind the code.
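Conceptually, drift detection is a diff between design intent and what was actually built. The sketch below is my own minimal illustration of that idea, assuming a hypothetical requirements format; it is not Clover's implementation or output.

```python
# What the design spec requires for each component (hypothetical format).
design_requirements = {
    "candidate_records": {"encryption_at_rest": True, "access_logging": True},
}

# What inspection of the implementation might surface.
implemented_controls = {
    "candidate_records": {"encryption_at_rest": False, "access_logging": True},
}

def find_drift(required, implemented):
    """Return (component, control) pairs where the build diverges from the design."""
    drift = []
    for component, controls in required.items():
        built = implemented.get(component, {})
        for control, expected in controls.items():
            if built.get(control) != expected:
                drift.append((component, control))
    return drift

print(find_drift(design_requirements, implemented_controls))
# → [('candidate_records', 'encryption_at_rest')]
```

A scanner looking at the database layer alone finds nothing wrong here; the finding only exists relative to the spec, which is why the comparison has to start from the design artifact rather than the code.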
Scenario 3. Your developers are using Cursor, Codex, and Claude Code. Who’s watching?
This is the forward-looking bet. Clover’s MCP Agent provides visibility into AI-generated code, enforces organizational policies for coding agents, and monitors MCP connections across AI-driven development workflows. The Vibe Coding Agent evaluates shadow AI and vibe coding for misconfigurations, excessive permissions, and missing controls.
What makes this tangible rather than theoretical is the Agent Observability dashboard. Clover gives you granular visibility into which LLMs are being used across your environment (Claude 3.5 Sonnet, GPT-4, GPT-4-turbo in this instance), which developers are using them, lines of code written with coding agents over time, how many MCP connections are active, and where the PR blind spots are.

Most security teams today have zero visibility into what AI coding agents are generating across their org. Engineering teams are three steps ahead of security teams on AI coding agent adoption. Clover has built an observability layer that helps make the invisible visible: which models, which developers, how much code, and where the blind spots are.
Integrations and Day One Value
Clover hooks into the tools where teams already work.
The platform also functions as a contextual security chatbot within Slack, where teams can ask questions like “What auth method should I use for this service?” and get answers informed by their org’s specific context and policies.
Clover produces actionable security reviews from day one, covering framework checks, threat modeling, and architecture anti-patterns out of the box. As it ingests more documentation and observes how teams build, reviews get sharper and more tailored to your specific environment.
From Stealth to Scale
Most security startups launch publicly, spend heavily on marketing, and grind through 12-month enterprise sales cycles before landing their first logos. Clover did it backwards. They hit millions in ARR before publicly launching. No website, no press, no conference booths. Deals came through CISO networks and word of mouth, which tells you something about how the product landed with the people actually using it.
Neo4j went from 49% to 100% design review coverage. CISO David Fox: “Manual review covered 49% of tickets, but with Clover’s automation we hit 100%.” (Full case study)
Lemonade cut review time from roughly two hours to fifteen minutes. Before Clover, Lemonade triaged reviews by perceived risk because the volume was too high to cover everything. Now they review all documents, not just the ones they think are risky. (Full case study)
Virgin Money, one of the UK’s largest retail banks serving 6.6 million customers, achieved 4x faster design reviews. But the bigger story is what changed qualitatively. Before Clover, reviews depended on individual interpretation across hundreds of policy controls. Now every design is reviewed against the same standard, every time. Head of Security Solutions Gordon Moon: “Clover turns generic threats into design-specific threats so our teams understand what really matters in that system.” (Full case study)
The fact that CISOs at Neo4j, Lemonade, and Virgin Money are willing to go on record with specific metrics at this stage is a strong signal for both company direction and product validation. Excited to see how these numbers evolve as deployments scale.
Closing Thoughts
I’ve covered a lot of security startups through TCP. Very few make me stop and rethink how an entire category should work.
Where the bet gets bigger. AI coding agents are already writing production code across thousands of engineering orgs. Most security teams have zero visibility into what those agents are generating, which models they’re using, or whether the output aligns with internal policies. That’s not a future problem.
Clover’s MCP Agent and Vibe Coding Agent are newer than the design review core, but they’re pointed at exactly the right problem. The agent observability dashboard alone is the kind of visibility security leaders will be demanding within 12 months. Clover is building it now.
Who should care. If you’re a CISO or Head of ProdSec at a company with 50+ engineers and you’re still doing design reviews manually (or not doing them at all), Clover should be on your shortlist. If your team is drowning in review requests and triaging by gut because you can’t cover everything, this is built for you.
Product Security as a discipline has historically been locked behind scarce human expertise. The security architects who can review a design doc, run a threat model, and identify business logic flaws before code is written are among the hardest hires in the industry. Most companies either can’t find them or can’t afford enough of them. If Clover’s agents can replicate that thinking at scale, they’re not just building a product. They’re making ProdSec accessible to teams that could never afford to staff it the way it deserves.
Having done this work by hand, I can tell you: this is the platform I wish I’d had.
I've shown you the thesis and where I think things are headed. Let them show you the product.