Platforms are securing models and runtime agents. But the real risk may start earlier—in the behavior of the coder, not the code.
Inspired by Palo Alto Networks’ recent announcement of Prisma AIRS, the conversation around AI security is evolving. But as platforms move to protect prompts, models, and agents, a critical question remains overlooked:
Who—or what—is writing your code in the first place?
As AI-native development becomes the norm—code co-authored by LLMs, pipelines governed by agents, and architecture shaped by prompts—the attack surface isn’t just expanding. It’s shifting.
At Archipelo, we define a developer as anyone—or anything—that contributes to code.
That includes human engineers, LLMs, autonomous agents, and hybrid workflows.
In today’s pipelines, the boundary between AI and human authorship is blurring—and with it, the security model must evolve.
As the former Global CISO at Palo Alto Networks, I saw firsthand how security architectures matured to defend runtime, identity, and infrastructure.
But AI development isn’t just another layer—it’s a shift in where trust begins.
And here’s what we’re seeing now:
In an era of AI coding, developers, both human and AI, are an emerging and overlooked attack surface.
And without visibility into who’s contributing what, how, and when, teams are flying blind to risks across the SDLC—before, during, and after code is committed.
Last week, Palo Alto Networks announced Prisma AIRS, a platform for securing the AI lifecycle—from prompt to model to agent. Backed by their acquisition of Protect AI, this move is a validation of something we at Archipelo have believed for years:
AI is changing how software is built.
And trust in the pipeline cannot begin at runtime.
Most platforms focus on what happens after code is written:
Scanning artifacts. Validating runtime behavior. Enforcing policy across infrastructure.
But what about how that code was created? Who, or what, wrote it? How was it generated, and when did it enter the pipeline?
These are not runtime questions.
They’re questions of authorship, accountability, and developer behavior.
And in hybrid pipelines where AI and humans collaborate at the code layer, ignoring that behavioral layer leaves teams exposed.
At Archipelo, we call this Developer Security Posture Management (DevSPM). It provides visibility into who (or what) is contributing code, how that code was created, and when it enters the pipeline.
This isn’t about scanning code.
It’s about understanding the developer pipeline itself—before code is committed, shipped, or deployed.
DevSPM is the layer traditional ASPM, CNAPP, or SIEM platforms weren’t built to see.
And we believe it’s the missing trust layer in secure-by-design architectures.
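To make the authorship question concrete, here is a minimal, hypothetical sketch in Python. It is not Archipelo's product or method: it simply walks recent git history and flags commits that carry an AI co-authorship trailer, assuming the coding assistant records a Co-authored-by line at all. The classify_commits helper and the AI_HINTS marker list are illustrative placeholders.

```python
import subprocess

# Illustrative markers only: how (and whether) an AI assistant records
# co-authorship varies by tool, so treat this list as a placeholder.
AI_HINTS = ("copilot", "chatgpt", "cursor", "codewhisperer")

def classify_commits(repo=".", limit=50):
    """Label recent commits as human- or AI-co-authored based on commit trailers."""
    # Unit (\x1f) and record (\x1e) separators keep parsing unambiguous.
    fmt = "%H%x1f%an%x1f%B%x1e"  # hash, author name, full commit body
    out = subprocess.run(
        ["git", "-C", repo, "log", f"-{limit}", f"--format={fmt}"],
        capture_output=True, text=True, check=True,
    ).stdout

    results = []
    for record in out.split("\x1e"):
        record = record.strip()
        if not record:
            continue
        sha, author, body = record.split("\x1f", 2)
        # Collect Co-authored-by trailers from the commit message body.
        trailers = [line for line in body.splitlines()
                    if line.lower().startswith("co-authored-by:")]
        ai_coauthored = any(hint in t.lower() for t in trailers for hint in AI_HINTS)
        results.append({"sha": sha[:10], "author": author, "ai_coauthored": ai_coauthored})
    return results

if __name__ == "__main__":
    for row in classify_commits():
        print(row)
```

Even this crude, post-commit signal surfaces the question runtime tooling skips: which changes had an AI co-author, and which did not. A DevSPM approach captures that signal earlier, before the code is committed, along with the surrounding developer behavior.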
While vendors are starting to acknowledge AI development risks, most are still focused on what happens after the fact: runtime models, prompts, interfaces, and endpoints.
But what’s still missing?
Visibility into the authorship layer—human and AI alike.
Without it, teams are flying blind to who’s contributing what, how, and when. And that’s a problem.
We’re not the only ones seeing the risk move upstream. But we are one of the few building observability where it matters most: before code is committed.
Because in a world where your AI can write your code, run your tests, and deploy itself…
The trust layer has to start at the source.
That’s exactly what DevSPM is built for.
If these risks resonate with your team or strategy, I invite you to join our next closed-door CISO Briefing at:
https://www.blindspotbriefings.com
It’s not a demo.
It’s a private conversation about what’s coming next in developer + AI security.
About the Author:
This article was authored by Paul Calatayud, CISO & Chief Strategy Officer at Archipelo.
Archipelo helps organizations ensure developer security, resulting in increased software security and trust for their business.
Try Archipelo Now