Why AI Pipelines Are Forcing a Rethink of Security and Developer Identity

Paul Calatayud · Jun 17, 2025

Platforms are securing models and runtime agents. But the real risk may start earlier—in the behavior of the coder, not the code.

Palo Alto Networks’ recent announcement of Prisma AIRS has pushed the conversation around AI security forward. But as platforms move to protect prompts, models, and agents, a critical question remains overlooked:

Who—or what—is writing your code in the first place?

From Runtime to Authorship: Where Risk Actually Begins

As AI-native development becomes the norm—code co-authored by LLMs, pipelines governed by agents, and architecture shaped by prompts—the attack surface isn’t just expanding. It’s shifting.

At Archipelo, we define a developer as anyone—or anything—that contributes to code.
That includes human engineers, LLMs, autonomous agents, and hybrid workflows.

In today’s pipelines, the boundary between AI and human authorship is blurring—and with it, the security model must evolve.

As the former Global CISO at Palo Alto Networks, I saw firsthand how security architectures matured to defend runtime, identity, and infrastructure.

But AI development isn’t just another layer—it’s a shift in where trust begins.

And here’s what we’re seeing now:

In an era of AI coding, developers, both human and AI, are an emerging and overlooked attack surface.

And without visibility into who’s contributing what, how, and when, teams are flying blind to risks across the SDLC—before, during, and after code is committed.

Securing Runtime Isn’t Enough Anymore

Recently, Palo Alto Networks announced Prisma AIRS, a platform for securing the AI lifecycle, from prompt to model to agent. Backed by their acquisition of Protect AI, the move validates something we at Archipelo have believed for years:

AI is changing how software is built.
And trust in the pipeline cannot begin at runtime.

The Invisible Gap: Developer Behavior (Human + AI)

Most platforms focus on what happens after code is written:
Scanning artifacts. Validating runtime behavior. Enforcing policy across infrastructure.

But what about how that code was created?

  • Who actually wrote this function?
  • Was it generated by a model?
  • Was it reviewed by a human?
  • Did it introduce divergence from prior system behavior?

These are not runtime questions.
They’re questions of authorship, accountability, and developer behavior.

And in hybrid pipelines where AI and humans collaborate at the code layer, ignoring that behavioral layer leaves teams exposed.
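
To make the authorship questions above concrete, here’s one minimal way a team could start probing them today: inspect Git commit trailers for signs of AI co-authorship. This is a sketch, not a product feature. It assumes a local Git repository and the conventional Co-authored-by trailer; the watchlist of tool identities below is illustrative, since vendors differ in whether and how their tools sign commits.

```python
# A minimal sketch: flag commits whose trailers suggest AI co-authorship.
# Assumes a local Git repo and the "Co-authored-by:" trailer convention.
import subprocess

# Hypothetical watchlist of identities AI coding tools might sign with.
AI_COAUTHOR_HINTS = ("copilot", "chatgpt", "claude", "cursor", "aider")

def commit_coauthors(sha: str) -> list[str]:
    """Return the Co-authored-by trailer values for a commit."""
    body = subprocess.run(
        ["git", "log", "-1", "--format=%B", sha],
        capture_output=True, text=True, check=True,
    ).stdout
    return [
        line.split(":", 1)[1].strip()
        for line in body.splitlines()
        if line.lower().startswith("co-authored-by:")
    ]

def looks_ai_coauthored(sha: str) -> bool:
    """Heuristic: does any co-author identity match a known AI tool?"""
    return any(
        hint in coauthor.lower()
        for coauthor in commit_coauthors(sha)
        for hint in AI_COAUTHOR_HINTS
    )

if __name__ == "__main__":
    print(looks_ai_coauthored("HEAD"))
```

A heuristic like this only catches tools that announce themselves. The larger point stands: authorship signals already exist in the pipeline, and most teams never collect them.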

DevSPM: Observability for the Developer Layer

At Archipelo, we call this Developer Security Posture Management (DevSPM). It provides visibility into:

  • Authorship – who or what contributed
  • Actions – what changed, and how
  • Behavior – how contributions evolve over time in hybrid teams

This isn’t about scanning code.

It’s about understanding the developer pipeline itself—before code is committed, shipped, or deployed.
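
To make those three dimensions concrete, here’s a minimal sketch of what a developer-layer provenance event might look like. The schema, identities, and review check below are illustrative assumptions, not Archipelo’s actual data model.

```python
# A minimal sketch of a provenance event covering the three DevSPM dimensions.
# The schema is an illustrative assumption, not Archipelo's actual data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ContributorKind(Enum):
    HUMAN = "human"
    LLM = "llm"
    AGENT = "agent"

@dataclass
class DeveloperEvent:
    # Authorship: who or what contributed
    contributor_id: str           # e.g. an engineer's identity or a tool name
    contributor_kind: ContributorKind
    # Actions: what changed, and how
    action: str                   # e.g. "generate", "edit", "review", "commit"
    artifact: str                 # e.g. a file path or function name
    # Behavior: when, so contributions can be traced over time
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Hypothetical example: an LLM generates a function, then a human reviews it.
events = [
    DeveloperEvent("gpt-4o", ContributorKind.LLM, "generate", "src/auth.py::login"),
    DeveloperEvent("alice", ContributorKind.HUMAN, "review", "src/auth.py::login"),
]

# A simple behavioral question: did a human review everything the AI generated?
generated = {e.artifact for e in events
             if e.contributor_kind is not ContributorKind.HUMAN
             and e.action == "generate"}
reviewed = {e.artifact for e in events
            if e.contributor_kind is ContributorKind.HUMAN
            and e.action == "review"}
print("Unreviewed AI output:", generated - reviewed)
```

Even a toy record like this can answer questions no artifact scanner asks, such as whether a human reviewed everything an AI generated.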

DevSPM is the layer traditional ASPM, CNAPP, or SIEM platforms weren’t built to see.

And we believe it’s the missing trust layer in secure-by-design architectures.

What Most AI Security Platforms Are Still Missing

While vendors are starting to acknowledge AI development risks, most are still focused after the fact—on runtime models, prompts, interfaces, and endpoints.

But what’s still missing?
Visibility into the authorship layer—human and AI alike.

Without it, teams are flying blind to who’s contributing what, how, and when. And that’s a problem.

Developer Trust Is the New Security Perimeter

We’re not the only ones seeing the risk move upstream. But we are one of the few building observability where it matters most: before code is committed.

Because in a world where your AI can write your code, run your tests, and deploy itself…

The trust layer has to start at the source.
That’s exactly what DevSPM is built for.

Join the Conversation

If these risks resonate with your team or strategy, I invite you to join our next closed-door CISO Briefing at:

https://www.blindspotbriefings.com

It’s not a demo.

It’s a private conversation about what’s coming next in developer + AI security.

About the Author:

Paul Calatayud is CISO & Chief Strategy Officer at Archipelo.

Get Started Today

Archipelo helps organizations secure the developer layer, increasing software security and trust for the business.

Try Archipelo Now