AI Security Has Two Perimeters: The Model and the Coder

Matthew Wise · Nov 18, 2025

Hyperscalers lead model security because they control the research, training pipelines, and AI development environments.

Cybersecurity platforms lead enterprise defense because they control detection, runtime surfaces, and post-deployment correlation.

But neither group has addressed the upstream attack surface that now determines software security: the coder—human or AI—who creates it.

The Blind Spot at the Center of AI Security

Over the last decade, two sectors accumulated significant influence in global security:

  • Hyperscalers like Google, Microsoft, AWS, and OpenAI, with deep formal AI research.
  • Cybersecurity platforms like Palo Alto Networks, CrowdStrike, Wiz, and SentinelOne, with dominant enterprise GTM and large post-deployment control planes.

Each group is positioned to lead in a different direction.

Each has a structural strength.

But both inherited a security model from a world that no longer exists.

The industry still treats security as something that begins after code is written:

  • after the commit
  • after the build
  • after the deploy
  • after runtime behavior is analyzed

But AI has shifted software development upstream.

Code is no longer produced solely by a developer sitting at a keyboard.

It is authored, modified, and propagated by:

  • human coders
  • AI copilots
  • autonomous agents
  • CI/CD bots
  • third-party automation
  • package maintainers across global supply chains

This creates two distinct perimeters:

  • the model
  • the coder — human or AI

The first perimeter is heavily defended.

The second is largely unobserved.

This is the structural gap.

Why Hyperscalers Are Positioned to Lead the Next Era of AI Security

1. Formal Research Infrastructure — they employ teams who study:

  • machine reasoning
  • adversarial learning
  • data poisoning
  • hallucination failure modes
  • agentic behavior
  • supply-chain model risk
  • reinforcement-based vulnerabilities

These are the failure modes that will define the next decade.

2. Visibility Across AI Workflows — Hyperscalers control the surfaces where AI generates code:

  • GitHub
  • VS Code
  • Copilot
  • Gemini Code Assist
  • Replit
  • cloud IDEs
  • managed CI/CD

They see the shift from human-only authorship to hybrid authorship.

3. Ability to Instrument Models and Toolchains — only hyperscalers can embed safeguards inside:

  • the model
  • the IDE
  • the coding workflow

These structural advantages position them to lead model-layer and workflow-layer AI security.

Why Cybersecurity Platforms Will Struggle to Catch Up

Cybersecurity platforms excel at:

  • enterprise GTM
  • perimeter defense
  • runtime correlation
  • cloud misconfiguration detection
  • post-deployment detection
  • incident response
  • log aggregation
  • threat intelligence

But these systems operate after code exists.

Their architectures do not reach into:

  • authorship
  • developer behavior
  • AI-generated logic pathways
  • agent-driven modifications
  • upstream identity drift
  • toolchain influence
  • cross-agent propagation

Because their innovation often arrives via acquisition rather than foundational research, these platforms find deep AI-native primitives difficult to build.

This is a structural limitation.

Security platforms that were designed to protect infrastructure cannot easily pivot to protect AI software creation.

And this is where the new risk lives.

The Old Security Paradigm Has Reached Its Limit

The last 20 years of cybersecurity were built on four pillars:

  • firewalls
  • identity platforms
  • endpoint detection
  • cloud posture management

These defend the runtime world.

But AI development has changed the shape of risk.

Instead of asking:

“Is this code secure?”

We must now ask:

“Who authored this code, under what conditions, using what tools, and does their behavior align with their identity?”

When an AI agent generates, merges, or deploys code, runtime systems have no context for:

  • authorship
  • lineage
  • behavioral deviation
  • risk posture
  • provenance
  • contribution timing
  • tool influence

These questions cannot be answered without observing the coder.

Why This Problem Is Unclaimed Territory

The upstream attack surface is visible but unclaimed.

No hyperscaler owns it.

No cybersecurity platform owns it.

No AppSec, SIEM, XDR, CNAPP, or ASPM platform reaches this deep.

And no model-level AI safety method captures developer behavior.

The least defended point is the coder—human or AI—whose actions create all downstream artifacts.

This is the structural inversion.

The Shift the Industry Is Not Prepared For

AI has created a new reality:

  • code authored by multiple actors (human + AI)
  • AI agents generating artifacts at machine speed
  • developer identities that are hybrid (human + AI) and distributed
  • package ecosystems propagating compromise instantly
  • fragile provenance
  • silent behavior drift
  • upstream identity compromise cascading downstream

This creates a new mismatch:

drift between what an identity does and what that identity is expected to do

Traditional tools cannot see this mismatch because they observe infrastructure, not authorship.
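One way to make the mismatch concrete: compare an identity's recent activity against a stored behavioral baseline and flag deviation. A toy sketch — the feature names, baseline values, and threshold are invented for illustration, not drawn from any real detection model:

```python
# Toy behavior-vs-identity drift check: an identity's observed activity
# is scored against its historical baseline. All numbers here are
# illustrative assumptions.

BASELINE = {
    "dev-42": {"commits_per_day": 6.0, "avg_commit_size": 120.0},
}

def drift_score(identity: str, observed: dict[str, float]) -> float:
    """Mean relative deviation from the identity's baseline (0 = no drift)."""
    base = BASELINE[identity]
    devs = [abs(observed[k] - base[k]) / base[k] for k in base]
    return sum(devs) / len(devs)

def behavior_matches_identity(identity: str, observed: dict[str, float],
                              threshold: float = 0.5) -> bool:
    return drift_score(identity, observed) <= threshold

# A normal day: small deviation, behavior aligns with identity.
print(behavior_matches_identity(
    "dev-42", {"commits_per_day": 7.0, "avg_commit_size": 110.0}))  # True

# Sudden machine-speed activity: large deviation, flag for review.
print(behavior_matches_identity(
    "dev-42", {"commits_per_day": 300.0, "avg_commit_size": 15.0}))  # False
```

A runtime tool watching infrastructure never sees these authorship features at all, which is why the mismatch is invisible to it.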

Why This Should Concern Hyperscalers and Cybersecurity Platforms

Whoever solves this upstream perimeter becomes the control plane for AI-native software security.

It will not replace existing tools.

It will sit above them.

It will provide upstream context every other system depends on.

For hyperscalers, this aligns with their AI research depth and toolchain control.

For cybersecurity platforms, it challenges their architectural assumptions.

The Strategic Implication

The next major category in security will be the system that governs:

  • coder behavior
  • AI agent activity
  • authorship provenance
  • deviation from established behavioral patterns
  • upstream identity risk
  • toolchain influence
  • code lineage before commit

This is where breaches originate.

And it remains the largest unsolved surface in the enterprise.

Conclusion

AI security cannot begin at the model.

And it cannot begin at the cloud.

It must begin at the coder.

Hyperscalers will recognize this.

Cybersecurity platforms will adapt to it.

Boards will ask a new question:

“How do we secure the people and AI systems who create our software?”

There is only one direction the industry can move:

Upstream.

To the source.

To the coder.

Book a live demo and see how Archipelo helps teams align velocity, accountability, and security at the source.

Get Started Today

Archipelo helps organizations ensure developer security, increasing software security and trust for your business.

Try Archipelo Now