AI Sovereignty: The Fourth Risk Layer You Haven’t Audited
In March 2026, Germany’s Federal Office for Information Security confirmed that a pilot project using DeepSeek-V3 for automated parliamentary briefing summaries had transmitted classified metadata to DeepSeek’s server cluster in Shanghai. Not the documents themselves – the metadata: document classification levels, internal committee codes, timestamped access logs. The German government did not realize its AI tool was sending classified signals to China until after the damage was done.
European security leaders have spent years auditing the infrastructure layer for exactly this kind of exposure. That work matters – but it addresses the network layer, and the DeepSeek incident operated one layer higher. The AI layer is now in scope, and most organizations have not started auditing it.
The Problem You Thought You Solved Is Not Solved
In recent years, many security leaders have done the hard work: audited vendor headquarters, mapped jurisdictional exposure, rethought cloud dependencies, pushed SASE providers to clarify where data is stored and under which law. That work addresses a specific layer of the stack – the network and infrastructure layer.
AI is now deeply embedded one layer above that. Threat detection, alert triage, policy recommendations, anomaly detection – all of these are increasingly driven by models. And most organizations accepted those AI dependencies the same way they accepted cloud infrastructure dependencies five years ago: quickly, without fully understanding what they were agreeing to.
Organizations that have done sovereign architecture work at the network layer have often left the AI layer completely unexamined.
Jurisdiction Follows the Model, Not Just the Data
The core principle from any serious CLOUD Act analysis is this: jurisdiction follows the company, not the data. It does not matter if your data is stored in Frankfurt if your provider is headquartered in Seattle. The parent company’s legal jurisdiction reaches the data regardless of where it physically sits.
The same logic applies to AI models – and DeepSeek illustrates it precisely. DeepSeek’s privacy policy states that data is stored on servers in the People’s Republic of China. Chinese law grants Beijing broad authority to access data held by Chinese-headquartered companies. Even when DeepSeek was used inside EU member states, API traffic routed through mainland Chinese data centers. Jurisdiction followed the model, not the user’s location.
US federal agencies – NASA, the US Navy, and others – banned DeepSeek on national security grounds. Italy, Australia, South Korea, and Taiwan all implemented restrictions. The pattern is consistent: once regulators understood where inference data was going, the response was to ban or restrict access. The organizations that had already deployed DeepSeek in production environments found themselves scrambling after the fact.
This is the supply chain argument applied one layer higher. Not your SASE vendor this time. Your AI model.
The Audit Gap That Most Security Teams Are Missing
Most organizations running vendor jurisdiction audits today are checking the right things: Where is my SASE provider incorporated? Where does my cloud platform store telemetry? Who is the parent company of my managed services provider? These are good questions. The Solvinity acquisition in the Netherlands – where a sovereign cloud choice became a US-jurisdiction exposure overnight – showed why they matter.
But ask those same organizations which AI models are embedded in their security stack, where those models run, and what those providers’ data retention policies are for inference data. Most cannot answer. Security operations platforms have been integrating large language models rapidly and without scrutiny. Vendors have shipped AI-assisted alert triage, natural language query interfaces, and automated policy tuning – often built on top of third-party model APIs that inherit the jurisdictional exposure of whatever company built them.
The DeepSeek incident was visible because it involved a known Chinese AI provider. But the same structural risk exists with any model hosted outside the jurisdiction your organization operates under, if you have not verified where inference data flows, who can access it, and under what legal framework.
What Sovereign AI in Security Actually Means
Sovereign AI in security does not necessarily mean running models on-premises – in most enterprise environments that is impractical, and the performance trade-offs are significant. It means something more operational: knowing which model is making which decision, where it runs, who has legal access to the inference data, and what the operational philosophy is when a model’s output drives a security action.
There is a meaningful difference between a deployment model where AI assists human analysts – surfacing signals, accelerating decisions – and one where AI decides by default and humans review exceptions. In the first model, the risk exposure of a compromised or legally accessible model is bounded. In the second, it is not.
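To make that distinction concrete, here is a minimal sketch of an assist-by-default routing gate. Every name, field, and threshold is an illustrative assumption, not any vendor’s implementation – the point is the shape of the control: auto-execution is the narrow exception, human review is the default.

```python
from dataclasses import dataclass

@dataclass
class ModelRecommendation:
    """A security action proposed by an AI model (illustrative fields)."""
    action: str          # e.g. "quarantine_host"
    target: str          # e.g. a hostname or user ID
    confidence: float    # model-reported confidence, 0.0-1.0
    impact: str          # "low" | "medium" | "high"

def route_recommendation(rec: ModelRecommendation) -> str:
    """Decide whether a model output executes directly or goes to a human.

    In an assist-by-default posture, the auto-execute path is the narrow
    exception: only low-impact, high-confidence actions bypass review.
    Everything else lands in an analyst queue, which bounds the blast
    radius of a compromised or legally accessible model.
    """
    if rec.impact == "low" and rec.confidence >= 0.95:
        return "auto_execute"   # bounded, reversible actions only
    return "human_review"       # default path for all other outputs

# Example: a high-impact action never executes on model say-so alone.
rec = ModelRecommendation("quarantine_host", "srv-014", 0.99, "high")
assert route_recommendation(rec) == "human_review"
```

A decide-by-default deployment inverts this gate – everything auto-executes unless flagged – which is exactly why its exposure to a compromised or legally accessible model is unbounded.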
A practical test: if a foreign government presented your AI vendor with a lawful data request, what would they hand over? Inference logs? Query histories? Anomaly detection patterns extracted from your environment? If you cannot answer that question, you have an unaudited risk in your stack.
What Security Leaders Should Audit Now
- Map the AI in your security operations stack. List every AI-assisted capability in your security tooling: alert triage, threat detection, policy recommendations, automated response. For each one, identify the underlying model and who operates it. Many security platforms embed third-party models; the AI capability and the model provider are often different companies. (A sketch of what this inventory can look like follows this list.)
- Apply the same jurisdiction test you applied to your SASE vendor. Where is the model provider headquartered? Where does inference data flow? What does the provider’s privacy policy say about data retention, government access, and data residency? If those questions are not answered in the contract, they need to be.
- Check whether inference data from your environment is being used for model training. Several AI providers retain the right to use API interaction data to improve their models. In a security context, that means signals from your environment – query patterns, alert categories, even network telemetry – could be incorporated into models that other customers use. That is a data leakage risk with a different profile than traditional exfiltration.
- Ask how your managed security or SASE provider handles AI decision transparency. Can they tell you which models are involved in which decisions, where those models run, and what data those models receive? At Open Systems – Swiss-headquartered, outside US CLOUD Act reach and Chinese legal authority – AI-driven capabilities run inside a managed SASE architecture where we can account for what runs, where it runs, and which decisions are made by tooling versus by people. Operational transparency at the AI layer is not a bonus feature. It is a basic requirement.
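To make the first three items actionable, the sketch below shows one way to hold that inventory and run the jurisdiction and training-data checks against it. Everything in it is hypothetical – the schema, the allow-list, the vendor names – a starting point to adapt to your own tooling and legal analysis, not a reference implementation.

```python
from dataclasses import dataclass

# Jurisdictions your organization has accepted for inference data.
# Illustrative allow-list; yours follows from your own legal analysis.
ALLOWED_JURISDICTIONS = {"CH", "EU"}

@dataclass
class AICapability:
    """One AI-assisted capability in the security stack (illustrative schema)."""
    capability: str                 # e.g. "alert_triage"
    platform: str                   # the security product that surfaces it
    model_provider: str             # who actually operates the model
    provider_jurisdiction: str      # controlling legal jurisdiction of the provider
    inference_data_region: str      # where inference requests are processed
    trains_on_customer_data: bool   # per contract or privacy policy

def audit(inventory: list[AICapability]) -> list[str]:
    """Flag the two exposures discussed above: jurisdictional reach
    over inference data, and inference data flowing into training."""
    findings = []
    for c in inventory:
        if c.provider_jurisdiction not in ALLOWED_JURISDICTIONS:
            findings.append(
                f"{c.capability}: provider {c.model_provider} is subject to "
                f"{c.provider_jurisdiction} law, outside the allow-list")
        if c.inference_data_region not in ALLOWED_JURISDICTIONS:
            findings.append(
                f"{c.capability}: inference data processed in "
                f"{c.inference_data_region}")
        if c.trains_on_customer_data:
            findings.append(
                f"{c.capability}: provider may train on your inference data")
    return findings

# Hypothetical entry: an alert-triage feature built on a third-party model API.
inventory = [AICapability("alert_triage", "ExampleSIEM", "ExampleModelCo",
                          "US", "US", True)]
for finding in audit(inventory):
    print(finding)
```

The useful part is not the code; it is that filling in these fields for every AI-assisted capability forces the questions most vendor questionnaires never ask.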
The Stack Goes Higher Than You Thought
The work European security leaders have done on data sovereignty at the infrastructure layer is real, and it matters. But sovereignty is not a problem you solve once, at one layer of the stack, and then consider done.
AI is now part of your security infrastructure. The DeepSeek case was not a fringe incident involving a reckless organization. It was a government pilot project, handled by people who understood the stakes, that still exposed classified signals to a foreign jurisdiction because the AI layer was not under the same scrutiny as the infrastructure below it.
The fourth risk is here. The question is whether your audit covers it.
Sovereignty at the network layer is necessary but no longer sufficient. If your current architecture review does not include the AI layer, that is the gap worth addressing first. Reach out to the Open Systems team to discuss what a sovereignty audit that covers the full stack looks like for your organization.
Stefan Keller, Chief Product Officer
