Zero Trust at Scale: A Reality Check for Practitioners
Godfrey Maiwun · September 2025 · Security Architecture · 13 min read
Zero Trust has completed its journey from fringe principle to industry consensus in under a decade. That speed is impressive and problematic in equal measure. Consensus arrived faster than understanding, and a generation of Zero Trust programmes now looks right on paper without changing actual access behaviour.
The checkbox problem
Walk through almost any enterprise security programme today and you will find Zero Trust in the strategy deck. You will find it in the vendor contracts, in the compliance reports, in the board presentation. What you will find less consistently is a clear answer to the question: who was able to access what yesterday that they should not have been able to access, and how would you know?
That question is the test of whether Zero Trust is real or cosmetic. The model — formulated by John Kindervag at Forrester in 2010 and refined through NIST SP 800-207 — is straightforward in principle: assume no implicit trust, verify every access request explicitly, operate on least privilege, and inspect everything. The difficulty is not understanding the principle. It is implementing it in organisations that have a decade of implicit trust already baked into their infrastructure.
What Zero Trust actually requires
Identity is the new perimeter. This is the most-repeated phrase in Zero Trust discussions and also the most consequential. It means that access decisions are made based on verified identity, device health, and context — not based on network location. A user on the corporate network with a compromised endpoint should have the same access (none) as an attacker who has penetrated that network. Achieving this requires a mature identity provider, device management that produces trustworthy health signals, and policy enforcement points that can evaluate those signals at access time.
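The access logic described above can be sketched as a policy decision function. This is a minimal illustration, not any vendor's implementation; the signal names and risk levels are hypothetical stand-ins for what an identity provider, device-management platform, and risk engine would supply.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_verified: bool      # MFA-backed identity assertion from the IdP
    device_compliant: bool   # health signal from device management
    context_risk: str        # "low", "medium", or "high" from a risk engine

def decide(req: AccessRequest) -> str:
    """Evaluate every signal at access time; network location never appears."""
    if not req.user_verified:
        return "deny"
    if not req.device_compliant:
        # A compromised endpoint on the corporate network gets the same
        # answer as an attacker who has penetrated that network: none.
        return "deny"
    if req.context_risk == "high":
        return "deny"
    if req.context_risk == "medium":
        return "step-up"     # force re-authentication before granting
    return "allow"

# A verified user on a non-compliant device is denied regardless of location.
print(decide(AccessRequest(user_verified=True, device_compliant=False,
                           context_risk="low")))
```

The point of the sketch is that no single signal is sufficient: identity, device health, and context are all evaluated on each request, and the default answer is deny.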
Three pillars recur in every serious implementation: identity, microsegmentation, and continuous verification.
Most organisations are partway there. They have an identity provider — Microsoft Entra ID (formerly Azure AD), Okta, Google Workspace. They have MFA deployed, at least for critical systems. But the policy enforcement layer — the part that says "this user, on this device, in this context, gets access to exactly this resource and nothing else" — is incomplete. Lateral movement remains possible because implicit trust on the internal network has not been fully eliminated.
Microsegmentation is the hardest part. Network microsegmentation — dividing the environment into small segments with explicit access controls between them — is architecturally sound and operationally brutal. Every legitimate communication path has to be mapped, every service-to-service dependency has to be documented, and every access policy has to be maintained as the environment changes.
The gap between Zero Trust on a slide and Zero Trust in production is mostly a microsegmentation gap. And microsegmentation is not a technical problem — it is an organisational one.
The technical tools for microsegmentation exist: cloud security groups, service mesh mTLS, host-based firewalls, software-defined perimeters. The challenge is the discovery and mapping phase. Most organisations do not have accurate, up-to-date documentation of what communicates with what. Running a microsegmentation project forces that documentation to exist — and the process of creating it usually reveals connections that no one knew about, permissions that are far too broad, and services that have not been decommissioned but are still accepting traffic.
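The discovery phase amounts to diffing what is actually observed on the wire against what the service catalogue claims. A minimal sketch, with hypothetical service names and flow records standing in for real network flow data:

```python
from collections import defaultdict

# Hypothetical flow records observed on the wire:
# (source_service, destination_service, destination_port)
observed_flows = [
    ("web", "api", 443),
    ("api", "db", 5432),
    ("batch-legacy", "db", 5432),   # nobody documented this path
]

# What the service catalogue says should exist.
documented = {("web", "api"), ("api", "db")}

def build_dependency_map(flows):
    """Aggregate observed flows into a per-service dependency map."""
    deps = defaultdict(set)
    for src, dst, port in flows:
        deps[src].add((dst, port))
    return deps

def undocumented(flows, known):
    """Connections seen in traffic but absent from the catalogue."""
    return sorted({(src, dst) for src, dst, _ in flows} - known)

print(undocumented(observed_flows, documented))  # [('batch-legacy', 'db')]
```

In practice the flow records come from VPC flow logs, service mesh telemetry, or host agents, and the "documented" set is whatever configuration management claims; the surprises the article describes are exactly the non-empty output of the diff.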
Continuous verification is not the same as authentication. Zero Trust is often described as "never trust, always verify" — but the "always verify" part is frequently implemented as "verify at authentication time and then maintain a session." That is not continuous verification. Genuine continuous verification evaluates access context throughout a session: device health can change, location can change, behaviour can become anomalous. These signals should be evaluated continuously, not just at login.
Implementing this requires integration between identity, device management, and threat detection — a level of platform integration that is commercially available (from Microsoft, Google, Zscaler, and others) but requires meaningful configuration and tuning to work in practice.
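The difference between session-based and continuous verification can be shown as a re-evaluation loop. This is a sketch under assumed signal names (device compliance, impossible travel, anomaly score), not any platform's actual API:

```python
def evaluate_session(signals: dict) -> str:
    """Re-run the access decision mid-session with fresh signals."""
    if not signals["device_compliant"]:
        return "revoke"
    if signals["geo_changed"] and signals["impossible_travel"]:
        return "revoke"
    if signals["anomaly_score"] > 0.8:
        return "step-up"
    return "continue"

# Each tick, the enforcement point pulls fresh signals rather than
# trusting the state captured at login time.
session_ticks = [
    {"device_compliant": True, "geo_changed": False,
     "impossible_travel": False, "anomaly_score": 0.1},
    {"device_compliant": False, "geo_changed": False,   # health degraded
     "impossible_travel": False, "anomaly_score": 0.1},
]

for tick in session_ticks:
    action = evaluate_session(tick)
    if action == "revoke":
        break   # the session ends when its context stops justifying it
```

The "verify at login, then maintain a session" pattern is this loop run exactly once; genuine continuous verification is the loop run for the life of the session.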
Where programmes quietly fail
The legacy estate problem. Zero Trust architectures assume identity-aware clients and policy enforcement points. Legacy systems — mainframes, OT equipment, applications that authenticate with hardcoded service accounts, systems that cannot run an agent — do not participate in identity-based access controls. Every organisation has some of this estate, and it creates a gap that the Zero Trust architecture quietly routes around. Acknowledging the gap is the first step; compensating controls (network isolation, enhanced monitoring, access gateways) are usually the realistic answer.
The SaaS blind spot. If your workforce uses 50+ SaaS applications — which is typical for a knowledge-work organisation — you have 50+ access control surfaces that are not your identity provider. Cloud access security brokers (CASBs) help, but their visibility is limited to traffic that flows through them. Unmanaged devices, browser-based access, and personal accounts used for work are all vectors that a perimeter-focused Zero Trust implementation misses entirely.
The change management problem. Zero Trust, done properly, restricts access. It makes the implicit explicit, and the explicit is often narrower than people assumed they had. Service accounts that were connecting to everything are restricted to what they need. Developers who SSH'd into production systems find those paths removed. This produces resistance — from engineering teams who feel slowed down, from operations teams who find monitoring harder, from users who notice friction that was not there before.
Managing this change is as important as the technical implementation. Security teams that implement Zero Trust without a communication and change management plan will find their carefully designed policies bypassed, exceptions accumulated into meaninglessness, and the programme abandoned in all but name within 18 months.
A framework for genuine adoption
Start with identity maturity. If you cannot answer "who accessed what, from which device, when, and was that access appropriate?" for your highest-value systems, identity infrastructure is the foundation to fix before anything else.
Map before you segment. Microsegmentation without an accurate dependency map creates outages. Invest in discovery — network flow analysis, application dependency mapping, service mesh observability — before enforcing policy.
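Once a dependency map exists, candidate policy can be generated from it rather than written from intuition: every observed flow becomes an explicit allow rule, and everything else is denied by default. A minimal sketch with hypothetical service names:

```python
# Observed dependency map from the discovery phase:
# (source_service, destination_service, destination_port)
flows = {("web", "api", 443), ("api", "db", 5432)}

def candidate_rules(observed):
    """Turn observed flows into an allow-list with a default deny."""
    rules = [{"from": src, "to": dst, "port": port, "action": "allow"}
             for src, dst, port in sorted(observed)]
    # Anything not explicitly observed and approved is denied.
    rules.append({"from": "*", "to": "*", "port": "*", "action": "deny"})
    return rules

for rule in candidate_rules(flows):
    print(rule)
```

The generated rules are candidates, not policy: each one still needs a human decision about whether the observed path is legitimate, which is where the undocumented connections surface.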
Pick a pilot scope. Zero Trust across the entire estate, all at once, is a programme that fails. Zero Trust for a specific application, a specific user population, or a specific network segment is a programme that can succeed, demonstrate value, and expand. NIST SP 800-207 recommends exactly this approach: pick an enterprise resource and build the protect surface.
Measure lateral movement reduction, not project milestones. The outcome of Zero Trust is that attackers who enter one part of your environment cannot reach the high-value systems in another part. Measure that. Run red team exercises specifically designed to test lateral movement. Track the blast radius of a compromised credential over time.
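Blast radius is measurable as graph reachability: model access paths as edges and compute what a compromised starting point can transitively reach. A sketch over a hypothetical access graph:

```python
from collections import deque

# Hypothetical access graph: node -> resources reachable with the
# credentials or trust relationships present at that node.
access = {
    "laptop-42": {"ci-runner", "wiki"},
    "ci-runner": {"artifact-store", "prod-db"},
    "wiki": set(),
    "artifact-store": set(),
    "prod-db": set(),
}

def blast_radius(start, graph):
    """Breadth-first search: everything transitively reachable from start."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

# Before segmentation: compromising one laptop reaches prod-db transitively.
print(sorted(blast_radius("laptop-42", access)))
```

Running this before and after a segmentation change, and tracking the size of the result over time, is a concrete version of the metric the article recommends: does the same compromised credential reach fewer high-value systems this quarter than last?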
The organisations that get Zero Trust right are not the ones who bought the most Zero Trust products. They are the ones who understood what problem they were solving, were honest about where their legacy estate created gaps, and built their programme around reducing actual attacker capability rather than improving compliance posture metrics.
Filed under: Security Architecture