Landing Zone Design and Setup

A landing zone is a pre-configured cloud environment that provides the foundational infrastructure, security controls, and governance framework upon which all organisational workloads deploy. The landing zone establishes account structure, identity federation, network topology, security baselines, and policy guardrails before any application or service deployment occurs. Every resource created within the cloud environment inherits the landing zone’s controls, making initial design decisions persistent and difficult to retrofit.

The landing zone concept addresses a fundamental challenge in cloud adoption: the ease with which cloud resources can be provisioned creates risk when that provisioning occurs without consistent security, networking, and governance foundations. A single engineer can create a virtual network, deploy a database, and expose it to the internet within minutes. The landing zone intervenes by establishing mandatory controls that apply regardless of who provisions resources or how quickly they work.

Landing Zone
A pre-configured cloud environment establishing account structure, identity, networking, security, and governance foundations. All workloads deploy within the landing zone’s boundaries and inherit its controls.
Account (AWS) / Subscription (Azure) / Project (GCP)
The primary billing and security boundary in each cloud platform. Landing zones organise these into hierarchies that separate workloads by environment, sensitivity, or organisational unit.
Management Account
The root account that owns the organisational hierarchy and contains centralised management functions. This account holds no workloads and has highly restricted access.
Guardrail
A policy control that prevents or detects actions that violate organisational standards. Preventive guardrails block non-compliant actions; detective guardrails alert on policy violations after they occur.
Service Control Policy (SCP)
AWS mechanism for defining permission boundaries across accounts. SCPs restrict what actions any principal can perform, regardless of their IAM permissions.
Hub-Spoke Topology
Network architecture where a central hub network provides shared services and connectivity, with spoke networks containing workloads connected to the hub.

Landing Zone Components

A complete landing zone comprises six integrated components that together create the foundation for secure, governed cloud operations. These components interact through inheritance hierarchies where policies and configurations flow from central management points to individual workloads.

+-------------------------------------------------------------------------+
| LANDING ZONE ARCHITECTURE |
+-------------------------------------------------------------------------+
| |
| +---------------------------+ +---------------------------+ |
| | IDENTITY FOUNDATION | | GOVERNANCE FRAMEWORK | |
| | | | | |
| | - IdP federation | | - Policy guardrails | |
| | - Role definitions | | - Tagging standards | |
| | - Permission boundaries | | - Budget controls | |
| | - Break-glass access | | - Compliance rules | |
| +-------------+-------------+ +-------------+-------------+ |
| | | |
| +----------------+----------------+ |
| | |
| +------------v-----------+ |
| | ACCOUNT HIERARCHY | |
| | | |
| | - Management account | |
| | - Security account | |
| | - Log archive account | |
| | - Shared services | |
| | - Workload accounts | |
| +------------+-----------+ |
| | |
| +-----------------------+-----------------------+ |
| | | | |
| +------v------+ +-------v-------+ +-------v-------+ |
| | NETWORK | | SECURITY | | OBSERVABILITY | |
| | FOUNDATION | | BASELINE | | FOUNDATION | |
| | | | | | | |
| | - Hub VNet | | - Encryption | | - Log routing | |
| | - DNS zones | | - Key mgmt | | - Audit trail | |
| | - Firewall | | - Hardening | | - Alerting | |
| | - Peering | | - Detection | | - Retention | |
| +-------------+ +---------------+ +---------------+ |
| |
+-------------------------------------------------------------------------+

The identity foundation federates authentication with the organisation’s identity provider, eliminating cloud-native user accounts in favour of centralised identity management. When a staff member authenticates to access cloud resources, they authenticate against the organisational directory, and the cloud platform receives assertions about their identity and group memberships. This federation means that when someone leaves the organisation and their directory account is disabled, their cloud access terminates immediately without requiring separate cloud-side deprovisioning.

The governance framework defines the policies and controls that restrict what actions can occur within the cloud environment. These controls operate at the account hierarchy level, meaning they apply regardless of individual user permissions. A governance policy preventing public storage buckets cannot be overridden by an administrator granting themselves full storage permissions; the policy operates at a higher enforcement level.

The account hierarchy organises cloud resources into separate billing and security boundaries. Each account operates in isolation from others by default, with explicit configuration required to enable cross-account access. This isolation contains blast radius: a compromised credential in one account cannot access resources in other accounts unless cross-account trust has been specifically configured.
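On AWS, that "explicit configuration" typically takes the form of an IAM role trust policy in the target account naming the trusted account. A minimal sketch, with placeholder account IDs; without a statement like this, no cross-account access exists at all:

```python
# Sketch of an AWS cross-account role trust policy. The account ID is a
# placeholder; the MFA condition is an optional hardening choice.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Only principals in account 111111111111 may assume this role.
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "sts:AssumeRole",
            # Require MFA on the assuming session.
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}
```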

The network foundation establishes the virtual network topology that workloads connect to. Rather than each workload team creating their own networks with their own addressing schemes and internet connectivity, the landing zone provides a consistent network architecture with centralised egress, DNS resolution, and connectivity to on-premises networks.

The security baseline configures encryption, key management, and detection capabilities that apply across all accounts. Encryption keys reside in a dedicated security account, and all workload accounts use these centralised keys rather than creating their own. Security detection services aggregate findings to a central security account for unified monitoring.

The observability foundation routes logs from all accounts to a centralised, immutable log archive. This archive uses write-once storage that prevents log deletion or modification, ensuring audit trail integrity even if an individual account is compromised.

Account Hierarchy Design

Account hierarchy design balances isolation requirements against management complexity. More accounts provide stronger isolation and clearer cost attribution but increase the operational overhead of managing cross-account access, networking, and deployment pipelines. Fewer accounts reduce complexity but increase blast radius and complicate cost allocation.

The simplest viable landing zone for a small organisation uses four accounts: management, security, log archive, and a single workload account. This structure provides essential separation between management functions and workloads while minimising operational complexity.

+-------------------------------------------------------------------+
| MINIMAL ACCOUNT HIERARCHY |
| (4 accounts, single workload) |
+-------------------------------------------------------------------+
| |
| +----------------------+ |
| | MANAGEMENT | |
| | (root account) | |
| | | |
| | - Org policies | |
| | - Billing | |
| | - SSO configuration | |
| +----------+-----------+ |
| | |
| +---------------------+---------------------+ |
| | | | |
| +------v------+ +-------v------+ +-------v------+ |
| | SECURITY | | LOG ARCHIVE | | WORKLOAD | |
| | | | | | | |
| | - GuardDuty | | - CloudTrail | | - All apps | |
| | - KMS keys | | - VPC flows | | - All envs | |
| | - Config | | - App logs | | - Networking | |
| | - Hub VNet | | - Immutable | | | |
| +-------------+ +--------------+ +--------------+ |
| |
+-------------------------------------------------------------------+

This minimal structure suits organisations with a single IT person or very limited cloud workloads. All applications deploy to the single workload account, with environments (development, staging, production) separated by resource tagging or naming conventions rather than account boundaries. The security account hosts the network hub, centralised encryption keys, and security monitoring services. The log archive account receives logs from all other accounts and permits no modifications after write.

As organisations grow or deploy more sensitive workloads, account separation by environment becomes valuable. Separating production from non-production environments at the account level means a misconfigured deployment pipeline that accidentally deletes all resources in development cannot affect production. Environment separation also enables different permission levels: developers might have broad access in development accounts but read-only access in production.

+-------------------------------------------------------------------------+
| ENVIRONMENT-SEPARATED HIERARCHY |
| (6 accounts, environment isolation) |
+-------------------------------------------------------------------------+
| |
| +----------------------+ |
| | MANAGEMENT | |
| +----------+-----------+ |
| | |
| +------------+------------+------------+------------+ |
| | | | | | |
| +------v------+ +---v----+ +-----v-----+ +----v----+ +-----v-----+ |
| | SECURITY | | LOG | | SHARED | | NON- | | PRODUCTION| |
| | | | ARCHIVE| | SERVICES | | PROD | | | |
| | - Security | | | | | | | | | |
| | tools | | - Logs | | - Hub VNet| | - Dev | | - Live | |
| | - Keys | | - Audit| | - DNS | | - Test | | apps | |
| | - Detection | | | | - Shared | | - Stage | | | |
| +-------------+ +--------+ | DBs | +---------+ +-----------+ |
| +-----------+ |
| |
+-------------------------------------------------------------------------+

The addition of a shared services account provides a location for resources that serve both environments: centralised DNS zones, shared databases for reference data, container registries, and artifact repositories. The network hub moves from the security account to shared services in this model, as it handles connectivity rather than security monitoring.

Larger organisations or those with compliance requirements requiring strict workload separation expand to per-application or per-team account structures. Each team or application receives dedicated accounts for each environment, enabling independent deployment cycles and clear cost attribution.

+-------------------------------------------------------------------------+
| TEAM-BASED ACCOUNT HIERARCHY |
| (12+ accounts, team isolation) |
+-------------------------------------------------------------------------+
| |
| +----------------------+ |
| | MANAGEMENT | |
| +----------+-----------+ |
| | |
| +----------------+----------+----------+----------------+ |
| | | | | |
| +----v----+ +-----v-----+ +------v------+ +-----v-----+ |
| |SECURITY | |LOG ARCHIVE| | SHARED | | NETWORK | |
| +---------+ +-----------+ | SERVICES | | HUB | |
| +-------------+ +-----------+ |
| |
| +------------------------------------------------------------------+ |
| | WORKLOAD ACCOUNTS | |
| | | |
| | +-----------------+ +-----------------+ +-----------------+ | |
| | | TEAM A | | TEAM B | | TEAM C | | |
| | | | | | | | | |
| | | +----+ +----+ | | +----+ +----+ | | +----+ +----+ | | |
| | | |Dev | |Prod| | | |Dev | |Prod| | | |Dev | |Prod| | | |
| | | +----+ +----+ | | +----+ +----+ | | +----+ +----+ | | |
| | +-----------------+ +-----------------+ +-----------------+ | |
| +------------------------------------------------------------------+ |
| |
+-------------------------------------------------------------------------+

The network hub separates into its own account in larger deployments because network configuration changes (firewall rules, routing tables, VPN connections) require different access controls than security monitoring activities. This separation enables the network team to manage connectivity without having access to security detection findings, and vice versa.

Account creation follows a vending machine model in mature landing zones: teams request new accounts through a self-service portal or ticket, and automation provisions the account with standardised configuration, network connectivity, and baseline permissions. Manual account creation becomes a bottleneck at scale and introduces configuration drift.
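The vending step can be sketched as a function that turns a validated request into the standardised baseline that automation would then apply. Field names, guardrail identifiers, and the CIDR allocation scheme below are illustrative, not a platform API:

```python
# Hypothetical account-vending sketch: request in, standardised baseline out.
def vend_account(team: str, environment: str, next_spoke_index: int) -> dict:
    if environment not in {"dev", "staging", "prod"}:
        raise ValueError(f"unknown environment: {environment}")
    return {
        "name": f"{team}-{environment}",
        "organizational_unit": "Workloads/"
        + ("Prod" if environment == "prod" else "NonProd"),
        # Each new account receives the next free /16 spoke from the IP plan.
        "spoke_cidr": f"10.{next_spoke_index}.0.0/16",
        "baseline": {
            "guardrails": ["deny-public-buckets", "deny-unapproved-regions"],
            "log_destination": "log-archive",  # central, immutable
            "sso_groups": [f"{team}-developers", f"{team}-readonly"],
        },
    }

account = vend_account("team-a", "dev", 4)
```

Because every account flows through the same function, configuration drift between accounts is eliminated by construction.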

Identity Foundation

The identity foundation integrates cloud access with the organisation’s identity provider, ensuring that cloud authentication flows through the same systems that govern email, file storage, and other organisational resources. This integration eliminates the need to manage separate cloud-native user accounts and ensures that identity lifecycle events (joining, role changes, departures) automatically affect cloud access.

Federation operates through SAML or OIDC protocols. When a user attempts to access the cloud console or CLI, the cloud platform redirects them to the organisational identity provider. After successful authentication (including any required multi-factor authentication), the identity provider returns an assertion containing the user’s identity and group memberships. The cloud platform maps these group memberships to permission sets that define what the user can do within specific accounts.

+------------------------------------------------------------------------+
| IDENTITY FEDERATION FLOW |
+------------------------------------------------------------------------+
| |
| User Cloud IdP Directory |
| | Console (SAML/OIDC) (AD/LDAP) |
| | | | | |
| |---(1) Access------>| | | |
| | | | | |
| |<--(2) Redirect-----| | | |
| | | | | |
| |---(3) Authenticate----------------->| | |
| | | | | |
| | | |---(4) Verify----->| |
| | | | | |
| | | |<--(5) Groups------| |
| | | | | |
| |<--(6) MFA Challenge-----------------| | |
| | | | | |
| |---(7) MFA Response----------------->| | |
| | | | | |
| |<--(8) SAML Assertion----------------| | |
| | | | | |
| |---(9) Assertion--->| | | |
| | | | | |
| | |---(10) Map groups to permission sets |
| | | | | |
| |<--(11) Console with assumed role----| | |
| | | | | |
+------------------------------------------------------------------------+

Permission sets define bundles of permissions that users receive when accessing specific accounts. A permission set named “DeveloperAccess” might grant permissions to manage compute instances, read from storage, and deploy applications, while a permission set named “ReadOnly” permits viewing resources without modification. Users receive permission sets for specific accounts based on their group memberships: the “team-a-developers” directory group might map to the DeveloperAccess permission set in Team A’s development account and ReadOnly in Team A’s production account.

A worked example illustrates the mapping. Consider an organisation with three directory groups relevant to cloud access:

  • cloud-admins (5 members): Platform team responsible for landing zone management
  • team-a-developers (8 members): Developers working on Team A applications
  • team-a-readonly (15 members): Staff who need visibility into Team A systems

The permission set mappings configure:

Directory Group      Account                 Permission Set
cloud-admins         Management              AdministratorAccess
cloud-admins         Security                AdministratorAccess
cloud-admins         All workload accounts   AdministratorAccess
team-a-developers    team-a-dev              DeveloperAccess
team-a-developers    team-a-prod             ReadOnly
team-a-readonly      team-a-dev              ReadOnly
team-a-readonly      team-a-prod             ReadOnly

This mapping ensures that Team A developers can deploy and modify resources in development but can only view production. The platform team retains administrative access everywhere for break-glass situations, though their day-to-day work should use lower-privileged access appropriate to specific tasks.
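The worked example can be expressed as data plus a resolution function. This is a minimal sketch of how an SSO layer resolves group memberships to permission sets for one account; the "ALL_WORKLOADS" wildcard is an illustrative convention, not platform syntax:

```python
# The mapping table as data: directory group -> [(account, permission set)].
MAPPINGS = {
    "cloud-admins": [
        ("Management", "AdministratorAccess"),
        ("Security", "AdministratorAccess"),
        ("ALL_WORKLOADS", "AdministratorAccess"),
    ],
    "team-a-developers": [
        ("team-a-dev", "DeveloperAccess"),
        ("team-a-prod", "ReadOnly"),
    ],
    "team-a-readonly": [
        ("team-a-dev", "ReadOnly"),
        ("team-a-prod", "ReadOnly"),
    ],
}

WORKLOAD_ACCOUNTS = {"team-a-dev", "team-a-prod"}

def permission_sets(groups: set, account: str) -> set:
    """Permission sets a user's directory groups grant in one account."""
    granted = set()
    for group in groups:
        for acct, pset in MAPPINGS.get(group, []):
            if acct == account or (
                acct == "ALL_WORKLOADS" and account in WORKLOAD_ACCOUNTS
            ):
                granted.add(pset)
    return granted
```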

Break-glass access provides emergency administrative capability when normal federation is unavailable. If the identity provider experiences an outage, no one can authenticate to the cloud environment through normal channels. Break-glass accounts are cloud-native accounts (not federated) with strong credentials stored in a secure vault. These accounts should never be used during normal operations; their use triggers alerts and requires documented justification. Monthly testing of break-glass access ensures the credentials work when genuinely needed.

Network Foundation

The network foundation establishes connectivity patterns that workloads inherit rather than each workload creating independent network configurations. Centralised network architecture ensures consistent security controls, predictable IP addressing, and managed connectivity to on-premises networks and the internet.

The hub-spoke topology provides the standard pattern for landing zone networks. A central hub virtual network hosts shared network services: firewalls, DNS resolvers, VPN termination, and connections to on-premises networks. Spoke networks containing workloads connect to the hub through peering relationships that enable traffic flow while maintaining isolation between spokes.

+-------------------------------------------------------------------------+
| HUB-SPOKE NETWORK TOPOLOGY |
+-------------------------------------------------------------------------+
| |
| +--------------------+ |
| | ON-PREMISES | |
| | NETWORK | |
| +----------+---------+ |
| | |
| VPN / ExpressRoute |
| | |
| +-----------------------------------------------------------------+ |
| | HUB VNET | |
| | 10.0.0.0/16 | |
| | | |
| | +-------------+ +-------------+ +-------------+ | |
| | | Firewall | | DNS | | Gateway | | |
| | | 10.0.1.0/24 | | 10.0.2.0/24 | | 10.0.0.0/24 | | |
| | +------+------+ +------+------+ +------+------+ | |
| | | | | | |
| +---------+----------------+----------------+---------------------+ |
| | | | |
| | +-----------+-----------+ | |
| | | | | | |
| +------+----+--+ +-----+-----+ +--+----+------+ |
| | | | | | | |
| | SPOKE: PROD | |SPOKE: DEV | |SPOKE: SHARED | |
| | 10.1.0.0/16 | |10.2.0.0/16| | 10.3.0.0/16 | |
| | | | | | | |
| | +----------+ | |+--------+ | | +----------+ | |
| | |Web Tier | | ||Apps | | | |Container | | |
| | |10.1.1/24 | | ||10.2.1/ | | | |Registry | | |
| | +----------+ | || 24 | | | +----------+ | |
| | +----------+ | |+--------+ | | +----------+ | |
| | |App Tier | | | | | |Artifact | | |
| | |10.1.2/24 | | | | | |Repo | | |
| | +----------+ | | | | +----------+ | |
| | +----------+ | | | | | |
| | |Data Tier | | | | | | |
| | |10.1.3/24 | | | | | | |
| | +----------+ | | | | | |
| +--------------+ +-----------+ +--------------+ |
| |
+-------------------------------------------------------------------------+

Traffic between spokes flows through the hub’s firewall rather than directly between spokes. This forced routing enables centralised inspection and logging of all inter-workload communication. The firewall maintains rules permitting only expected traffic patterns; a compromised development workload cannot reach production databases because no firewall rule permits that traffic flow.
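The hub firewall's default-deny evaluation can be sketched as follows. Rule entries and spoke names are illustrative; a real firewall matches on CIDR ranges, ports, and protocols rather than labels:

```python
# Sketch of default-deny inter-spoke filtering at the hub.
ALLOW_RULES = {
    # (source spoke, destination spoke, destination port)
    ("prod-web", "prod-app", 8443),
    ("prod-app", "prod-data", 5432),
    ("dev", "shared", 443),  # dev may pull from the container registry
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: traffic passes only if an explicit rule matches."""
    return (src, dst, port) in ALLOW_RULES
```

A compromised development workload probing the production data tier simply finds no matching rule, so the attempt is dropped and logged at the hub.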

IP addressing follows a planned schema that accommodates growth without renumbering. A common approach reserves a /8 or /12 private range for cloud use and subdivides systematically:

  • 10.0.0.0/16: Hub network (shared services, security appliances)
  • 10.1.0.0/16: Production workload spoke
  • 10.2.0.0/16: Non-production workload spoke
  • 10.3.0.0/16: Shared services spoke
  • 10.4.0.0/16 onwards: Additional workload spokes as needed

Each /16 provides 65,534 usable addresses, sufficient for substantial workloads. Within each spoke, /24 subnets (254 usable addresses each) separate tiers or functions. This scheme supports 256 /24 subnets per spoke and 256 /16 networks overall (one hub plus 255 spokes) before exhausting the 10.0.0.0/8 range.
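The arithmetic behind the plan can be checked with Python's standard `ipaddress` module:

```python
import ipaddress

# 10.0.0.0/8 subdivides into 256 /16 networks (hub plus spokes)...
cloud_range = ipaddress.ip_network("10.0.0.0/8")
sixteens = list(cloud_range.subnets(new_prefix=16))

# ...and each /16 spoke subdivides into 256 /24 subnets.
prod_spoke = ipaddress.ip_network("10.1.0.0/16")
subnets = list(prod_spoke.subnets(new_prefix=24))

# A tier subnet sits inside its spoke and offers 254 usable hosts
# (the network and broadcast addresses are reserved).
web_tier = ipaddress.ip_network("10.1.1.0/24")
```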

DNS resolution integrates cloud-hosted zones with on-premises DNS. The hub network hosts DNS resolvers that handle queries for both cloud-hosted zones (resolving internal hostnames to private IP addresses) and forward queries for other domains to on-premises DNS or public resolvers. Workloads configure these hub-based resolvers rather than using cloud-default DNS, ensuring consistent resolution regardless of where a workload deploys.

Security Baseline

The security baseline configures controls that apply across all accounts regardless of workload type. These controls address encryption, detection, and hardening requirements that every deployment should inherit without requiring individual workload teams to implement them.

Encryption configuration centralises key management in the security account. A key hierarchy provides separation between key management and key use:

+-------------------------------------------------------------------------+
| KEY MANAGEMENT HIERARCHY |
+-------------------------------------------------------------------------+
| |
| SECURITY ACCOUNT |
| +-----------------------------------------------------------------+ |
| | +---------------------------+ | |
| | | ROOT KEY | AWS-managed, automatic rotation | |
| | | (AWS/Azure-managed) | Never exported, never used | |
| | +-------------+-------------+ directly | |
| | | | |
| | +--------+--------+ | |
| | | | | |
| | +----v----+ +----v----+ | |
| | |Workload | |Log/Audit| Separate keys for different | |
| | |Data Key | | Key | purposes | |
| | +---------+ +---------+ | |
| +-----------------------------------------------------------------+ |
| |
| WORKLOAD ACCOUNTS |
| +-----------------------------------------------------------------+ |
| | | |
| | Storage, databases, and other resources use keys from | |
| | security account via cross-account grants | |
| | | |
| | +------------+ +------------+ +------------+ | |
| | | Storage | | Database | | Secrets | | |
| | | encrypted | | encrypted | | encrypted | | |
| | | with | | with | | with | | |
| | | workload | | workload | | workload | | |
| | | data key | | data key | | data key | | |
| | +------------+ +------------+ +------------+ | |
| +-----------------------------------------------------------------+ |
| |
+-------------------------------------------------------------------------+

Key policies in the security account grant workload accounts permission to use (but not manage or delete) the encryption keys. This separation means that even a fully compromised workload account cannot delete or modify the encryption keys protecting its data; those operations require access to the security account.

Default encryption policies enforce encryption without requiring workload teams to configure it. Storage services reject unencrypted uploads. Database services require encryption at rest as a creation-time setting that cannot be disabled. Network traffic between services uses TLS 1.2 or higher.
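On AWS, one concrete shape such a default takes is a bucket policy statement denying any object upload that does not request KMS encryption. The bucket name is a placeholder; paired with SSE-KMS set as the bucket default, this acts as belt and braces:

```python
# S3 bucket policy sketch: reject uploads that omit SSE-KMS encryption.
deny_unencrypted_upload = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-workload-bucket/*",
            "Condition": {
                # Deny when the request's encryption header is not aws:kms.
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            },
        }
    ],
}
```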

Security detection services aggregate across all accounts to the security account. Services that detect threats, misconfigurations, or policy violations feed findings to a central location where the security team monitors and responds. The specific services vary by cloud platform, but the pattern remains consistent: enable in all accounts, aggregate to one account, alert on findings.

Detection service coverage

Detection services should cover: API activity anomalies, network traffic patterns, malware in storage, configuration drift from baselines, public exposure of private resources, and credential misuse patterns. Platform-native services address most of these; gaps require third-party tooling or custom detection rules.

Hardening baselines define secure default configurations. Compute instances launch with current patches, unnecessary services disabled, and host-based firewalls configured. Container images derive from organisation-approved base images scanned for vulnerabilities. Storage services default to private access with explicit grants required for any cross-account or public access.

Governance and Policy

Governance controls restrict what actions can occur within the cloud environment. Unlike permissions, which grant users the ability to perform actions, governance controls define boundaries that no permission grant can override. An administrator with full permissions in an account still cannot perform actions that governance policies prohibit.

Service control policies (AWS term; similar concepts exist in other platforms) define what API actions are permissible within an organisational unit or account. A policy denying the ability to create unencrypted storage buckets takes effect regardless of the user’s permissions. If their IAM policy grants full storage permissions but the service control policy denies unencrypted bucket creation, the action fails.

Effective service control policies for landing zones address:

Policy Category       Example Controls
Data protection       Deny unencrypted storage creation, deny public bucket policies, deny unencrypted database creation
Region restriction    Deny resource creation outside approved regions (data sovereignty)
Network protection    Deny internet gateway creation in workload accounts, deny VPC creation without approval
Service restriction   Deny use of non-approved services (reduce attack surface)
Identity protection   Deny creation of IAM users (require federation), deny long-term access keys
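A sketch of one such guardrail: an AWS service control policy denying actions outside approved EU regions. The `NotAction` list exempts global services and is intentionally abbreviated here; a production policy needs the full set:

```python
# SCP sketch: deny all regional actions outside eu-west-1 and eu-north-1.
region_restriction_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            # Global services (IAM, Organizations, STS, Support) are exempt
            # because they are not tied to a region.
            "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "eu-north-1"]
                }
            },
        }
    ],
}
```

Attached at the organisational unit level, this denies out-of-region API calls for every account beneath it, regardless of the caller's IAM permissions.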

Tagging policies enforce consistent metadata on resources. Tags enable cost allocation, ownership identification, and automated operations. A tagging policy requiring “CostCentre”, “Environment”, and “Owner” tags on all resources ensures that cost reports can break down spending by team, untagged resources can be identified for remediation, and resource owners can be contacted when issues arise.

A worked example demonstrates tagging policy structure. The organisation requires four tags:

Tag Key       Allowed Values                      Purpose
Environment   production, staging, development    Cost allocation, access control
CostCentre    CC001-CC999 (finance-defined)       Budget allocation
Owner         Valid email address                 Contact for issues
Application   Free text                           Service identification

Resources lacking required tags cannot be created in strict enforcement mode, or are flagged for remediation in audit mode. The choice between strict and audit modes depends on organisational readiness; strict mode during migration or rapid development creates friction, while audit mode during steady-state operation identifies compliance gaps without blocking work.
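The validation such a policy performs is the same in either mode; only the consequence differs (block versus flag). A minimal sketch, with approximate patterns for the finance-defined cost centre codes and email addresses:

```python
import re

# Required tags: a set of allowed values, a compiled pattern, or None
# (free text, merely required to be present).
REQUIRED_TAGS = {
    "Environment": {"production", "staging", "development"},
    "CostCentre": re.compile(r"^CC\d{3}$"),      # approximates CC001-CC999
    "Owner": re.compile(r"^[^@\s]+@[^@\s]+$"),   # rough email shape
    "Application": None,
}

def tag_errors(tags: dict) -> list:
    """Return violations; strict mode blocks on any, audit mode reports."""
    errors = []
    for key, rule in REQUIRED_TAGS.items():
        value = tags.get(key)
        if value is None:
            errors.append(f"missing tag: {key}")
        elif isinstance(rule, set) and value not in rule:
            errors.append(f"invalid value for {key}: {value}")
        elif hasattr(rule, "match") and not rule.match(value):
            errors.append(f"invalid value for {key}: {value}")
    return errors
```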

Budget controls set spending thresholds at the account and organisational unit level. A budget for the development workload account might alert at 80% of the £5,000 monthly allocation and trigger automated shutdowns of non-essential resources at 100%. Production accounts might have higher thresholds and alert-only responses to avoid availability impact.
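The threshold logic described above is straightforward to express; the percentages and action names here are illustrative, and a production account would typically configure alert-only responses:

```python
# Sketch of budget threshold evaluation: alert at 80% of the monthly
# allocation, stop non-essential resources at 100%.
def budget_actions(spend: float, allocation: float,
                   alert_pct: float = 80.0, stop_pct: float = 100.0) -> list:
    used = 100.0 * spend / allocation
    actions = []
    if used >= alert_pct:
        actions.append("alert-owners")
    if used >= stop_pct:
        actions.append("stop-non-essential")
    return actions
```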

Observability Foundation

The observability foundation routes logs, metrics, and audit events from all accounts to centralised storage and analysis. Centralisation serves three purposes: it aggregates visibility across the environment, it protects audit trails from tampering by storing them outside the accounts they monitor, and it provides a single location for security investigation.

Log routing collects multiple log types from each account:

+-------------------------------------------------------------------------+
| LOG AGGREGATION ARCHITECTURE |
+-------------------------------------------------------------------------+
| |
| WORKLOAD ACCOUNTS LOG ARCHIVE ACCOUNT |
| +-------------------------------+ +-------------------------+ |
| | | | | |
| | +-------+ API calls | | +-------------------+ | |
| | |Cloud +---------------------|------->| | Immutable Storage | | |
| | |Trail | | | | | | |
| | +-------+ | | | - Write-once | | |
| | | | | - No delete | | |
| | +-------+ Network flows | | | - 7-year retain | | |
| | | VPC +---------------------|------->| | | | |
| | | Flows | | | | +---------+ | | |
| | +-------+ | | | |CloudTrl | | | |
| | | | | +---------+ | | |
| | +-------+ Application logs | | | +---------+ | | |
| | | App +---------------------|------->| | |VPCFlows | | | |
| | | Logs | | | | +---------+ | | |
| | +-------+ | | | +---------+ | | |
| | | | | |AppLogs | | | |
| +-------------------------------+ | | +---------+ | | |
| | +-------------------+ | |
| SECURITY ACCOUNT | | |
| +-------------------------------+ | +-------------------+ | |
| | | | | Analysis Layer | | |
| | +-------+ Security findings | | | | | |
| | |Detect +---------------------|------->| | - Query access | | |
| | |Svc | | | | - SIEM ingest | | |
| | +-------+ | | | - Alerting | | |
| | | | +-------------------+ | |
| +-------------------------------+ +-------------------------+ |
| |
+-------------------------------------------------------------------------+

The log archive account stores logs in immutable storage configured with retention locks. Once written, logs cannot be deleted or modified until the retention period expires. This immutability ensures that a compromised workload account cannot cover its tracks by deleting logs; the logs reside in a separate account that the compromised credentials cannot access.

Retention periods balance compliance requirements against storage costs. A common baseline:

Log Type            Retention Period   Rationale
API audit logs      7 years            Regulatory compliance, forensic investigation
Network flow logs   90 days            Security investigation, troubleshooting
Application logs    30-90 days         Debugging, performance analysis
Security findings   1 year             Trend analysis, compliance reporting

Storage costs for audit logs scale with activity volume. A small organisation generating 10GB of CloudTrail logs monthly incurs approximately £2-3/month for standard storage plus £0.50-1/month for the immutable storage premium. These costs increase linearly with organisational size and cloud activity intensity.
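A back-of-envelope check of those figures, using illustrative unit prices (roughly £0.02 per GB-month for standard storage plus £0.005 per GB-month as an immutable-storage premium; real prices vary by region and storage class):

```python
# Hypothetical cost sketch: monthly storage cost once `months_retained`
# months of logs have accumulated at a steady ingest rate.
def monthly_log_cost(new_gb_per_month: float, months_retained: int,
                     storage_rate: float = 0.02,
                     lock_premium: float = 0.005) -> float:
    stored_gb = new_gb_per_month * months_retained
    return stored_gb * (storage_rate + lock_premium)

# 10 GB/month after one year of accumulation: ~£3/month and growing,
# consistent with the £2-3/month ballpark above.
year_one = monthly_log_cost(10, 12)
```

Note the cost grows with *accumulated* volume, not just monthly ingest, until the oldest logs begin expiring out of retention.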

Alert routing ensures that critical findings reach responders promptly. Security findings above configurable severity thresholds trigger notifications to the security team. Cost anomalies trigger notifications to finance and platform teams. Availability alerts route to operations. The landing zone establishes the routing infrastructure; specific alert definitions develop as workloads deploy and operational patterns emerge.

Multi-Region Considerations

Multi-region deployment adds complexity but addresses data residency requirements, disaster recovery needs, and user proximity for latency-sensitive applications. The landing zone architecture extends to multiple regions while maintaining centralised governance.

+-------------------------------------------------------------------------+
| MULTI-REGION LANDING ZONE |
+-------------------------------------------------------------------------+
| |
| +----------------------+ |
| | MANAGEMENT | |
| | (Global) | |
| +----------+-----------+ |
| | |
| +------------------+------------------+ |
| | | | |
| +------v------+ +------v------+ +-----v-------+ |
| | SECURITY | | LOG ARCHIVE| | IDENTITY | |
| | (Global) | | (Global) | | (Global) | |
| +-------------+ +-------------+ +-------------+ |
| |
| +--------------------------------+ +--------------------------------+ |
| | EU-WEST-1 | | EU-NORTH-1 | |
| | | | | |
| | +----------+ +-----------+ | | +----------+ +-----------+ | |
| | | HUB | | WORKLOAD | | | | HUB | | WORKLOAD | | |
| | | VNET | | SPOKE | | | | VNET | | SPOKE | | |
| | +----+-----+ +-----+-----+ | | +----+-----+ +-----+-----+ | |
| | | | | | | | | |
| | +------+-------+ | | +------+-------+ | |
| | | | | | | |
| +--------------+-----------------+ +--------------+-----------------+ |
| | | |
| +----------------+------------------+ |
| | |
| Inter-region peering / Transit |
| |
+-------------------------------------------------------------------------+

Management, security, and logging functions remain global services that span all regions. Each region hosts its own network hub and workload spokes. Inter-region connectivity enables workloads in one region to communicate with workloads in another when required, though most traffic should remain regional.

Data residency requirements often drive multi-region deployment. Personal data of EU residents might require processing within EU regions, while operational data for staff in other locations might use regions closer to them. The landing zone enables regional workload deployment while maintaining global governance and visibility.

Implementation Considerations

Organisations with minimal IT capacity

A single IT person implementing cloud infrastructure should prioritise immediate functionality over architectural elegance. The minimal four-account hierarchy provides essential security separation without overwhelming management complexity.

Begin with platform-provided landing zone tooling where available. AWS Control Tower, Azure Landing Zones, and Google Cloud Foundation Toolkit provide automated deployment of baseline configurations. These tools make opinionated choices that might not match every requirement but provide secure defaults that require less expertise to maintain than custom implementations.

Accept temporary limitations that can evolve later. A single workload account containing all applications and environments functions adequately for small deployments. As growth occurs or compliance requirements emerge, workload separation can proceed incrementally without redesigning the core landing zone.

Estimated implementation timeline: 2-4 weeks for initial deployment using platform automation, with ongoing refinement as workloads deploy.

Organisations with established IT capacity

Dedicated cloud engineering capacity enables custom landing zone design matching organisational requirements rather than accepting platform defaults. Custom implementations provide flexibility in account structure, policy design, and integration patterns.

Infrastructure as Code (IaC) should define all landing zone components. Terraform, Pulumi, or platform-native tools (CloudFormation, ARM templates) capture the complete configuration in version-controlled repositories. Changes proceed through pull request review and automated testing rather than manual console operations.
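Conceptually, IaC tooling reconciles a declared state held in version control with what is actually deployed. A minimal sketch of that comparison, with made-up resource names and attributes:

```python
# Illustrative sketch of the core IaC idea: compare the configuration
# declared in the repository with the deployed configuration and report
# the drift that a reviewed change would reconcile. Not any real tool's API.

declared = {
    "hub-vnet": {"cidr": "10.0.0.0/16"},
    "log-bucket": {"encrypted": True},
}
deployed = {
    "hub-vnet": {"cidr": "10.0.0.0/16"},
    "log-bucket": {"encrypted": False},   # someone flipped this in the console
}

def drift(declared: dict, deployed: dict) -> dict:
    """Resources whose deployed configuration differs from the declaration."""
    return {
        name: {"declared": spec, "deployed": deployed.get(name)}
        for name, spec in declared.items()
        if deployed.get(name) != spec
    }

print(drift(declared, deployed))
# → {'log-bucket': {'declared': {'encrypted': True}, 'deployed': {'encrypted': False}}}
```

Real tools such as Terraform perform this plan/apply cycle against the provider's live API rather than a dictionary, but the review workflow is the same: the diff, not the console, is the unit of change.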

Account vending automation eliminates the bottleneck of manual account creation. Teams request accounts through self-service interfaces, and automation provisions the account with standardised configuration within minutes. The provisioned account connects to the network hub, receives baseline policies, and appears in monitoring systems without manual intervention.
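A minimal sketch of the vending step, turning a self-service request into a standardised account specification. The OU path, baseline policy names, and field layout below are hypothetical:

```python
# Hedged sketch of account vending: a request (team + environment) becomes
# a standardised specification that downstream automation would provision.
# All names here are illustrative, not a real platform's schema.

def vend_account(team: str, environment: str) -> dict:
    if environment not in ("dev", "test", "prod"):
        raise ValueError(f"unknown environment: {environment}")
    return {
        "name": f"{team}-{environment}",
        "ou_path": f"/workloads/{environment}",  # placement in the hierarchy
        "baseline_policies": ["deny-root-user", "require-encryption"],
        "network": {"attach_to_hub": True},       # joins the hub-and-spoke
        "monitoring": {"forward_logs_to": "log-archive"},
    }

spec = vend_account("payments", "prod")
print(spec["name"])     # → payments-prod
print(spec["ou_path"])  # → /workloads/prod
```

Because every vended account starts from the same specification, baseline policies, hub attachment, and log forwarding need no per-account manual steps.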

Estimated implementation timeline: 2-3 months for custom landing zone development, including IaC framework selection, policy definition, automation development, and testing.

High-security and compliance contexts

Environments processing sensitive data require additional controls beyond baseline landing zone architecture. Protection data, financial information, and health records demand encryption with customer-managed keys, network isolation preventing any internet egress, and audit logging capturing all data access.

Dedicated accounts for sensitive workloads provide additional blast radius containment. These accounts receive stricter service control policies, more aggressive alerting thresholds, and potentially different retention periods for compliance. Network architecture might isolate these workloads on separate spokes with no peering to less sensitive environments.
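The service control policy semantics at work here can be shown in a few lines: a deny in the policy wins regardless of the principal's own IAM permissions. The denied actions below are illustrative choices for a no-egress sensitive account, not a complete or recommended policy:

```python
# Sketch of preventive-guardrail evaluation in the style of an AWS Service
# Control Policy: the SCP bounds what any principal in the account can do.
# Policy content is illustrative only.

SENSITIVE_ACCOUNT_SCP = {
    "deny_actions": [
        "ec2:CreateInternetGateway",  # blocks creating an egress path
        "s3:PutBucketPolicy",         # blocks self-managed bucket exposure
    ],
}

def is_allowed(action: str, iam_allows: bool, scp: dict) -> bool:
    """An SCP deny wins even when the principal's IAM policy would allow."""
    if action in scp["deny_actions"]:
        return False
    return iam_allows

print(is_allowed("ec2:CreateInternetGateway", True, SENSITIVE_ACCOUNT_SCP))  # → False
print(is_allowed("rds:CreateDBInstance", True, SENSITIVE_ACCOUNT_SCP))       # → True
```

The point of the sketch is the precedence rule: even an account administrator with full IAM permissions cannot perform a denied action, which is what makes the guardrail preventive rather than advisory.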

Third-party audit requirements influence landing zone design. SOC 2, ISO 27001, or sector-specific frameworks require demonstrable controls. Landing zone documentation should map controls to framework requirements, enabling auditors to verify implementation without extensive technical investigation.

Field-connected contexts

Organisations with field offices connecting to cloud services face latency and reliability challenges. Landing zone network architecture should account for these constraints.

Regional deployment reduces latency for field users. If field offices in East Africa access cloud resources hosted in EU West, each round trip incurs 100-200ms of network latency. Deploying workloads in regions closer to users (where available and compliant with data residency requirements) improves the experience substantially.

Caching and edge services reduce repeated requests to origin infrastructure. Static content, API responses that change infrequently, and authentication tokens can cache at edge locations closer to users. The landing zone network foundation should accommodate edge service integration.

Offline-capable application architectures reduce dependence on constant connectivity. While the landing zone itself cannot make applications offline-capable, network architecture should support the eventual consistency patterns that offline applications require for synchronisation.

See also