SD-WAN
Software-defined wide area networking (SD-WAN) decouples the network control plane from the underlying transport infrastructure, creating an overlay network that routes traffic based on application requirements rather than static path configurations. The technology abstracts physical circuits into a unified logical network where a central controller makes routing decisions according to policies, real-time path quality measurements, and application identification.
Traditional WAN architectures bind organisations to expensive MPLS circuits with fixed bandwidth allocations. A branch office requiring 50 Mbps of bandwidth receives that capacity regardless of whether current demand is 5 Mbps or 50 Mbps, and traffic follows predetermined paths even when those paths experience degradation. SD-WAN transforms this model by treating all available transport as a resource pool. A branch office with a 100 Mbps fibre connection, a 50 Mbps LTE backup, and a 20 Mbps satellite link presents 170 Mbps of aggregate capacity to the SD-WAN fabric, with traffic distributed across paths according to application needs and current conditions.
- Overlay network
- A logical network constructed on top of one or more physical networks (underlays). SD-WAN creates encrypted tunnels between endpoints across any IP-capable transport.
- Underlay network
- The physical transport infrastructure carrying overlay traffic. Examples include MPLS circuits, broadband internet, LTE/5G mobile connections, and satellite links.
- Control plane
- The system component responsible for making routing decisions. In SD-WAN, a centralised controller determines how traffic should flow; edge devices execute those decisions.
- Data plane
- The system component responsible for forwarding packets. SD-WAN edge devices inspect traffic, apply policies, and transmit packets across selected paths.
- Application-aware routing
- Traffic steering based on identified application type rather than destination address alone. Video conferencing traffic routes differently from file transfers even when destined for the same endpoint.
- Path quality metrics
- Measurements of transport characteristics including latency, jitter, packet loss, and available bandwidth. SD-WAN uses these metrics to select optimal paths for each traffic class.
Overlay architecture
SD-WAN operates through encrypted tunnels that span multiple transport connections, creating a mesh of virtual paths between sites. Each site runs an edge device that terminates these tunnels and participates in the overlay fabric. The edge device maintains multiple underlay connections and dynamically selects which physical path carries each packet based on policy and real-time measurements.
[Diagram: "SD-WAN overlay fabric". Headquarters and regional hub SD-WAN edges, each containing a policy engine, are linked by encrypted IPsec overlay tunnels. Underlay connections: headquarters MPLS 100 Mbps, fibre 500 Mbps, LTE 100 Mbps; regional hub fibre 200 Mbps, LTE 75 Mbps, VSAT 10 Mbps; field office LTE-A 50 Mbps, LTE-B 30 Mbps, VSAT 5 Mbps. Outside the data path sit the SD-WAN controller (path selection, policy distribution, key management) and the orchestration platform (configuration, monitoring, reporting).]
Figure 1: SD-WAN overlay architecture showing edge devices, underlay connections, and centralised management
The controller distributes routing policies and cryptographic keys to edge devices but does not sit in the data path. Traffic flows directly between edge devices through the overlay tunnels; the controller intervenes only when policies change or when edges require updated path information. This architecture ensures that controller unavailability does not disrupt existing traffic flows, though new policy deployments require controller connectivity.
Each edge device builds tunnels to other edges across all available underlay connections. A site with three WAN links establishes three parallel tunnels to each peer site, enabling packet-level path selection. The edge continuously measures path quality by exchanging probe packets with peers, maintaining current latency, jitter, and loss statistics for each tunnel. When application traffic arrives, the edge consults its policy table, identifies the traffic class, and selects the tunnel meeting that class’s requirements.
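The probe-driven measurement loop can be sketched as follows. This is a minimal illustration assuming an exponentially weighted moving average (EWMA) smoother over probe results; the class name, fields, and smoothing factor are illustrative, not any vendor's API.

```python
# Sketch: per-tunnel path quality tracking from probe exchanges.
# EWMA smoothing and a 0.5 weight are illustrative assumptions.

class PathMetrics:
    """Smoothed latency, jitter, and loss statistics for one tunnel."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha          # weight given to each new sample
        self.latency_ms = None
        self.jitter_ms = 0.0
        self.loss_pct = 0.0
        self._last_latency = None

    def record_probe(self, latency_ms=None):
        """Record one probe result; latency_ms is None when the probe was lost."""
        loss_sample = 100.0 if latency_ms is None else 0.0
        self.loss_pct += self.alpha * (loss_sample - self.loss_pct)
        if latency_ms is None:
            return
        if self.latency_ms is None:            # first successful probe
            self.latency_ms = latency_ms
        else:
            # Jitter tracked as smoothed latency variation between probes
            variation = abs(latency_ms - self._last_latency)
            self.jitter_ms += self.alpha * (variation - self.jitter_ms)
            self.latency_ms += self.alpha * (latency_ms - self.latency_ms)
        self._last_latency = latency_ms

path = PathMetrics()
path.record_probe(20.0)
path.record_probe(30.0)
print(path.latency_ms, path.jitter_ms)  # 25.0 5.0
```

The edge would maintain one such tracker per tunnel and consult the smoothed values at forwarding time.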
The overlay abstracts transport heterogeneity from applications and users. A user at a field office experiences consistent connectivity regardless of whether their packets traverse the primary LTE connection, the backup LTE from a different carrier, or the satellite link. The SD-WAN fabric handles failover, load distribution, and path optimisation transparently.
Traffic steering mechanisms
SD-WAN identifies applications through multiple techniques operating in sequence. Deep packet inspection (DPI) examines packet payloads and protocol behaviours to classify traffic. The edge device recognises Microsoft Teams video by its characteristic signalling patterns, distinguishes Salesforce API calls from general HTTPS traffic by TLS certificate inspection, and identifies file transfer protocols regardless of port number.
When encryption prevents payload inspection, SD-WAN uses flow heuristics derived from packet timing, size distribution, and destination behaviour. Video conferencing produces consistent packet sizes at regular intervals; bulk transfers create bursts of maximum-size packets. These patterns enable classification even for encrypted traffic that reveals no application-layer information.
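A minimal sketch of this heuristic classification, using only packet sizes and inter-arrival times. The thresholds and class labels are purely illustrative assumptions; production classifiers use far richer feature sets.

```python
# Sketch: coarse classification of an encrypted flow from observable
# behaviour alone. All thresholds below are illustrative, not standards.
import statistics

def classify_flow(packet_sizes, inter_arrival_ms):
    """Return a coarse traffic class from packet-size and timing statistics."""
    mean_size = statistics.mean(packet_sizes)
    size_spread = statistics.pstdev(packet_sizes)
    mean_gap = statistics.mean(inter_arrival_ms)
    gap_spread = statistics.pstdev(inter_arrival_ms)
    # Real-time media: consistent packet sizes at steady, short intervals
    if size_spread < 50 and gap_spread < 5 and mean_gap < 40:
        return "real-time"
    # Bulk transfer: bursts of near-MTU packets
    if mean_size > 1200:
        return "bulk"
    return "interactive"

# Voice-like flow: ~200-byte packets roughly every 20 ms
print(classify_flow([200, 210, 205, 198], [20, 20, 21, 19]))  # real-time
```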
First-packet identification determines application type within the initial packets of a connection, enabling immediate policy application. Traditional QoS systems that classify based on established flow patterns introduce latency before proper handling begins. SD-WAN’s first-packet capability routes the opening handshake of a video call across the low-latency path rather than waiting for sufficient traffic to identify the application.
Traffic steering policies bind application classes to path requirements:
[Diagram: "Traffic steering policy model". An incoming packet passes through application identification (DPI plus heuristics), which recognises MS Teams voice; policy lookup maps it to the real-time voice class (max latency 150 ms, max jitter 30 ms, max loss 1%, priority critical); path selection compares current metrics: Path A (MPLS) 25 ms latency, 5 ms jitter, 0.1% loss, 45 Mbps available; Path B (broadband) 45 ms, 12 ms, 0.3%, 180 Mbps; Path C (LTE) 65 ms, 25 ms, 0.8%, 35 Mbps. Path A is selected and the packet is forwarded via its tunnel.]
Figure 2: Traffic steering decision flow from identification through path selection
The path selection algorithm evaluates current metrics against class requirements. For the real-time voice class requiring under 150 ms latency and under 30 ms jitter, all three paths currently qualify. The algorithm selects Path A (MPLS) because it provides the largest margin against the requirements, reserving Path B's capacity for traffic classes that prioritise bandwidth over latency.
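This margin-based selection can be sketched as follows, using the path metrics from Figure 2. Defining the margin as the smallest relative headroom across the three metrics is one plausible choice, not a standard; the names and data structures are illustrative.

```python
# Sketch: select the qualifying path with the largest headroom against
# a class's requirements. Values mirror Figure 2; margin definition
# (minimum relative headroom) is an illustrative assumption.

REQUIREMENTS = {"max_latency_ms": 150, "max_jitter_ms": 30, "max_loss_pct": 1.0}

PATHS = {
    "MPLS":      {"latency_ms": 25, "jitter_ms": 5,  "loss_pct": 0.1},
    "Broadband": {"latency_ms": 45, "jitter_ms": 12, "loss_pct": 0.3},
    "LTE":       {"latency_ms": 65, "jitter_ms": 25, "loss_pct": 0.8},
}

def select_path(paths, req):
    best, best_margin = None, -1.0
    for name, m in paths.items():
        # Disqualify any path breaching a requirement
        if (m["latency_ms"] > req["max_latency_ms"]
                or m["jitter_ms"] > req["max_jitter_ms"]
                or m["loss_pct"] > req["max_loss_pct"]):
            continue
        # Margin: smallest relative headroom across the three metrics
        margin = min(
            1 - m["latency_ms"] / req["max_latency_ms"],
            1 - m["jitter_ms"] / req["max_jitter_ms"],
            1 - m["loss_pct"] / req["max_loss_pct"],
        )
        if margin > best_margin:
            best, best_margin = name, margin
    return best

print(select_path(PATHS, REQUIREMENTS))  # MPLS
```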
When no single path meets requirements, SD-WAN can distribute a flow across multiple paths using packet-level load balancing. Unlike traditional per-flow load balancing that pins an entire connection to one path, packet-level distribution sends individual packets across different tunnels. The receiving edge resequences packets before delivery to the destination, masking the multi-path transport from applications. This technique aggregates bandwidth from paths that individually lack sufficient capacity: a transfer that takes 20 seconds on a single 50 Mbps path completes in 10 seconds across two 50 Mbps paths used in parallel.
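The receiving edge's resequencing step can be sketched as a buffer keyed by sequence number; this is an illustrative simplification that ignores timeouts and buffer limits a real implementation needs.

```python
# Sketch: resequencing packets that arrive out of order after
# packet-level distribution across two tunnels. Illustrative only;
# real edges bound the buffer and time out missing packets.

def resequence(packets):
    """Release payloads in sequence-number order.

    `packets` is an iterable of (seq, payload) in arrival order;
    gaps are buffered until the missing packet arrives.
    """
    buffer = {}
    next_seq = 0
    delivered = []
    for seq, payload in packets:
        buffer[seq] = payload
        # Drain the buffer while the next expected packet is present
        while next_seq in buffer:
            delivered.append(buffer.pop(next_seq))
            next_seq += 1
    return delivered

# Packets 0 and 2 arrive via the fast path, 1 and 3 via the slower one
arrivals = [(0, "p0"), (2, "p2"), (1, "p1"), (3, "p3")]
print(resequence(arrivals))  # ['p0', 'p1', 'p2', 'p3']
```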
Forward error correction (FEC) provides another mechanism for paths with packet loss. The edge device adds redundant packets to critical flows, enabling the receiving edge to reconstruct lost packets without retransmission. A flow with 5% FEC overhead can tolerate 5% packet loss with no impact on the application. FEC trades bandwidth for reliability, making it suitable for real-time traffic that cannot wait for TCP retransmission but inappropriate for bulk transfers where retransmission costs less than continuous overhead.
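The principle can be illustrated with the simplest possible scheme: one XOR parity packet per block, which recovers exactly one lost packet per block. This is a teaching sketch, not what production SD-WAN uses (real implementations typically use stronger codes such as Reed-Solomon).

```python
# Sketch: XOR parity FEC. One parity packet per block of N data packets
# lets the receiver rebuild a single lost packet without retransmission.
# Illustrative only; assumes equal-length packets.

def xor_packets(packets):
    """XOR equal-length packets byte by byte."""
    out = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            out[i] ^= b
    return bytes(out)

def recover(received, parity):
    """Rebuild the single missing packet (marked None) in a block."""
    missing = [i for i, p in enumerate(received) if p is None]
    if not missing:
        return received
    present = [p for p in received if p is not None]
    # XOR of all surviving packets plus parity yields the lost packet
    received[missing[0]] = xor_packets(present + [parity])
    return received

block = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_packets(block)                   # fifth packet: 25% overhead
damaged = [b"AAAA", None, b"CCCC", b"DDDD"]   # packet 1 lost in transit
print(recover(damaged, parity)[1])            # b'BBBB'
```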
Link aggregation and failover
SD-WAN aggregates bandwidth through simultaneous use of multiple paths, distinguishing it from traditional active-passive failover designs. The aggregate capacity equals the sum of all usable paths minus overhead for encryption and protocol headers, approximately 5-10% of raw bandwidth.
Failover occurs automatically when path monitoring detects degradation. The edge device sends probe packets at configurable intervals, with 1-second probes being common for critical links. When probes fail or return metrics exceeding thresholds, the edge marks that path unavailable and redistributes traffic to remaining paths. Failover time depends on probe interval and failure detection threshold: with 1-second probes and a 3-probe failure threshold, detection occurs within 3 seconds. Traffic redistribution adds another 1-2 seconds as the edge updates forwarding tables and adjusts quality parameters for remaining paths.
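The detection logic described above reduces to a small state machine; the sketch below uses the 1-second probe interval and 3-probe threshold from the text, with illustrative names.

```python
# Sketch: probe-driven failure detection. With 1-second probes and a
# 3-probe threshold, worst-case detection time is 3 seconds.

class PathMonitor:
    def __init__(self, probe_interval_s=1.0, failure_threshold=3):
        self.probe_interval_s = probe_interval_s
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0
        self.available = True

    def on_probe(self, success):
        if success:
            # Any successful probe resets the failure run
            self.consecutive_failures = 0
            self.available = True
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.failure_threshold:
                self.available = False

    def worst_case_detection_s(self):
        """Upper bound on time from failure to the path being marked down."""
        return self.probe_interval_s * self.failure_threshold

mon = PathMonitor()
for _ in range(3):
    mon.on_probe(False)
print(mon.available, mon.worst_case_detection_s())  # False 3.0
```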
[Diagram: "Multi-link failover topology". A field office SD-WAN edge maintains three active tunnels to the regional hub edge: Tunnel A over fibre (primary, 100 Mbps), Tunnel B over LTE (secondary, 50 Mbps), and Tunnel C over VSAT (tertiary, 10 Mbps).]

Normal operation (all links healthy):

| Traffic class | Path | Bandwidth used | Latency |
|---|---|---|---|
| Voice/video | Fibre | 15 Mbps | 25 ms |
| Business apps | Fibre + LTE | 80 Mbps | 25-45 ms |
| Bulk transfer | LTE | 40 Mbps | 45 ms |
| Background sync | VSAT | 8 Mbps | 600 ms |

Fibre failure (detected at T+3 seconds):

| Traffic class | Path | Bandwidth used | Latency |
|---|---|---|---|
| Voice/video | LTE | 15 Mbps | 45 ms |
| Business apps | LTE | 35 Mbps | 45 ms |
| Bulk transfer | Suspended | 0 Mbps | n/a |
| Background sync | VSAT | 8 Mbps | 600 ms |

Note: bulk transfers are suspended to preserve LTE capacity for higher-priority traffic, resuming when the fibre recovers or if LTE capacity exceeds critical traffic demand.

Figure 3: Multi-link failover showing traffic redistribution during primary link failure
Brownout detection identifies degraded links that remain technically connected but perform poorly. A fibre connection experiencing congestion might deliver packets but with 200ms latency instead of the normal 25ms. Simple up/down monitoring misses this condition. SD-WAN’s continuous path quality measurement detects brownouts and steers traffic away from degraded paths before application impact occurs.
Failback behaviour requires careful configuration. Aggressive failback causes traffic to immediately return to a recovered link, risking instability if the link flaps. Conservative failback waits for sustained recovery before restoring traffic, providing stability but delaying return to optimal performance. A common configuration requires 60 seconds of healthy probe responses before failback, with gradual traffic migration over another 30 seconds rather than immediate switchover.
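The conservative variant amounts to a hold-down timer: any failure restarts the healthy window. The sketch below uses the 60-second window from the text; timestamps are plain seconds and all names are illustrative.

```python
# Sketch: conservative failback gate requiring a sustained healthy
# window (60 s here, per the text) before traffic returns to the link.

class FailbackGate:
    def __init__(self, hold_down_s=60.0):
        self.hold_down_s = hold_down_s
        self.healthy_since = None   # time of first healthy probe in the run

    def on_probe(self, now_s, healthy):
        """Return True once the link has been healthy long enough to restore."""
        if not healthy:
            self.healthy_since = None   # any failure restarts the window
            return False
        if self.healthy_since is None:
            self.healthy_since = now_s
        return now_s - self.healthy_since >= self.hold_down_s

gate = FailbackGate()
print(gate.on_probe(0, True))    # False: window just started
print(gate.on_probe(30, True))   # False: only 30 s healthy
print(gate.on_probe(61, True))   # True: 60 s window satisfied
```

The gradual 30-second traffic migration mentioned above would then ramp load onto the restored link rather than switching all flows at once.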
Security integration
SD-WAN encrypts all overlay traffic using IPsec or proprietary tunnel protocols. Each tunnel between edge devices uses unique session keys derived from a key exchange authenticated by the controller. Key rotation occurs at configurable intervals, with 24-hour rotation being common for general traffic and 1-hour rotation for high-sensitivity environments.
The security model assumes hostile underlay networks. Traffic traversing public internet receives the same encryption as traffic crossing private MPLS circuits, eliminating security distinctions between transport types. An attacker observing underlay traffic sees encrypted packets with no visibility into application data, source/destination relationships beyond the edge device IP addresses, or traffic patterns within the overlay.
Integrated firewall capabilities in SD-WAN edges provide zone-based security without requiring separate firewall appliances at each site. The edge inspects traffic between network segments, applying policies that permit finance applications to reach headquarters while blocking direct internet access for point-of-sale systems. These policies distribute from the central controller, ensuring consistent enforcement across all sites.
Next-generation firewall functions including intrusion prevention, URL filtering, and malware scanning exist in higher-tier SD-WAN products. Organisations with existing centralised security infrastructure can alternatively backhaul inspection-required traffic through a security hub rather than processing at the edge. The choice depends on bandwidth economics: local processing reduces backhaul volume while centralised processing consolidates security tooling and expertise.
Segmentation within the overlay creates isolated virtual networks sharing physical infrastructure. A humanitarian organisation might segment programme data networks from administrative systems, ensuring field-collected beneficiary information never traverses the same logical network as HR and finance traffic. The SD-WAN fabric enforces segmentation end-to-end; traffic cannot cross segment boundaries except through explicitly defined inspection points.
Integration with identity-aware security enables user-based policies beyond network-based rules. When combined with identity providers, SD-WAN can route a finance user’s traffic through compliance inspection paths while allowing IT administrators direct cloud access. This integration bridges network-layer SD-WAN with application-layer Zero Trust Network Access controls.
Cloud connectivity
Cloud on-ramp capabilities optimise paths to cloud service providers by establishing direct connections at internet exchange points or cloud provider edge locations. Rather than routing cloud-destined traffic through headquarters for internet breakout, SD-WAN directs this traffic from branch edges to nearby cloud entry points.
[Diagram: "Cloud on-ramp architecture". HQ, regional hub, and branch SD-WAN edges reach Microsoft 365 and AWS services. An SD-WAN cloud gateway at an internet exchange point peers directly with the providers. Microsoft 365 traffic from Branch A without cloud on-ramp: Branch A -> HQ -> internet -> Microsoft (120 ms latency); with cloud on-ramp: Branch A -> cloud gateway -> Microsoft (35 ms).]
Figure 4: Cloud on-ramp architecture showing direct cloud access versus traditional backhaul
SaaS optimisation extends beyond routing to include protocol-specific enhancements. SD-WAN vendors partner with major SaaS providers to obtain real-time endpoint information, enabling edges to direct traffic to the optimal service instance. Microsoft publishes endpoint lists that SD-WAN systems consume automatically, ensuring Teams traffic routes to the nearest Microsoft edge rather than a geographically distant data centre.
For organisations using cloud-hosted applications, SD-WAN edges can deploy directly within cloud provider networks. A virtual edge appliance in AWS connects to physical edges at offices, extending the overlay fabric into the cloud environment. Applications running in AWS communicate with branch users through the same policy-controlled, encrypted overlay as inter-office traffic.
Multi-cloud architectures benefit from SD-WAN's abstraction of underlying transport. An organisation with workloads in AWS, Azure, and a private data centre connects all three through the SD-WAN overlay. Application traffic between clouds traverses encrypted tunnels with path selection based on current conditions rather than static routing configurations. The SD-WAN controller presents a unified view of connectivity regardless of where workloads reside.
Deployment models
Appliance-based deployment places dedicated SD-WAN hardware at each site. The appliance handles all edge functions including encryption, policy enforcement, and path selection. Hardware appliances provide consistent performance with dedicated processing for cryptographic operations. Sizing follows site requirements: a small branch office with 100 Mbps aggregate WAN capacity requires an appliance rated for that throughput with appropriate cryptographic acceleration.
| Deployment Model | Capital Cost | Operational Model | Suitable For |
|---|---|---|---|
| Hardware appliance (self-managed) | High upfront | Internal team manages | Organisations with network engineering capacity |
| Virtual appliance (self-managed) | Medium upfront | Internal team manages | Existing virtualisation infrastructure |
| Hardware appliance (co-managed) | Medium upfront | Vendor assists configuration | Limited internal capacity, need support |
| Cloud-delivered (managed) | Low upfront, ongoing subscription | Vendor operates fully | No internal network engineering capacity |
Virtual appliances run SD-WAN edge functions on existing server infrastructure or in cloud environments. This model suits organisations with established virtualisation platforms and available compute capacity. Performance depends on allocated resources; CPU-intensive encryption requires adequate core allocation. Virtual deployment eliminates hardware procurement and logistics for remote sites but introduces dependency on underlying infrastructure availability.
Cloud-delivered SD-WAN shifts edge functions to vendor-operated points of presence. Sites connect to the nearest vendor PoP through standard internet connections; the vendor fabric handles inter-site connectivity. This model minimises on-premises equipment to simple routers or even existing firewalls. Organisations with limited IT capacity find this model attractive because the vendor handles path optimisation, security policy management, and software updates. The tradeoff is reduced control and potential data sovereignty concerns when traffic traverses vendor infrastructure.
Co-managed arrangements blend internal control with vendor support. The organisation owns or leases edge appliances and maintains policy authority while the vendor provides monitoring, troubleshooting assistance, and change implementation support. This model suits organisations building SD-WAN expertise who need backup support during the transition period.
Field deployment considerations
Field offices in humanitarian and development contexts present connectivity challenges that SD-WAN addresses differently than headquarters or regional hub deployments. Intermittent connectivity, high-latency satellite links, and unreliable power require specific architectural accommodations.
Survivability during WAN outage ensures field offices continue operating when all external connectivity fails. SD-WAN edges cache DNS responses, maintain local routing tables, and preserve security policies during disconnection. Users access locally-hosted applications and cached cloud content without WAN dependency. When connectivity restores, the edge resynchronises with the controller and uploads queued telemetry.
High-latency links affect SD-WAN control plane operations. Satellite connections with 600ms round-trip time delay policy distribution from controller to edge. Configuration changes that propagate to terrestrial sites in seconds require tens of seconds for satellite-connected locations. The edge must handle this delay gracefully, applying consistent policies during the propagation window rather than operating with partial updates.
Bandwidth scarcity demands aggressive traffic prioritisation. A field office with 10 Mbps satellite capacity cannot afford bandwidth waste on low-priority traffic during peak operational hours. SD-WAN policies for field sites implement strict scheduling: bulk synchronisation occurs overnight when demand is low, background updates queue until off-peak windows, and bandwidth reservations guarantee minimum allocation for critical applications regardless of overall demand.
TCP performance over satellite links suffers from the bandwidth-delay product problem. Standard TCP congestion control algorithms interpret satellite latency as network congestion, throttling throughput far below link capacity. SD-WAN edges implement WAN optimisation techniques including TCP acceleration (terminating TCP locally and using optimised protocols for the WAN segment), data deduplication (avoiding retransmission of previously-seen byte sequences), and compression (reducing payload size for compressible content).
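The bandwidth-delay product problem is easy to quantify: a TCP sender can have at most one window of data in flight per round trip, so throughput is capped at window size divided by RTT. The figures below use the 10 Mbps VSAT and 600 ms round-trip numbers from this section.

```python
# Sketch: why satellite RTT throttles standard TCP. Throughput is
# bounded by window_bytes / RTT regardless of link capacity.

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bytes that must be in flight to keep the link full."""
    return bandwidth_bps * rtt_s / 8

def max_throughput_bps(window_bytes, rtt_s):
    """Throughput ceiling imposed by a fixed window over a given RTT."""
    return window_bytes * 8 / rtt_s

# A 10 Mbps link with 600 ms RTT needs 750 KB in flight to stay full:
print(bdp_bytes(10e6, 0.6))                        # 750000.0
# A classic 64 KB TCP window caps throughput at roughly 0.87 Mbps:
print(round(max_throughput_bps(64 * 1024, 0.6)))   # 873813
```

This is why TCP acceleration terminates the connection locally: the optimised WAN-segment protocol is not bound by the end host's window behaviour.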
Local caching reduces WAN dependency for repeated content access. When multiple field staff access the same cloud-hosted document, the SD-WAN edge serves subsequent requests from local cache rather than fetching repeatedly over the constrained WAN link. Cache effectiveness varies by workload; document collaboration benefits substantially while real-time communication cannot cache.
Power instability requires SD-WAN edges with graceful shutdown capabilities. When UPS battery depletes, the edge must save state and shut down cleanly rather than corrupting configuration. Upon power restoration, the edge should boot autonomously and restore connectivity without manual intervention. Hardware selection for field sites prioritises power efficiency; a device consuming 15W operates longer on battery backup than one requiring 50W, and solar power systems size according to equipment consumption.
Technology options
SD-WAN implementations span open source projects, commercial products with nonprofit programmes, and managed service offerings. Selection criteria include technical capabilities, operational requirements, and total cost across deployment lifecycle.
Open source
VyOS provides router functionality including IPsec VPN, traffic shaping, and policy routing that can construct SD-WAN-like architectures. VyOS lacks integrated orchestration and application identification; organisations must build these capabilities through scripting and external tools. The approach suits technically capable teams seeking maximum flexibility without licensing costs.
OpenWrt with appropriate packages creates lightweight SD-WAN edges for small sites. The platform runs on commodity hardware including consumer routers, reducing equipment cost for locations where performance requirements are modest. Management requires custom tooling; no central controller exists in the open source ecosystem comparable to commercial offerings.
FlexiWAN offers open source SD-WAN with central management. The project provides application identification, multi-link support, and orchestration capabilities approaching commercial products. Community support varies; organisations should assess their tolerance for self-support before deployment.
Commercial with nonprofit programmes
Major SD-WAN vendors offer reduced pricing or donated licenses to qualifying organisations:
| Vendor | Nonprofit Offering | Capabilities | Considerations |
|---|---|---|---|
| Cisco Meraki | TechSoup availability | Full-featured cloud-managed | US-headquartered; cloud management mandatory |
| Fortinet | Nonprofit pricing | Integrated security features | Complexity suits larger deployments |
| VMware VeloCloud | Case-by-case | Strong cloud integration | Requires VMware relationship |
| Silver Peak (HPE Aruba) | Partner-dependent | WAN optimisation strength | Acquisition may affect programme |
Vendor selection involves jurisdictional considerations beyond technical features. Organisations operating in contexts where US-headquartered vendor relationships create risk should evaluate alternatives. Cloud-managed platforms route telemetry and configuration through vendor infrastructure, creating data exposure that self-managed alternatives avoid.
Managed service providers
Telecommunications carriers and managed service providers offer SD-WAN as a service, bundling connectivity with overlay network management. This model simplifies procurement by consolidating WAN transport and SD-WAN technology under single contracts. Organisations surrender visibility and control in exchange for operational simplicity.
Service provider offerings suit organisations without network engineering staff who need reliable WAN connectivity and cannot justify building internal SD-WAN expertise. The provider handles equipment deployment, configuration management, and troubleshooting. Contract terms typically span 3-5 years with bandwidth and feature tiers determining pricing.
Migration from traditional WAN
Transitioning from MPLS or VPN-based WAN to SD-WAN proceeds through phases that maintain connectivity while introducing new capabilities.
Assessment inventories existing WAN circuits, bandwidth utilisation patterns, and application requirements. SD-WAN sizing derives from peak utilisation data adjusted for growth projections. Sites with 80% utilisation on 100 Mbps circuits need SD-WAN capacity exceeding 100 Mbps to accommodate both current traffic and overhead. Assessment also identifies applications that require specific path characteristics, informing traffic steering policy design.
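The sizing arithmetic can be made explicit. The growth and overhead factors below are illustrative assumptions (20% growth headroom, 10% tunnel overhead consistent with the 5-10% range cited earlier), not a standard formula.

```python
# Sketch: SD-WAN capacity sizing from measured peak utilisation.
# Growth and overhead factors are illustrative assumptions.

def required_capacity_mbps(peak_mbps, growth=0.20, overhead=0.10):
    """Peak demand, plus growth headroom, plus tunnel overhead."""
    return peak_mbps * (1 + growth) * (1 + overhead)

# A site peaking at 80 Mbps (80% of a 100 Mbps circuit):
print(round(required_capacity_mbps(80), 1))  # 105.6 -> provision above 100 Mbps
```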
Parallel operation deploys SD-WAN alongside existing WAN infrastructure. Initial deployment routes non-critical traffic through SD-WAN while production applications continue using established paths. This phase validates SD-WAN performance, refines traffic policies, and builds operational familiarity before production cutover.
Progressive migration shifts application classes to SD-WAN according to criticality and risk tolerance. Background traffic migrates first, followed by standard business applications, then communication tools, and finally critical systems. Each phase includes performance validation before proceeding. Rollback procedures maintain capability to return traffic to traditional WAN if SD-WAN issues emerge.
Circuit optimisation follows successful migration. Expensive MPLS circuits that provided both transport and implicit prioritisation become redundant when SD-WAN handles traffic steering. Organisations can reduce MPLS bandwidth, transition circuits to lower-cost internet, or eliminate MPLS entirely in favour of diverse internet connections. Cost savings from circuit optimisation often fund SD-WAN investment within 12-24 months.
Maintaining hybrid connectivity during transition requires careful coordination. SD-WAN edges must interoperate with traditional routers at sites not yet migrated. Policy consistency across both environments prevents traffic blackholes and routing asymmetry. The migration timeline balances urgency for cost savings against risk from rushed deployment.
Implementation considerations
For organisations with limited IT capacity
Cloud-delivered SD-WAN services minimise operational burden while providing connectivity improvements over basic internet VPN. Selection criteria should emphasise management simplicity, vendor support quality, and total cost including support contracts. Avoid platforms requiring deep networking expertise for routine operations.
Start with a pilot connecting 2-3 sites before broad deployment. The pilot validates that the selected platform meets requirements and builds familiarity before scaling. Select pilot sites with representative characteristics: one high-bandwidth headquarters or regional office, one standard branch, and one challenging field location if applicable.
Document traffic requirements by speaking with application owners rather than assuming network team knowledge is complete. Users often employ cloud applications that IT has not formally catalogued; SD-WAN traffic visibility during pilot operation reveals actual usage patterns.
For organisations with established IT functions
Self-managed SD-WAN provides maximum control and flexibility. Hardware or virtual appliances with central orchestration suit organisations with network engineering capacity. Plan for integration with existing security infrastructure, monitoring systems, and change management processes.
Design traffic policies based on measured application requirements rather than vendor defaults. Real-time communication genuinely needs low latency; labelling all traffic as critical defeats prioritisation. Establish 3-4 traffic classes with distinct requirements rather than granular per-application policies that become unmaintainable.
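A class model of that shape might look like the sketch below. The class names, thresholds, and application mappings are illustrative assumptions; the point is that applications map onto a handful of classes rather than carrying individual policies.

```python
# Sketch: a small, maintainable traffic-class model per the 3-4 class
# guidance above. All values are illustrative, not recommendations.

TRAFFIC_CLASSES = {
    "real-time":   {"max_latency_ms": 150,  "max_jitter_ms": 30,   "max_loss_pct": 1.0},
    "interactive": {"max_latency_ms": 300,  "max_jitter_ms": 50,   "max_loss_pct": 2.0},
    "bulk":        {"max_latency_ms": 1000, "max_jitter_ms": None, "max_loss_pct": 5.0},
    "background":  {"max_latency_ms": None, "max_jitter_ms": None, "max_loss_pct": None},
}

# Applications map onto classes, not onto per-application policies
APP_CLASS = {
    "teams-voice": "real-time",
    "crm-web": "interactive",
    "backup-sync": "background",
}

def class_for(app):
    # Unknown applications fall back to a sensible default class
    return APP_CLASS.get(app, "interactive")

print(class_for("teams-voice"))  # real-time
```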
Consider SD-WAN as part of broader network modernisation including Zero Trust Network Access for user access and Network Architecture refresh for LAN segments. The technologies complement each other; planning them together produces coherent architecture.
For field-intensive operations
Field deployment dominates SD-WAN architecture decisions for organisations with significant remote operations. Select platforms with proven satellite and LTE integration, robust offline operation, and low power consumption. Hardware durability matters; consumer-grade equipment fails in harsh conditions regardless of software capabilities.
Build conservative bandwidth assumptions. Satellite capacity is expensive and constrained; SD-WAN cannot manufacture bandwidth, only optimise its use. A 10 Mbps VSAT link shared among 20 users provides 500 Kbps per user at full contention. Set user expectations accordingly and implement policies that protect critical application access.
Test failover extensively before deployment. Simulate link failures at various points in the pilot to verify that traffic redistributes correctly and applications remain functional. Field staff depend on connectivity for safety and operational effectiveness; SD-WAN failures in remote locations have consequences beyond inconvenience.