Network Architecture

Network architecture establishes the structural framework through which devices, applications, and users communicate across an organisation. The architecture encompasses physical topology (how devices connect), logical topology (how traffic flows), addressing schemes (how devices are identified), and routing mechanisms (how packets find their destinations). For mission-driven organisations operating across headquarters, country offices, and field sites spanning multiple continents, network architecture determines whether staff can access critical systems reliably, whether bandwidth reaches those who need it, and whether the organisation can adapt its infrastructure as programmes scale or contract.

Network topology
The arrangement of network elements and their interconnections. Physical topology describes cable and hardware placement; logical topology describes data flow paths regardless of physical layout.
LAN (Local Area Network)
A network serving a single geographic location such as an office building, typically owned and operated by the organisation using it.
WAN (Wide Area Network)
A network connecting geographically dispersed locations, typically using carrier services rather than organisation-owned infrastructure between sites.
Subnet
A logical subdivision of an IP network that groups devices for addressing efficiency, traffic management, and security isolation.
VLAN (Virtual LAN)
A broadcast domain created by switches that partitions a physical network into multiple logical networks, enabling segmentation without separate physical infrastructure.

Design principles for distributed organisations

Network architecture for mission-driven organisations differs from commercial enterprise design in several structural ways. Programme-driven expansion creates networks that grow organically as new country offices or field sites open, rather than following planned campus buildouts. Funding cycles impose constraints where multi-year infrastructure investments compete against programme delivery for limited resources. Operating contexts range from stable headquarters with redundant fibre connections to field sites dependent on satellite links with 600ms latency and 2 Mbps throughput.

The principle of graceful degradation shapes architecture decisions at every level. A network serving humanitarian operations must continue functioning when primary links fail, when bandwidth drops below planned capacity, and when individual sites lose connectivity entirely. This principle manifests in topology choices that provide alternate paths, in application selection that tolerates latency and disconnection, and in local caching that enables work to continue during outages.

Appropriate complexity balances capability against operational sustainability. A headquarters network supporting 200 staff, multiple VLANs, redundant internet connections, and sophisticated monitoring represents appropriate complexity for a location with dedicated IT staff and reliable power. The same design deployed to a field office with intermittent electricity, no local IT support, and a single mobile data connection would fail within weeks. Each network segment requires complexity matched to local operational capacity.

Centralised policy with distributed implementation enables consistency without requiring every site to maintain identical infrastructure. Security policies, naming conventions, and addressing schemes originate from headquarters or regional hubs. Implementation varies by site capability: a country office might run local Active Directory domain controllers replicating from headquarters, while a field site uses cached credentials with periodic synchronisation when connectivity permits.

LAN architecture patterns

Local area networks within mission-driven organisations follow distinct patterns based on site function and scale. Understanding these patterns enables consistent design across diverse locations while accommodating local constraints.

Headquarters and large office pattern

Headquarters networks serve the largest user populations and host centralised services accessed by the entire organisation. The architecture employs a hierarchical model with core, distribution, and access layers. Core switches provide high-speed backbone connectivity between distribution switches and to data centre resources. Distribution switches aggregate access layer connections and enforce policy boundaries between network segments. Access switches connect end-user devices and typically correspond to physical areas such as floors or wings.

+------------------------------------------------------------------------+
| HEADQUARTERS NETWORK ARCHITECTURE |
+------------------------------------------------------------------------+
| |
| TIER 1: CORE LAYER (The Backbone) |
| High-speed transport. Redundant pair. |
| |
| +-----------+ +-----------+ |
| | CORE SW 1 |-----------| CORE SW 2 | |
| +-----------+ +-----------+ |
| | | |
| +-------------+-----------+-----------+-------------+ |
| | | | |
| v v v |
| |
| TIER 2: DISTRIBUTION LAYER (Policy Boundary) |
| Aggregates access switches, defines VLANs & Subnets. |
| |
| +--------------+ +--------------+ +--------------+ |
| | DIST SW 1 | | DIST SW 2 | | DIST SW 3 | |
| | (Floors 1-2) | | (Floors 3-4) | | (Data Ctr) | |
| +--------------+ +--------------+ +--------------+ |
| | | | |
| +---+---+ +---+---+ +---+---+ |
| | | | | | | |
| v v v v v v |
| |
| TIER 3: ACCESS LAYER (Local Connectivity) |
| Connects end devices. |
| |
| +----+ +----+ +----+ +----+ +----+ +----+ |
| | A1 | | A2 | | A3 | | A4 | | S1 | | S2 | |
| +----+ +----+ +----+ +----+ +----+ +----+ |
| | | | | | | |
| (PCs) (Phones) (APs) (IoT) (Web) (DB) |
| |
+------------------------------------------------------------------------+

Figure 1: Three-tier hierarchical LAN architecture for headquarters

This pattern provides predictable traffic paths, clear failure domains, and straightforward capacity planning. When an access switch fails, only devices connected to that switch lose connectivity. When a distribution switch fails, redundant uplinks to the core maintain connectivity for the affected access switches. Core switch failure triggers failover to the redundant core using first-hop redundancy protocols such as Virtual Router Redundancy Protocol (VRRP) or proprietary equivalents; with tuned timers, convergence completes in under a second.

Country office pattern

Country offices typically serve 20-80 users and may host local servers for applications requiring low latency or offline capability. The architecture collapses the three-tier model into two tiers, with a core/distribution layer and an access layer. Smaller offices collapse further to a single layer where one or two switches serve all functions.

+------------------------------------------------------------------------+
| COUNTRY OFFICE NETWORK |
+------------------------------------------------------------------------+
| |
| +------------------------+ |
| | CORE/DISTRIBUTION | |
| | | |
| | +------+ +------+ | |
| | | SW1 |----| SW2 | | (stacked or clustered) |
| | +------+ +------+ | |
| +----------+-------------+ |
| | |
| +---------------+---------------+ |
| | | | |
| +----v----+ +----v----+ +----v----+ |
| | A1 | | A2 | | Server | |
| | (Wing A)| | (Wing B)| | Rack | |
| +---------+ +---------+ +---------+ |
| |
| +------------------+ +------------------+ |
| | Router/Firewall | | WAN Connection | |
| | (edge device) |----| (fibre/leased) | |
| +------------------+ +------------------+ |
| |
+------------------------------------------------------------------------+

Figure 2: Two-tier LAN architecture for country offices

The core/distribution switches in this pattern handle routing between VLANs, enforce access control between segments, and provide uplinks to WAN connectivity. Stacking two switches into a single logical unit simplifies management while providing hardware redundancy. Access switches may be standalone units in separate building wings or floor-mounted switches in smaller configurations.

Field office pattern

Field offices operate under constraints that invalidate assumptions underlying larger network designs. Power reliability, equipment cooling, local support availability, and physical security all affect architecture decisions. The pattern emphasises simplicity, resilience to environmental factors, and operation without local IT expertise.

A field office network typically consists of a single integrated device combining routing, switching, wireless access point, and firewall functions. This convergence reduces failure points, simplifies sparing, and enables replacement by non-technical staff. Where the environment permits and requirements justify additional capacity, a separate wireless access point extends coverage.

+------------------------------------------------------------------------+
| FIELD OFFICE NETWORK |
+------------------------------------------------------------------------+
| |
| +------------------+ +------------------+ |
| | Connectivity | | Power | |
| | (VSAT/Mobile/ |-----| (Solar/UPS/ | |
| | Local ISP) | | Generator) | |
| +--------+---------+ +------------------+ |
| | |
| +--------v---------+ |
| | Integrated | |
| | Router/Switch/ | |
| | Wireless/ | |
| | Firewall | |
| +--------+---------+ |
| | |
| +-------+-------+ |
| | | | |
| +--v--+ +--v--+ +--v--+ |
| |Wired| | WiFi| |Local| |
| |Ports| | AP | |Srvr | (optional local cache/apps) |
| +-----+ +-----+ +-----+ |
| |
+------------------------------------------------------------------------+

Figure 3: Simplified field office network architecture

The integrated device connects to whatever WAN service is available: satellite terminal, mobile router, or local ISP connection. Power resilience through UPS or solar/battery systems protects against the outages common in field contexts. An optional local server provides caching and offline-capable applications when connectivity is intermittent.

WAN architecture and topology

Wide area networks connect an organisation’s distributed locations into a coherent whole. The topology chosen affects resilience, performance, cost, and operational complexity. Mission-driven organisations typically evolve through multiple WAN topologies as they grow, and many operate hybrid topologies combining different patterns across their network.

Hub-and-spoke topology

Hub-and-spoke topology connects all remote sites to a central hub, typically headquarters or a regional data centre. Traffic between spoke sites traverses the hub. This topology minimises circuit costs because each site requires only one WAN connection. Management centralises at the hub, simplifying monitoring and policy enforcement.

HEADQUARTERS
(Hub Site)
|
+------v------+
| Central |
| Router |
+------+------+
|
+------------+--------------+--------------+------------+
| | | | |
+-----v-----+ +----v----+ +-----v-----+ +----v----+ +-----v-----+
| Country | | Country | | Regional | | Country | | Country |
| Office | | Office | | Hub | | Office | | Office |
| East | | West | | (Asia) | | North | | South |
| Africa | | Africa | | | | America | | America |
+-----------+ +---------+ +-----+-----+ +---------+ +-----------+
|
+--------+--------+
| | |
+-----v--+ +---v----+ +-v------+
| Field | | Field | | Field |
| Site 1 | | Site 2 | | Site 3 |
+--------+ +--------+ +--------+

Figure 4: Hub-and-spoke WAN topology with regional sub-hub

The primary limitation of hub-and-spoke topology emerges when spoke sites need to communicate directly. A video call between two country offices in the same region traverses the hub, consuming bandwidth on both spoke circuits and adding latency. For organisations where most traffic flows from sites to centralised cloud services or headquarters-hosted applications, this pattern works well. The regional sub-hub variation addresses latency concerns by placing intermediate hubs in regions with multiple sites, reducing the distance traffic must travel.

Mesh topology

Mesh topology provides direct connections between sites, eliminating hub dependency for inter-site traffic. A full mesh connects every site to every other site; a partial mesh connects sites with significant inter-site traffic while maintaining hub connectivity for sites with minimal direct communication needs.

+------------------------------------------------------------------------+
| PARTIAL MESH WAN |
+------------------------------------------------------------------------+
| |
| +------------+ |
| | HQ | |
| +-----+------+ |
| | |
| +------------------+------------------+ |
| | | | |
| +-----v-----+ +-----v-----+ +-----v-----+ |
| | Regional |======| Regional |======| Regional | |
| | Hub EU | | Hub APAC | | Hub AMER | |
| +-----+-----+ +-----+-----+ +-----+-----+ |
| | | | |
| +-----+-----+ +-----+-----+ +-----+-----+ |
| | | | | | | | | | |
| +v+ +v+ +v+ +v+ +v+ +v+ +v+ +v+ +v+ |
| |A| |B| |C| |D| |E| |F| |G| |H| |I| |
| +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ |
| Country offices Country offices Country offices |
| |
| ====== Direct regional mesh links |
| ------ Hub-and-spoke links |
+------------------------------------------------------------------------+

Figure 5: Partial mesh WAN topology with regional interconnection

The partial mesh illustrated connects regional hubs directly to each other while country offices connect only to their regional hub in a spoke arrangement. This pattern suits organisations where regional collaboration is common: staff in European country offices frequently collaborate and benefit from direct regional routing, while inter-regional traffic acceptably traverses the headquarters hub.

Full mesh becomes impractical as site count increases. Connecting n sites requires n(n-1)/2 circuits. Ten sites require 45 circuits; twenty sites require 190 circuits. The operational overhead of managing hundreds of circuits, combined with the cost, limits full mesh to organisations with very few sites or specific high-bandwidth inter-site requirements.
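
The circuit counts above follow directly from the formula; a few lines of Python make the growth rate concrete:

```python
def mesh_circuits(sites: int) -> int:
    """Circuits needed to fully mesh `sites` locations: n(n-1)/2."""
    return sites * (sites - 1) // 2

for n in (5, 10, 20, 50):
    print(f"{n} sites -> {mesh_circuits(n)} circuits")
# 10 sites -> 45 circuits; 20 sites -> 190 circuits
```

The quadratic growth is why partial mesh, which adds direct links only where traffic justifies them, is the practical compromise.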

Topology selection considerations

The choice between hub-and-spoke and mesh topologies depends on traffic patterns, latency requirements, cost constraints, and operational capacity. Traffic flow analysis reveals where direct connections provide benefit. If 80% of traffic from all sites flows to headquarters-hosted services or cloud applications accessed via headquarters internet egress, hub-and-spoke serves well. If country offices within a region routinely collaborate through video conferencing and file sharing, regional mesh connections reduce latency and improve user experience.

Latency requirements derive from application characteristics. Interactive applications such as voice, video, and remote desktop degrade noticeably above 150ms round-trip latency. A hub-and-spoke topology connecting Nairobi and Kampala through a London hub adds 200ms or more compared to a direct regional connection. Asynchronous applications such as email and file synchronisation tolerate higher latency without user-perceptible impact.

Cost comparison requires accounting for circuit costs, equipment at each site, and operational overhead. Hub-and-spoke minimises circuits but concentrates bandwidth requirements at the hub. Mesh distributes bandwidth but multiplies circuits. For organisations using software-defined WAN technology, overlay mesh connections across internet circuits may cost less than dedicated circuits while providing mesh benefits.

Network segmentation

Network segmentation divides a flat network into isolated zones that limit broadcast traffic, contain security incidents, and enforce access policies between different user populations or system types. Segmentation implements the principle of least privilege at the network layer: systems access only the network segments necessary for their function.

Segmentation mechanisms

VLANs provide Layer 2 segmentation by creating separate broadcast domains on shared switch infrastructure. Devices in different VLANs cannot communicate directly even when connected to the same physical switch; traffic between VLANs must traverse a router or Layer 3 switch. VLAN segmentation requires no additional physical infrastructure beyond switches capable of VLAN tagging (standard in enterprise switches for over two decades).

Subnet segmentation at Layer 3 divides IP address space into ranges assigned to different network zones. Routing and firewall rules control traffic between subnets. Subnet boundaries typically align with VLAN boundaries, creating both Layer 2 isolation (separate broadcast domains) and Layer 3 isolation (routing control).

Physical segmentation uses separate network infrastructure for different security domains. Air-gapped networks for handling highly sensitive data have no connection to other networks. Physically separate networks for guest wireless access prevent any possibility of configuration error bridging guest and corporate networks. Physical segmentation costs more and complicates management but provides the strongest isolation guarantee.

Segmentation architecture

A segmentation architecture for a mission-driven organisation typically includes zones for user access, servers and applications, management infrastructure, guest and partner access, and IoT or operational technology. The zone structure balances security isolation against operational complexity.

+-------------------------------------------------------------------------+
| NETWORK SEGMENTATION ZONES |
+-------------------------------------------------------------------------+
| |
| +--------------------+ +--------------------+ |
| | USER ZONE | | SERVER ZONE | |
| | | | | |
| | - Staff endpoints | | - Application | |
| | - VoIP phones | <---> | servers | |
| | - Printers | (routed)| - File shares | |
| | | | - Local services | |
| | VLAN 10: 10.1.0.0/24 | | VLAN 20: 10.2.0.0/24 | |
| +--------------------+ +--------------------+ |
| | | |
| | +--------------------+ |
| | | MANAGEMENT ZONE | |
| | | | |
| +-------->| - Switches/routers |<--------+ |
| (limited) | - Management ports | | |
| | - Monitoring | | |
| | | | |
| | VLAN 99: 10.99.0.0/24 | | |
| +--------------------+ | |
| | |
| +--------------------+ +--------------------+ |
| | GUEST ZONE | | DMZ ZONE | |
| | | | | |
| | - Visitor WiFi | | - Web servers | |
| | - Partner access | (no | - External DNS | |
| | - Contractor WiFi | access)| - Mail relay | |
| | | | | |
| | VLAN 50: 10.50.0.0/24 | | VLAN 30: 10.3.0.0/24 | |
| +--------------------+ +--------------------+ |
| |
+-------------------------------------------------------------------------+

Figure 6: Network segmentation zone architecture

The user zone contains staff workstations, laptops, and user-facing devices. This zone has controlled access to the server zone for application access and no access to the management zone except for designated IT staff. VoIP phones may occupy a separate VLAN within the user zone to enable QoS prioritisation.

The server zone hosts internal applications, file servers, and databases. Access from the user zone traverses firewall rules permitting specific application protocols. Direct server-to-server traffic within the zone uses standard routing. External access to servers typically passes through the DMZ zone rather than exposing server zone resources directly.

The management zone contains network infrastructure management interfaces, out-of-band management networks, and monitoring systems. Access restrictions limit connectivity to authorised IT administrators from designated jump hosts or management workstations. Separating management traffic prevents compromised user devices from attacking network infrastructure.

The guest zone provides internet access for visitors without access to internal resources. Implementation ranges from a simple VLAN with internet-only routing to captive portal systems requiring acceptance of usage terms. Guest networks use a different SSID, different IP ranges, and separate internet egress from the corporate network.

The DMZ zone positions externally accessible services between the internet and internal networks. Web servers, external DNS, and mail relays reside in the DMZ. Firewall rules permit inbound traffic from the internet to specific DMZ services and controlled traffic from DMZ to internal zones for backend database access or mail delivery.

IP addressing and subnet planning

IP addressing schemes assign unique addresses to devices and structure address space to support routing, segmentation, and scalability. A well-designed addressing scheme remains stable as the organisation grows, supports summarisation for efficient routing, and provides clear visual indication of device location and function.

Address space allocation

Private IPv4 address ranges defined in RFC 1918 provide address space for internal networks: 10.0.0.0/8 offers 16,777,216 addresses, 172.16.0.0/12 offers 1,048,576 addresses, and 192.168.0.0/16 offers 65,536 addresses. Most organisations use the 10.0.0.0/8 range for its addressing flexibility.
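
The address counts can be verified with Python's standard-library `ipaddress` module:

```python
import ipaddress

# The three RFC 1918 private ranges and their total address counts.
for block in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    net = ipaddress.ip_network(block)
    print(f"{block}: {net.num_addresses:,} addresses")
```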

A hierarchical allocation assigns address blocks to regions, sites within regions, and segments within sites. This structure enables route summarisation where a single routing entry represents all subnets within a site or region.

+-------------------------------------------------------------------------+
| IP ADDRESS HIERARCHY EXAMPLE |
+-------------------------------------------------------------------------+
| |
| 10.0.0.0/8 - Organisation allocation |
| | |
| +-- 10.1.0.0/16 - Headquarters |
| | | |
| | +-- 10.1.1.0/24 - User VLAN Floor 1 |
| | +-- 10.1.2.0/24 - User VLAN Floor 2 |
| | +-- 10.1.10.0/24 - Server VLAN |
| | +-- 10.1.99.0/24 - Management VLAN |
| | |
| +-- 10.2.0.0/16 - Europe region |
| | | |
| | +-- 10.2.1.0/24 - UK office |
| | +-- 10.2.2.0/24 - Geneva office |
| | +-- 10.2.3.0/24 - Berlin office |
| | |
| +-- 10.3.0.0/16 - East Africa region |
| | | |
| | +-- 10.3.1.0/24 - Kenya office |
| | +-- 10.3.2.0/24 - Uganda office |
| | +-- 10.3.3.0/24 - Ethiopia office |
| | +-- 10.3.10.0/27 - Kenya field site 1 |
| | +-- 10.3.10.32/27 - Kenya field site 2 |
| | |
| +-- 10.4.0.0/16 - West Africa region |
| +-- 10.5.0.0/16 - Asia region |
| +-- 10.6.0.0/16 - Americas region |
| |
+-------------------------------------------------------------------------+

Figure 7: Hierarchical IP addressing scheme

The hierarchy reserves 10.1.0.0/16 for headquarters (65,536 addresses, far exceeding current needs but allowing growth). Each region receives a /16 block. Within regions, country offices receive /24 blocks (254 usable addresses) while field sites receive smaller /27 blocks (30 usable addresses). The structure remains readable: any address starting with 10.3 belongs to the East Africa region; 10.3.1.x addresses are in the Kenya office.
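
The summarisation property can be demonstrated with `ipaddress`: every East Africa subnet from the figure is covered by the single regional /16, so the rest of the network needs only one routing entry for the region.

```python
import ipaddress

# Site subnets from the hierarchy above; one summary route per region.
region = ipaddress.ip_network("10.3.0.0/16")     # East Africa
site_subnets = [ipaddress.ip_network(s) for s in (
    "10.3.1.0/24",    # Kenya office
    "10.3.2.0/24",    # Uganda office
    "10.3.3.0/24",    # Ethiopia office
    "10.3.10.0/27",   # Kenya field site 1
    "10.3.10.32/27",  # Kenya field site 2
)]

# A single /16 advertisement at the regional hub covers every site subnet.
assert all(s.subnet_of(region) for s in site_subnets)

# The hierarchy also keeps addresses readable: 10.3.1.x is the Kenya office.
addr = ipaddress.ip_address("10.3.1.57")
print(addr in site_subnets[0])   # True
```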

Subnet sizing

Subnet size balances address efficiency against growth accommodation and operational simplicity. A /24 subnet providing 254 addresses serves most office environments. Larger subnets (/23 with 510 addresses, /22 with 1,022 addresses) suit headquarters or data centre environments. Smaller subnets (/27 with 30 addresses, /28 with 14 addresses) suit field sites, point-to-point links, and small server pools.

Worked example: A country office with 45 current staff expects to grow to 80 staff within five years. The office requires VLANs for users, servers, VoIP, guest wireless, and management. Address allocation:

Segment       Current devices   Growth estimate   Subnet size   Address range
Users         50                100               /24 (254)     10.3.1.0/24
Servers       8                 15                /27 (30)      10.3.2.0/27
VoIP          35                80                /25 (126)     10.3.2.128/25
Guest         20                40                /26 (62)      10.3.2.64/26
Management    10                15                /27 (30)      10.3.2.32/27

The user subnet accommodates 2x growth from the current device count. The VoIP subnet anticipates one phone per staff member plus conference room units. Server and management subnets use smaller allocations reflecting their bounded growth, and the guest subnet size reflects meeting room capacity and typical visitor density. The five segments draw on two /24 blocks from the regional allocation, with each subnet aligned to a valid CIDR boundary so that no two ranges overlap.
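
An allocation like this is worth sanity-checking mechanically. The sketch below, using one workable choice of non-overlapping block boundaries, verifies that no two segments overlap and that each holds its five-year growth estimate:

```python
import ipaddress
from itertools import combinations

# Segment -> (subnet, five-year growth estimate). Boundaries are one
# workable choice; the checks below are what matter.
plan = {
    "Users":      ("10.3.1.0/24",   100),
    "Servers":    ("10.3.2.0/27",    15),
    "VoIP":       ("10.3.2.128/25",  80),
    "Guest":      ("10.3.2.64/26",   40),
    "Management": ("10.3.2.32/27",   15),
}
nets = {name: ipaddress.ip_network(cidr) for name, (cidr, _) in plan.items()}

# No two segments may overlap.
for (a, na), (b, nb) in combinations(nets.items(), 2):
    assert not na.overlaps(nb), f"{a} overlaps {b}"

# Each segment must hold its growth estimate.
for name, (cidr, growth) in plan.items():
    usable = nets[name].num_addresses - 2   # minus network and broadcast
    assert usable >= growth, f"{name} too small"
    print(f"{name:11s} {cidr:14s} {usable:3d} usable, {growth} planned")
```

`ipaddress.ip_network` raises an error for ranges that do not sit on a valid CIDR boundary, which catches a common planning mistake before it reaches a router.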

Reserved addresses and documentation

Each subnet reserves addresses for standard functions. The first usable address (.1) typically serves as the default gateway. Addresses near the top of the range (.250-.254) are reserved for network infrastructure such as switches and access points. DHCP pools exclude the reserved ranges and allocate from the middle of the range.
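
These conventions translate into a straightforward calculation; the subnet and pool boundaries below are illustrative:

```python
import ipaddress

subnet = ipaddress.ip_network("10.3.1.0/24")
hosts = list(subnet.hosts())          # .1 through .254

gateway = hosts[0]                    # .1: default gateway
infrastructure = hosts[-5:]           # .250-.254: switches and access points

# DHCP pool from the middle of the range, leaving low addresses free
# for static assignments and the top for infrastructure.
pool = hosts[49:199]                  # .50 through .199
print(f"gateway {gateway}, pool {pool[0]}-{pool[-1]} "
      f"({len(pool)} leases), infra from {infrastructure[0]}")
```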

Address documentation tracks allocations, reservations, and assignments. Spreadsheet-based documentation suffices for small organisations but becomes unwieldy beyond 20 sites or 1,000 addresses. IP Address Management (IPAM) systems provide database-backed tracking, integration with DNS and DHCP, and reporting on utilisation. Open source IPAM options include NetBox, phpIPAM, and Nautobot.

Routing architecture

Routing determines how packets traverse networks to reach their destinations. Within a single site, routing typically occurs at Layer 3 switches connecting VLANs. Across the WAN, routers at each site exchange routing information to maintain connectivity as links fail or recover.

Static versus dynamic routing

Static routing configures fixed paths to destination networks. An administrator defines that traffic for 10.3.0.0/16 exits via a specific interface toward the East Africa regional hub. Static routes require manual updates when topology changes. For small networks with few routes and stable topologies, static routing provides simplicity and predictability.
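
The lookup a router performs against a static table is longest-prefix match: the most specific matching route wins. A minimal sketch (interface names and the field-site aggregate are illustrative, not from the source):

```python
import ipaddress

# Static routing table: destination prefix -> next hop / interface.
routes = {
    ipaddress.ip_network("10.3.0.0/16"):  "wan0 -> East Africa hub",
    ipaddress.ip_network("10.3.10.0/24"): "vsat0 -> Kenya field sites",
    ipaddress.ip_network("0.0.0.0/0"):    "wan1 -> default (internet)",
}

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("10.3.10.5"))   # the /24 beats the regional /16
print(lookup("10.3.1.20"))   # falls back to the regional /16
print(lookup("8.8.8.8"))     # only the default route matches
```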

Dynamic routing protocols enable routers to exchange information and automatically calculate optimal paths. When a link fails, routers detect the failure and recalculate routes using remaining paths. Dynamic routing reduces manual configuration and provides automatic failover but introduces protocol complexity and potential convergence delays during topology changes.

Routing protocol selection

The routing protocol selected depends on network scale, topology complexity, and operational requirements. Interior Gateway Protocols (IGPs) route within an organisation’s network; Exterior Gateway Protocols route between organisations (primarily BGP for internet routing).

OSPF (Open Shortest Path First) suits most enterprise networks. OSPF routers maintain a complete topology database and calculate shortest paths using the Dijkstra algorithm. Convergence after topology changes completes within seconds on modern hardware. OSPF scales well with proper area design: large networks divide into areas to limit topology database size and calculation overhead.
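
The calculation every OSPF router runs over its topology database is Dijkstra's algorithm. A toy version over an invented four-site topology (costs are illustrative, loosely inverse to bandwidth as in OSPF metrics):

```python
import heapq

# Link-state database as an adjacency map: site -> {neighbour: link cost}.
topology = {
    "HQ":      {"HubEU": 10, "HubAPAC": 40},
    "HubEU":   {"HQ": 10, "HubAPAC": 20, "Kenya": 30},
    "HubAPAC": {"HQ": 40, "HubEU": 20},
    "Kenya":   {"HubEU": 30},
}

def shortest_paths(source: str) -> dict:
    """Dijkstra's algorithm: lowest-cost path from `source` to every site."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue   # stale heap entry
        for neighbour, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

print(shortest_paths("HQ"))
# HQ reaches HubAPAC at cost 30 via HubEU, not 40 via the direct link
```

When a link cost changes or a link fails, every router repeats this calculation over the updated database, which is why OSPF converges automatically.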

EIGRP (Enhanced Interior Gateway Routing Protocol) provides features comparable to OSPF with different convergence characteristics. Originally Cisco proprietary, EIGRP became an open standard (RFC 7868) but remains most common in Cisco-dominated environments.

RIP (Routing Information Protocol) represents an older distance-vector approach with limitations that restrict its use to small, simple networks. Maximum hop count of 15 prevents use in larger topologies. Slow convergence after changes causes temporary routing loops. RIP remains relevant only for legacy compatibility.

For WAN environments connecting sites across multiple carriers, BGP (Border Gateway Protocol) provides policy-based routing control. BGP enables organisations to influence inbound traffic distribution, prefer certain carriers for specific destinations, and maintain connectivity when primary circuits fail. BGP configuration complexity substantially exceeds that of IGPs; many organisations rely on managed router services from carriers rather than operating BGP themselves.

Routing for resilience

Redundant paths combined with dynamic routing provide automatic failover when primary circuits fail. A headquarters with two internet circuits configures BGP to advertise routes through both providers; failure of one provider triggers traffic shift to the survivor within seconds. A country office with primary fibre and backup mobile connection runs OSPF or static routes with failover detection; primary link failure activates the backup path.

Failover timing depends on detection mechanisms. Routing protocol keepalives detect failures within seconds to tens of seconds depending on timer configuration. Bidirectional Forwarding Detection (BFD) provides sub-second failure detection by exchanging probes at millisecond intervals. For circuits where fast failover justifies the configuration complexity, BFD integration with routing protocols enables convergence in under one second.

Quality of Service

Quality of Service mechanisms prioritise traffic to ensure latency-sensitive applications perform acceptably when network capacity is constrained. Without QoS, a large file transfer can consume available bandwidth and cause voice calls to break up or video to freeze. QoS configuration marks packets by application type and schedules transmission to favour high-priority traffic.

Traffic classification and marking

Traffic classification identifies packets by application for differential treatment. Classification mechanisms examine packet headers, port numbers, and payload signatures. Classification occurs at network ingress points: access switches, wireless controllers, or WAN edge routers.

DSCP (Differentiated Services Code Point) marking uses a 6-bit field in the IP header to indicate traffic priority. Standard DSCP values include EF (Expedited Forwarding, decimal 46) for voice, AF classes (Assured Forwarding) for video and business applications, and BE (Best Effort, decimal 0) for general traffic. Consistent DSCP marking across the network requires classification at ingress and preservation of marks through switching and routing.

Classification examples:

  • Voice RTP streams (UDP ports 16384-32767): Mark DSCP EF
  • Video conferencing (H.323, SIP): Mark DSCP AF41
  • Business applications (ERP, CRM): Mark DSCP AF21
  • Web browsing, email: Mark DSCP BE (default)
  • Bulk transfer, backup: Mark DSCP CS1 (lower than default)
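
Because DSCP occupies the upper six bits of the IP ToS/Traffic Class byte, the value seen in a packet capture's ToS field is the DSCP value shifted left by two. A quick check of the standard code points used above:

```python
# Standard DSCP decimal values for the classes in the list above.
DSCP = {"EF": 46, "AF41": 34, "AF21": 18, "BE": 0, "CS1": 8}

for name, value in DSCP.items():
    # ToS byte = DSCP in the top six bits, ECN (zero here) in the bottom two.
    print(f"{name:4s} DSCP {value:2d} -> ToS byte 0x{value << 2:02X}")
```

On Linux, an application can request a marking on its own traffic with `setsockopt(IPPROTO_IP, IP_TOS, dscp << 2)`, though network ingress classification commonly re-marks packets regardless of what the endpoint set.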

Queuing and scheduling

Switches and routers place marked packets into queues corresponding to their priority class. Scheduling algorithms determine transmission order from queues. Strict priority queuing always transmits from the highest-priority queue first, ensuring voice packets experience minimal delay. Weighted fair queuing allocates bandwidth proportionally across queues, preventing starvation of lower-priority traffic while still favouring higher priorities.

Typical enterprise configuration combines strict priority for voice with weighted scheduling for remaining traffic classes. Voice traffic enters a priority queue with guaranteed bandwidth allocation (commonly 10-20% of link capacity). Video and business application queues receive weighted allocations. Best-effort and bulk traffic share remaining capacity.
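This combined discipline can be sketched as a simple simulation. Packet labels and weights are illustrative, and real schedulers allocate by byte count rather than packet count, but the ordering behaviour is the point:

```python
from collections import deque

# Per-class queues after classification and marking.
queues = {
    "voice":       deque(["v1", "v2"]),
    "video":       deque(["d1", "d2", "d3"]),
    "business":    deque(["b1", "b2"]),
    "best_effort": deque(["e1", "e2", "e3", "e4"]),
}
weights = {"video": 3, "business": 2, "best_effort": 1}

def drain():
    """Transmit order: strict priority for voice, weighted rounds for the rest."""
    order = []
    while any(queues.values()):
        # Strict priority: the voice queue always empties first.
        while queues["voice"]:
            order.append(queues["voice"].popleft())
        # One weighted round-robin pass over the remaining classes.
        for cls, weight in weights.items():
            for _ in range(weight):
                if queues[cls]:
                    order.append(queues[cls].popleft())
    return order

order = drain()
print(order)
# Voice leaves first; video gets three slots per round to best effort's one.
```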

QoS provides greatest benefit on constrained WAN links where congestion occurs regularly. A 100 Mbps LAN rarely requires QoS because bandwidth exceeds typical demand. A 10 Mbps WAN link serving a country office benefits substantially from QoS ensuring voice and critical applications receive priority.

Configuration must address both inbound and outbound traffic. Organisations directly control outbound queuing on their routers. Inbound traffic arrives already queued by the upstream carrier; influencing inbound QoS requires carrier coordination or traffic shaping at the source.

For satellite links and other high-latency circuits, QoS interacts with protocol behaviour. TCP congestion control algorithms respond to packet loss and delay by reducing transmission rate. Voice over IP depends on consistent latency rather than raw bandwidth. QoS configuration for satellite links prioritises voice and interactive traffic while allowing TCP-based applications to utilise remaining capacity.

Network documentation

Network documentation enables troubleshooting, change planning, and knowledge transfer. Documentation that exists only in the memory of individuals disappears when those individuals leave. Documentation in disconnected files becomes outdated and contradictory. Effective documentation maintains accuracy through processes that update records when changes occur.

Documentation components

Physical documentation records what equipment exists, where it is located, and how it connects. Equipment inventories list devices by site with model, serial number, location, and support status. Rack diagrams show equipment placement and power distribution. Cabling records document patch panel ports, cable runs, and endpoint connections.

Logical documentation records addressing, VLANs, routing, and traffic flow. IP address allocation tables track subnet assignments and reservations. VLAN databases list VLAN IDs, names, and purposes across all switches. Routing topology diagrams show router interconnections and dynamic routing areas. Firewall rule documentation explains the purpose and business justification for each rule.
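
An address allocation table can be generated rather than hand-maintained. The sketch below uses Python's standard `ipaddress` module to carve per-site /24 subnets from a private /16; the supernet and site names are illustrative assumptions, not a recommended plan.

```python
import ipaddress

# Carve sequential /24 subnets for each site out of an assumed 10.20.0.0/16.
supernet = ipaddress.ip_network("10.20.0.0/16")
sites = ["headquarters", "country-office-a", "field-site-b"]

allocation = {
    site: str(subnet)
    for site, subnet in zip(sites, supernet.subnets(new_prefix=24))
}
for site, subnet in allocation.items():
    print(f"{site:18} {subnet}")
```

Deriving allocations programmatically keeps the table internally consistent and makes it easy to regenerate documentation when sites are added.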

Configuration documentation captures device configurations for recovery and audit. Configuration backup systems automatically capture running configurations and archive historical versions. Change documentation records what changed, when, why, and who authorised the change.

Documentation systems

Documentation systems range from file shares with spreadsheets to integrated infrastructure management platforms. The appropriate choice depends on network scale and documentation discipline.

For organisations with fewer than 10 sites, a structured folder hierarchy containing standard documents often suffices. Configuration backups can be automated through scripts or built-in router features. Address tracking uses spreadsheets with clear ownership. Diagrams use standard tools (draw.io, Visio, Lucidchart) with files stored centrally.
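
A backup script at this scale can be very simple: archive one dated copy of each device's configuration. The `fetch_running_config` function below is a hypothetical stub; in practice it would retrieve the configuration over SSH or an API (for example, with a library such as Netmiko).

```python
import datetime
import pathlib

def fetch_running_config(hostname: str) -> str:
    """Hypothetical stub: a real implementation would pull the running
    configuration from the device over SSH or a management API."""
    return f"! running-config for {hostname}\nhostname {hostname}\n"

def backup_configs(hostnames, backup_root="config-backups"):
    """Archive one dated copy of each device's running configuration."""
    today = datetime.date.today().isoformat()
    root = pathlib.Path(backup_root) / today
    root.mkdir(parents=True, exist_ok=True)
    for host in hostnames:
        (root / f"{host}.cfg").write_text(fetch_running_config(host))
    return root

archive = backup_configs(["rtr-hq-01", "sw-field-02"])
print(f"Configs archived under {archive}")
```

Scheduled daily (via cron or a task scheduler), even this minimal approach provides the historical configuration archive that recovery and audit depend on.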

For larger networks, integrated platforms provide database-backed documentation with relationships between records. NetBox, an open source infrastructure resource modelling platform, tracks IP addresses, devices, sites, circuits, and their relationships. Integration with network automation tools enables documentation updates from discovered configuration.

Regardless of tooling, documentation accuracy requires process discipline. Changes to the network must trigger documentation updates. Pre-change review includes documentation impact assessment. Post-change verification confirms documentation reflects reality.

Capacity planning

Capacity planning ensures network infrastructure can support current demand and anticipated growth. Under-provisioned networks create bottlenecks that degrade application performance and user experience. Over-provisioned networks waste budget on unused capacity.

Baseline measurement

Capacity planning begins with understanding current utilisation. Monitoring systems collect interface utilisation, packet rates, and error counts from network devices. SNMP polling provides periodic samples (typically every 5 minutes). Flow data from NetFlow or sFlow provides traffic composition detail: which applications, sources, and destinations consume bandwidth.

Baseline analysis identifies normal patterns and peak utilisation periods. A WAN circuit averaging 40% utilisation may peak at 85% during morning hours when staff access email and collaboration tools. Capacity evaluation considers peak utilisation rather than averages; users experience congestion during peaks regardless of daily average.
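
Peak-aware analysis can be as simple as comparing the mean against a high percentile of the samples. The values below are illustrative 5-minute utilisation percentages, not real measurements; the nearest-rank method is one common way to compute a percentile.

```python
# Judge a circuit by its busy-hour percentile, not its daily mean.
samples = [22, 25, 30, 35, 40, 42, 38, 55, 70, 85, 82, 78, 60, 45, 33, 28]

average = sum(samples) / len(samples)

# 95th percentile (nearest-rank): the value below which ~95% of samples fall.
rank = max(1, round(0.95 * len(samples)))
p95 = sorted(samples)[rank - 1]

print(f"average {average:.1f}%  p95 {p95}%")
```

Here the circuit averages 48% but its 95th percentile is 82%: the average suggests ample headroom while the percentile reveals regular congestion, which is exactly the gap capacity evaluation must account for.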

Growth estimation

Growth estimation combines organisational planning data with historical trends. Programme expansion plans indicate new offices, staff increases, or new application deployments. Historical trends reveal organic growth patterns: if bandwidth utilisation increased 25% annually for three years, similar growth likely continues absent major changes.

Application changes drive bandwidth requirements beyond user growth. Migration from on-premises file servers to cloud storage shifts traffic from LAN to WAN. Adoption of video conferencing platforms substantially increases WAN consumption. Capacity planning must account for planned application changes rather than assuming static per-user bandwidth.
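
Historical trend lines extend naturally into compound growth projections. The sketch below asks when an assumed 25% annual growth rate exhausts a 10 Mbps circuit; all figures are illustrative assumptions.

```python
# Compound growth projection: years until an assumed peak demand
# outgrows the current circuit.
current_peak_mbps = 6.0
annual_growth = 0.25
capacity_mbps = 10.0

for year in range(1, 6):
    projected = current_peak_mbps * (1 + annual_growth) ** year
    flag = " <- exceeds capacity" if projected > capacity_mbps else ""
    print(f"year {year}: {projected:.1f} Mbps{flag}")
```

With these inputs the circuit survives two years but is exhausted in the third, which is the kind of lead time procurement planning needs.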

Upgrade planning

Upgrade planning translates capacity analysis into infrastructure changes. Planning horizons typically span 2-3 years for network infrastructure given equipment lifecycle and procurement lead times.

Trigger thresholds prompt upgrade evaluation when utilisation exceeds defined levels. A circuit consistently exceeding 70% utilisation during business hours warrants upgrade evaluation. Exceeding 85% indicates near-term action required; approaching 100% causes packet loss and application degradation.

Upgrade options include capacity increases (faster circuits, additional links, equipment upgrades) and optimisation (WAN compression, caching, application changes). Cost-benefit analysis compares upgrade costs against optimisation alternatives. For a 10 Mbps circuit at 80% utilisation, doubling capacity to 20 Mbps might cost an additional £500/month; deploying WAN optimisation appliances might reduce utilisation to 50% for £300/month. The choice depends on long-term growth expectations and total cost of ownership.
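
The comparison in that scenario can be framed as cost per unit of headroom gained. The monthly costs and the optimisation effect below are the illustrative figures from the example, not vendor pricing.

```python
# Compare the two options from the scenario: double the circuit, or
# deploy WAN optimisation that reduces utilisation to 50%.
link_mbps, utilisation = 10.0, 0.80
options = {
    "upgrade to 20 Mbps": {
        "cost": 500,
        "new_util": (link_mbps * utilisation) / 20.0,  # same demand, bigger pipe
    },
    "WAN optimisation": {
        "cost": 300,
        "new_util": 0.50,  # assumed post-optimisation utilisation
    },
}

for name, option in options.items():
    headroom = (1 - option["new_util"]) * 100
    print(f"{name:20} £{option['cost']}/month, {headroom:.0f}% headroom")
```

The upgrade buys more headroom (60% versus 50%) at a higher monthly cost; which wins depends on how quickly growth consumes that headroom, which is why the projection horizon matters.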

Legacy network integration

Few organisations build networks from clean-slate designs. Existing infrastructure, acquired through organisational mergers, inherited from previous IT administrations, or accumulated through programme growth, presents integration challenges. Legacy network integration preserves operational continuity while systematically modernising infrastructure.

Discovery and assessment

Integration begins with discovering what exists. Network discovery scans identify active devices, their addresses, and apparent functions. Configuration review examines routing, switching, and security configurations. Physical inspection identifies undocumented equipment, cabling, and connections.

Assessment categorises discovered infrastructure. Equipment still under support and meeting requirements continues in place. Equipment approaching end-of-life enters replacement planning. Equipment posing security risks (unpatched, unsupported, or misconfigured) requires immediate remediation or isolation.

Integration approaches

Parallel operation maintains legacy and new infrastructure simultaneously during transition. Users migrate in phases; the legacy network remains available for fallback. This approach minimises risk but doubles operational overhead during the transition period.

In-place upgrade replaces components within the existing architecture. Switches upgrade individually; routing migrates from legacy protocols to modern standards. This approach minimises disruption but constrains modernisation to what the existing architecture supports.

Phased migration combines both approaches. A new network core deploys alongside legacy infrastructure. Sites migrate to the new core over time. Legacy equipment retires as sites complete migration. This approach suits organisations with substantial legacy infrastructure requiring architectural change rather than component refresh.

Common integration challenges

Address space conflicts arise when legacy networks use overlapping IP ranges. Two merged organisations both using 192.168.1.0/24 cannot route traffic directly between their networks. Resolution options include NAT (Network Address Translation) to translate one range, or renumbering one network to use non-overlapping addresses. Renumbering is disruptive but provides a clean long-term solution; NAT adds complexity and creates ongoing operational overhead.
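
Overlap checks like this are easy to script during discovery. The sketch below uses Python's standard `ipaddress` module; the replacement range is an illustrative assumption.

```python
import ipaddress

# The conflict from the merger example: both organisations use the same /24.
net_a = ipaddress.ip_network("192.168.1.0/24")
net_b = ipaddress.ip_network("192.168.1.0/24")
renumbered = ipaddress.ip_network("10.50.1.0/24")  # assumed replacement range

print(net_a.overlaps(net_b))       # overlapping: direct routing impossible
print(net_a.overlaps(renumbered))  # disjoint: renumbering resolves the conflict
```

Running such checks across all discovered subnets before integration surfaces every conflict up front, rather than during cutover.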

Protocol incompatibility occurs when legacy equipment cannot participate in modern network services. Switches lacking VLAN support cannot participate in segmented networks. Routers running only RIP cannot participate in OSPF domains. Resolution requires equipment replacement, protocol translation at boundaries, or acceptance of reduced functionality for legacy segments.

Documentation gaps characterise most legacy network integration. Previous administrators may have left incomplete records or none at all. Discovery tools, configuration analysis, and methodical documentation build the understanding necessary for integration planning.

Implementation considerations

For organisations with limited IT capacity

Network architecture decisions for organisations without dedicated network engineers should favour simplicity and standardisation. Select equipment from a single vendor for consistency in management interfaces and support processes. Use managed switches with default configurations where requirements permit. Document configurations thoroughly given that troubleshooting may involve external support unfamiliar with the environment.

Cloud-managed networking platforms reduce on-premises complexity by centralising management and monitoring. Vendors including Cisco Meraki, Aruba Central, and Ubiquiti UniFi provide controller platforms with varying cost structures and nonprofit availability. The trade-off exchanges local control and flexibility for simplified operations and vendor dependency.

A minimal viable network architecture for small organisations might comprise a single firewall/router at internet edge, a managed switch for LAN connectivity, and a wireless access point. This configuration serves up to 30-40 users. Scaling beyond this point requires additional switching capacity and potentially network segmentation.

For organisations with established IT functions

Organisations with dedicated network staff can implement more sophisticated architectures. Dynamic routing protocols enable automatic failover. Network monitoring provides visibility into utilisation and performance. Formal change management governs infrastructure modifications.

Standardise on reference architectures for each site type. A documented branch office architecture specifying equipment models, configuration templates, VLAN structure, and IP addressing conventions enables consistent deployments and reduces design effort for new sites. Reference architectures adapt to local constraints but maintain structural consistency.

Invest in network automation for repetitive tasks. Configuration templating generates device configurations from parameters. Automated backup systems capture configurations on schedule and on change. Network verification tools compare running configurations against documented standards. Automation reduces human error and frees engineering time for design and improvement work.
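
Configuration templating can start with nothing more than the standard library. The template text and site parameters below are illustrative assumptions; real deployments often use a fuller templating engine such as Jinja2.

```python
from string import Template

# Generate a per-site device configuration from parameters.
branch_template = Template("""\
hostname $hostname
interface vlan10
 ip address $mgmt_ip 255.255.255.0
ntp server $ntp_server
""")

site = {
    "hostname": "sw-nairobi-01",   # hypothetical site values
    "mgmt_ip": "10.30.10.2",
    "ntp_server": "10.0.0.10",
}
config = branch_template.substitute(site)
print(config)
```

Because every site's configuration is generated from the same template, deviations become parameter differences rather than hand-edited snowflakes, which is what makes the reference-architecture approach above enforceable.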

Field deployment considerations

Field networks operate under constraints that require architectural adaptation. Power instability demands UPS protection and equipment selection for graceful handling of power cycles. Environmental factors require equipment rated for temperature and humidity extremes or environmental controls (air conditioning, dust filtering) that add power and maintenance requirements.

Connectivity constraints shape architecture fundamentally. A field site with 2 Mbps satellite connectivity cannot operate the same applications as headquarters with gigabit fibre. Architecture must account for caching strategies, compression, and application selection appropriate to available bandwidth. Design assumes connectivity will degrade or fail; offline capability and graceful degradation are requirements rather than enhancements.

Remote support models recognise that local troubleshooting capacity is limited. Equipment standardisation enables remote staff to guide non-technical personnel through basic procedures. Out-of-band management (mobile connection to router management interface) enables remote access when primary connectivity fails. Pre-positioned spare equipment enables rapid restoration without international shipping delays.

See also