
Last-Mile Connectivity

Last-mile connectivity describes the final network segment between backbone infrastructure and end-user devices at field locations. This segment presents the greatest technical and commercial challenges in field operations because it exists beyond the reach of established telecommunications providers. Where headquarters connects through fibre or enterprise-grade circuits, field offices contend with whatever connectivity reaches their geographic location, often assembling solutions from multiple imperfect options.

The economics of last-mile infrastructure differ fundamentally from urban telecommunications. Population density drives commercial investment in network infrastructure, and field sites operate precisely where that density falls below commercial viability thresholds. A village health clinic 40 kilometres from the nearest town with paved roads will not receive fibre deployment from any commercial carrier because the potential subscriber base cannot justify the capital expenditure. Field connectivity strategy therefore starts from acceptance that ideal options do not exist, and proceeds to combining available technologies into functional architectures.

Last-mile connectivity
The final network segment connecting end-user devices to backbone infrastructure, typically the most constrained and expensive portion of the network path on a per-kilometre basis.
Backhaul
The intermediate network segment carrying aggregated traffic from access points to core network infrastructure. In field contexts, backhaul often represents the primary capacity constraint.
Bandwidth aggregation
Combining multiple network connections to increase total available throughput beyond what any single link provides.
Failover
Automatic transition from a failed primary connection to a backup connection, maintaining service continuity during outages.
WAN optimisation
Technologies that reduce bandwidth consumption and improve application performance over constrained links through caching, compression, deduplication, and protocol optimisation.

Technology characteristics

Last-mile technologies divide into four categories based on their infrastructure requirements: wired, fixed wireless, mobile cellular, and satellite. Each category carries distinct cost structures, performance characteristics, and deployment prerequisites that determine suitability for specific field contexts.

Wired connectivity requires physical cable installation between the provider’s distribution point and the field site. When available, wired connections deliver the most consistent performance because the transmission medium remains stable regardless of weather or atmospheric conditions. Fibre optic connections provide the highest bandwidth with latency under 5ms to the first network hop. DSL connections operate over existing telephone infrastructure with bandwidth ranging from 5 Mbps to 100 Mbps depending on distance from the exchange. Cable connections through coaxial infrastructure achieve 100 Mbps to 1 Gbps but exist only where cable television networks have deployed.

The practical constraint on wired connectivity in field contexts is availability. Installing new cable infrastructure to a field site costs between $15,000 and $50,000 per kilometre depending on terrain and local construction costs. Commercial providers undertake this investment only when the expected subscriber revenue justifies the capital expenditure, which excludes most field locations. Where existing wired infrastructure terminates within 500 metres of a field site, extending the connection becomes feasible. Beyond that distance, other technologies prove more practical.

Fixed wireless uses radio links to bridge the gap between wired infrastructure endpoints and field sites. Point-to-point wireless links require line of sight between directional antennas at each end, spanning distances up to 30 kilometres with appropriate equipment. A 5 GHz link using 30 dBi antennas achieves 100 Mbps to 300 Mbps over 10 kilometres in clear conditions. Licensed spectrum in lower frequency bands penetrates vegetation and minor obstructions, extending range but reducing bandwidth to 20 Mbps to 50 Mbps.

Wireless Internet Service Providers (WISPs) operate in many regions where wired infrastructure remains sparse, using tower-mounted base stations to serve subscribers within a 10 to 25 kilometre radius. WISP service quality varies substantially based on subscriber density, spectrum management, and equipment quality. Monthly costs range from $50 to $300 depending on bandwidth tier and local market conditions.

Mobile cellular connectivity through 3G, 4G, and 5G networks provides the most rapidly deployable option where coverage exists. A mobile router with external antenna begins providing connectivity within minutes of power-on. Coverage maps from mobile operators indicate theoretical availability, but actual signal quality at ground level requires on-site measurement. A location showing 4G coverage on provider maps might receive only marginal signal requiring high-gain external antennas and careful positioning.

Mobile network performance degrades under load. A cell tower serving a town of 5,000 people delivers different per-user throughput than the same tower serving a village of 200. Contention ratios on mobile networks in developing regions commonly reach 50:1 or higher during peak hours, reducing effective throughput to 10% of theoretical maximum. Data costs on mobile networks remain high compared to fixed connections, with per-gigabyte pricing between $2 and $20 depending on region and operator.
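The contention arithmetic can be sketched as a back-of-envelope estimate: per-user throughput is sector capacity divided by simultaneously active users. The activity factor below is an illustrative assumption (not every contending subscriber transmits at once), chosen so a 50:1 contention ratio lands near the 10% figure above; it is not an operator-published parameter.

```python
# Back-of-envelope sketch: per-user throughput under peak-hour contention.
# activity_factor is a hypothetical assumption, not a measured value.
def per_user_mbps(capacity_mbps, contention_ratio, activity_factor=0.2):
    """Estimate throughput per active user during peak hours."""
    active_users = max(1, contention_ratio * activity_factor)
    return capacity_mbps / active_users

# 100 Mbps sector at 50:1 contention: each active user sees ~10 Mbps,
# i.e. 10% of the sector's theoretical maximum.
print(per_user_mbps(100, 50))  # → 10.0
```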

Satellite connectivity provides coverage independent of terrestrial infrastructure, making it the fallback option when no other technology reaches the site. Geostationary (GEO) satellites at 36,000 kilometres altitude impose latency of 600ms or more per round trip, affecting interactive applications and TCP performance. Medium Earth Orbit (MEO) and Low Earth Orbit (LEO) constellations reduce latency to 100ms and 30ms respectively, with LEO services achieving performance comparable to cellular networks.

Satellite bandwidth costs exceed terrestrial alternatives by substantial margins. GEO services price data at $3 to $15 per gigabyte for committed information rates, with burstable capacity at premium rates. LEO services offer flat-rate pricing at $100 to $500 per month for 50 Mbps to 200 Mbps, but require equipment investment of $500 to $2,500 and impose fair use policies that throttle heavy users. Detailed technology specifications appear in Satellite Connectivity.

Hybrid connectivity architecture

Single-technology deployments rarely satisfy field site requirements across all operational conditions. Hybrid architectures combine multiple connection types to achieve aggregate bandwidth exceeding any single link, provide resilience against individual connection failures, and optimise costs by directing traffic to the most appropriate path.

The fundamental hybrid architecture connects two or more WAN links through a router or SD-WAN appliance that manages traffic across all available paths. This device maintains awareness of each link’s status, performance characteristics, and cost implications, making forwarding decisions based on configured policies.

                              +------------------+
                              |    FIELD SITE    |
                              +------------------+
                                       |
                          +------------+------------+
                          |                         |
                  +-------v-------+         +-------v-------+
                  |    Primary    |         |   Secondary   |
                  |    Router/    |         |    Router     |
                  |    SD-WAN     |         |   (standby)   |
                  +-------+-------+         +---------------+
                          |
      +-------------------+-------------------+
      |                   |                   |
+-----v-----+       +-----v-----+       +-----v-----+
|  Mobile   |       |   VSAT    |       |   Fixed   |
|  4G LTE   |       | Terminal  |       | Wireless  |
|           |       |           |       |  (WISP)   |
+-----+-----+       +-----+-----+       +-----+-----+
      |                   |                   |
      v                   v                   v
+-----+-----+       +-----+-----+       +-----+-----+
|   Cell    |       | Satellite |       |   WISP    |
|   Tower   |       |  Ground   |       |   Tower   |
|   15km    |       |  Station  |       |    8km    |
+-----+-----+       +-----+-----+       +-----+-----+
      |                   |                   |
      +-------------------+-------------------+
                          |
                          v
                +---------+---------+
                |     INTERNET      |
                |     BACKBONE      |
                +-------------------+

Figure 1: Three-path hybrid connectivity architecture with primary aggregation device

A field office in South Sudan demonstrates practical hybrid architecture. The site operates 12 kilometres from a town with a cell tower providing 4G coverage. Signal strength at the office location measures -95 dBm, insufficient for reliable indoor coverage but adequate with a roof-mounted directional antenna achieving -75 dBm. This primary connection delivers 15 Mbps average throughput with 80ms latency.

The secondary connection uses a LEO satellite terminal providing 50 Mbps with 40ms latency. Monthly service costs $200 plus $1,500 initial equipment investment. The tertiary connection uses a different mobile operator’s 3G network with lower bandwidth (5 Mbps) but independent infrastructure from the primary cellular link.

Traffic distribution across these paths follows policies configured on the aggregation device:

# Traffic steering policy example

# Business-critical applications use all available links
application-group critical
    load-balance round-robin
    links mobile-primary satellite wisp
    failover-threshold 3-seconds

# Video conferencing prefers low-latency paths
application-group realtime
    prefer-link mobile-primary
    failover-to satellite
    exclude-link wisp                    # Latency too variable

# Bulk transfers use lowest-cost path
application-group bulk
    prefer-link wisp
    time-window 22:00-06:00 satellite    # Off-peak satellite
    failover-to mobile-primary

# Satellite metered - apply fair use
link satellite
    monthly-quota 300GB
    quota-action deprioritise
    exclude-from-bulk-after 250GB

This configuration achieves 65 Mbps aggregate bandwidth under normal conditions (15 + 50 Mbps from the two high-capacity links, with 3G reserved for failover). When the primary cellular link fails during network congestion or tower maintenance, the aggregation device shifts traffic to satellite and 3G backup within 3 seconds, maintaining connectivity for critical applications.

Bandwidth aggregation mechanisms

Aggregation combines capacity from multiple links to exceed single-link bandwidth limitations. Two distinct mechanisms accomplish this: load balancing and bonding. The mechanisms differ in their approach to distributing traffic and in their requirements for endpoint support.

Load balancing distributes connections across available links without modifying the underlying protocols. Each TCP session or UDP flow travels entirely over a single link, with new sessions assigned to whichever link best matches the configured policy. Load balancing requires no special support from remote endpoints because each connection appears as a standard single-path transmission.

The limitation of load balancing emerges when individual sessions require more bandwidth than any single link provides. A video conference consuming 4 Mbps operates successfully when the highest-capacity link provides at least 4 Mbps. If the site’s best link delivers only 2 Mbps, no amount of load balancing across additional 2 Mbps links enables that video session.
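The per-flow behaviour described above is commonly implemented by hashing each connection's 5-tuple so a flow always lands on the same link. This is an illustrative sketch, not a specific device's algorithm; the link names are hypothetical.

```python
# Sketch of per-flow load balancing: each 5-tuple hashes to one link,
# so a single session never exceeds that link's capacity.
# Link names are illustrative, not from a real configuration.
import hashlib

LINKS = ["mobile-primary", "satellite", "wisp"]

def assign_link(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Deterministically pin a flow to one link by hashing its 5-tuple."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return LINKS[digest % len(LINKS)]

# The same flow always maps to the same link; different flows spread out.
link = assign_link("192.168.1.10", 51514, "203.0.113.5", 443)
assert link == assign_link("192.168.1.10", 51514, "203.0.113.5", 443)
```

Because assignment is deterministic, packets within a session never reorder across links, which is exactly why a single session cannot exceed one link's capacity.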

Bonding aggregates links at a lower protocol layer, presenting multiple physical connections as a single logical interface with combined capacity. True bonding requires tunnel endpoints at both sides of the aggregated path. The field site router encapsulates traffic and distributes packets across available links. A bonding concentrator at a data centre or cloud location reassembles packets into original order before forwarding to destinations.

+------------------------------------------------------------------+
|                            FIELD SITE                            |
+------------------------------------------------------------------+
|                                                                  |
|                       +------------------+                       |
|                       |   LAN Clients    |                       |
|                       |  192.168.1.0/24  |                       |
|                       +--------+---------+                       |
|                                |                                 |
|                       +--------v---------+                       |
|                       |  Bonding Router  |                       |
|                       | Bond0: 10.99.0.1 |                       |
|                       +--------+---------+                       |
|                                |                                 |
|                      +--------+--------+                         |
|                      |        |        |                         |
|                   +--v--+  +--v--+  +--v--+                      |
|                   |WAN1 |  |WAN2 |  |WAN3 |                      |
|                   |4G   |  |VSAT |  |WISP |                      |
|                   |10Mb |  |25Mb |  |15Mb |                      |
|                   +--+--+  +--+--+  +--+--+                      |
|                      |        |        |                         |
+------------------------------------------------------------------+
                       |        |        |
                       +--------+--------+     Tunnel packets distributed
                                |              across all three WAN links
                                v
+------------------------------------------------------------------+
|                             INTERNET                             |
+------------------------------------------------------------------+
                                |
                                v
+------------------------------------------------------------------+
|                       BONDING CONCENTRATOR                       |
|                      (Cloud or Data Centre)                      |
+------------------------------------------------------------------+
|                                                                  |
|       +------------------+                                       |
|       | Tunnel Endpoint  |                                       |
|       | Bond0: 10.99.0.2 |                                       |
|       +--------+---------+                                       |
|                |  Reassembles packets from all tunnels           |
|                |  Presents as single 50Mbps connection           |
|                v                                                 |
|       +--------+---------+                                       |
|       | Internet Gateway |                                       |
|       +------------------+                                       |
|                                                                  |
+------------------------------------------------------------------+

Figure 2: Bonding architecture with cloud concentrator reassembling distributed packets

The bonded configuration in this example combines 10 Mbps cellular, 25 Mbps satellite, and 15 Mbps fixed wireless into a single 50 Mbps logical interface. Any application, including that 4 Mbps video conference, can consume capacity from all three links simultaneously. Large file transfers utilise the full aggregated bandwidth.

Bonding introduces overhead and latency costs. Tunnel encapsulation adds 40 to 60 bytes per packet. The concentrator must buffer packets to handle different link latencies, adding 50ms to 200ms depending on the latency spread between fastest and slowest links. If the satellite link has 600ms latency while cellular shows 80ms, the concentrator holds cellular packets for 520ms awaiting satellite packets to maintain sequence.

Practical bonding deployments address latency spread through packet scheduling algorithms. The field router sends time-sensitive packets only over low-latency links while distributing bulk transfer packets across all links. This hybrid approach achieves most of the bandwidth benefit while preserving latency-sensitive application performance.
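The hybrid scheduling idea can be sketched as a simple dispatcher: realtime packets use only links under a latency ceiling, while bulk packets round-robin across everything. Link names and latency figures are illustrative (mirroring the kinds of values discussed above), and the ceiling is a hypothetical policy parameter.

```python
# Sketch of latency-aware packet scheduling on a bonded path:
# realtime traffic avoids high-latency links; bulk uses all links.
# Link latencies and the ceiling are illustrative assumptions.
from itertools import cycle

LINKS = {"cellular": 80, "wisp": 120, "satellite": 600}  # latency in ms
REALTIME_CEILING_MS = 150

realtime_links = cycle([name for name, lat in LINKS.items()
                        if lat <= REALTIME_CEILING_MS])
bulk_links = cycle(LINKS)  # round-robin over every link

def pick_link(packet_class: str) -> str:
    """Route 'realtime' traffic to low-latency links only."""
    return next(realtime_links if packet_class == "realtime" else bulk_links)

# Realtime packets never ride the 600 ms satellite path.
assert pick_link("realtime") in ("cellular", "wisp")
```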

Failover and resilience

Resilience strategies ensure continued operations when individual connections fail. Failover mechanisms detect failures and redirect traffic to surviving links. The speed and reliability of failover depend on detection method sensitivity and on pre-established routing alternatives.

Active monitoring sends probe packets across each link at regular intervals. When probes fail consecutively for a configured threshold, the monitoring system declares the link unavailable and triggers failover. Probe destinations should exist beyond the immediate provider network to detect upstream failures. A link that remains technically operational but routes nowhere provides no useful connectivity.

# Probe configuration for failover detection
link mobile-primary
    probe-destination 8.8.8.8     # Google DNS
    probe-destination 1.1.1.1     # Cloudflare DNS
    probe-interval 5-seconds
    probe-timeout 2-seconds
    failure-threshold 3           # Fail after 3 missed probes
    recovery-threshold 5          # Recover after 5 successful probes

link satellite
    probe-destination 8.8.4.4
    probe-destination 9.9.9.9     # Quad9 DNS
    probe-interval 10-seconds     # Less frequent - metered link
    probe-timeout 5-seconds       # Higher timeout for latency
    failure-threshold 2
    recovery-threshold 3

Conservative failure thresholds prevent false failovers from transient packet loss. A threshold of 3 failures at 5-second intervals declares failure after 15 seconds of complete outage. Aggressive thresholds risk oscillation between links during marginal conditions, disrupting established sessions.
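The threshold counting can be sketched as a small monitor class, assuming one probe result arrives per interval. This illustrates the consecutive-streak behaviour only; a production health checker would also track latency and loss.

```python
# Sketch of threshold-based link health detection: consecutive probe
# failures mark the link down; consecutive successes restore it.
# Thresholds mirror the mobile-primary figures discussed above.
class LinkMonitor:
    def __init__(self, failure_threshold=3, recovery_threshold=5):
        self.failure_threshold = failure_threshold
        self.recovery_threshold = recovery_threshold
        self.healthy = True
        self._streak = 0  # consecutive probes contradicting current state

    def record_probe(self, success: bool) -> bool:
        """Feed one probe result; return the current health state."""
        if success == self.healthy:
            self._streak = 0          # state-confirming result resets streak
            return self.healthy
        self._streak += 1
        limit = (self.failure_threshold if self.healthy
                 else self.recovery_threshold)
        if self._streak >= limit:     # enough contradicting probes: flip
            self.healthy = not self.healthy
            self._streak = 0
        return self.healthy

mon = LinkMonitor()
for _ in range(3):
    mon.record_probe(False)   # three missed probes in a row
assert mon.healthy is False   # declared failed after 15 s of outage
```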

Preemptive failover triggers before complete failure based on degradation indicators. Rising latency, increasing packet loss, or declining throughput indicate impending problems. A cellular link showing latency increase from 80ms to 400ms and packet loss rising from 0.1% to 5% suggests network congestion that will worsen. Preemptive failover shifts traffic before the link becomes unusable.

link mobile-primary
    degrade-latency-threshold 200ms     # Begin shifting at 200ms
    degrade-loss-threshold 2%           # Or 2% packet loss
    degrade-action reduce-weight 50%    # Shift half traffic away
    critical-latency-threshold 500ms    # Full failover at 500ms
    critical-loss-threshold 10%         # Or 10% packet loss
    critical-action failover            # Move all traffic

Recovery after failover requires equal attention. When a failed link returns to service, immediate traffic return risks session disruption if the link fails again. Gradual recovery shifts traffic incrementally, validating link stability before full restoration.

+------------------------------------------------------------------+
|                      FAILOVER STATE MACHINE                      |
+------------------------------------------------------------------+

                  +------------------+
                  |                  |
                  |     HEALTHY      |<---------------------------------+
                  |                  |                                  |
                  +--------+---------+                                  |
                           |                                            |
                           | Probe failures exceed threshold            |
                           v                                            |
                  +--------+---------+                                  |
                  |                  |                                  |
                  |     DEGRADED     |  Recovery probes succeed         |
                  |                  +----------------------------------+
                  +--------+---------+
                           |
                           | Degradation continues
                           | or complete failure
                           v
                  +--------+---------+
                  |                  |
                  |      FAILED      |
                  |                  |
                  +--------+---------+
                           |
                           | Probes begin succeeding
                           v
                  +--------+---------+
                  |                  |
                  |    RECOVERING    +------+
                  |                  |      |
                  +--------+---------+      |
                           |                | Probes fail during
                           | Sustained      | recovery period
                           | success        |
                           v                v
                  +--------+---------+  +---+-------------+
                  |                  |  |                 |
                  |     HEALTHY      |  |     FAILED      |
                  |    (restored)    |  |    (relapse)    |
                  |                  |  |                 |
                  +------------------+  +-----------------+

Figure 3: Link state transitions for failover and recovery
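The transitions in Figure 3 can be encoded as a lookup table. The event names here are illustrative labels for the conditions in the diagram; a real implementation would derive them from probe statistics.

```python
# Sketch of the Figure 3 link state machine as a transition table.
# Event names are illustrative, derived from the diagram's edge labels.
TRANSITIONS = {
    ("HEALTHY", "probes_fail"): "DEGRADED",
    ("DEGRADED", "probes_recover"): "HEALTHY",
    ("DEGRADED", "degradation_continues"): "FAILED",
    ("FAILED", "probes_succeed"): "RECOVERING",
    ("RECOVERING", "sustained_success"): "HEALTHY",
    ("RECOVERING", "probes_fail"): "FAILED",
}

def next_state(state: str, event: str) -> str:
    """Look up the next link state; unknown events leave the state alone."""
    return TRANSITIONS.get((state, event), state)

# A link that fails, then starts answering probes again, ends up RECOVERING.
state = "HEALTHY"
for event in ("probes_fail", "degradation_continues", "probes_succeed"):
    state = next_state(state, event)
assert state == "RECOVERING"
```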

Local caching and optimisation

Bandwidth constraints make efficient use of available capacity essential. Caching stores frequently accessed content locally, eliminating repeated transfers of identical data. WAN optimisation reduces bandwidth consumption for traffic that cannot be cached.

Forward proxy caching intercepts HTTP and HTTPS requests, storing responses for subsequent requests. When multiple users access the same web resource, the cache serves subsequent requests from local storage. Transparent proxying intercepts traffic without client configuration. Explicit proxying requires client configuration but enables HTTPS interception through trusted certificate injection.

Cache effectiveness depends on content cacheability. Static resources like software updates, JavaScript libraries, and images cache well, with hit rates exceeding 80% for repeated access patterns. Dynamic content with user-specific responses does not cache. Modern web applications with heavy API usage show lower cache hit rates than traditional websites, typically 20% to 40%.

A field office with 50 users accessing similar resources achieves substantial bandwidth savings through caching. If daily web traffic totals 10 GB without caching and the cache achieves 35% hit rate, actual WAN consumption drops to 6.5 GB. Over 30 days, the 105 GB savings equals meaningful cost reduction on metered satellite connections.
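The savings arithmetic above generalises to a one-line estimate:

```python
# The caching arithmetic as a reusable sketch: WAN volume saved is
# daily traffic times cache hit rate, summed over the billing period.
def wan_savings_gb(daily_traffic_gb, hit_rate, days=30):
    """Monthly WAN volume saved by a cache with the given hit rate."""
    return daily_traffic_gb * hit_rate * days

# 10 GB/day at a 35% hit rate saves 105 GB over 30 days.
print(round(wan_savings_gb(10, 0.35), 1))  # → 105.0
```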

+------------------------------------------------------------------+
|                       CACHING ARCHITECTURE                       |
+------------------------------------------------------------------+

                        +------------------+
                        |      Client      |
                        |     Devices      |
                        +--------+---------+
                                 |
                                 | HTTP/HTTPS requests
                                 v
                        +--------+---------+
                        |                  |
                        |   Transparent    |
                        |      Proxy       |
                        |   (intercept)    |
                        |                  |
                        +--------+---------+
                                 |
                        +-------+-------+
                        |               |
                        v               v
                   +----+----+    +-----+-----+
                   |  Cache  |    |   Cache   |
                   |   HIT   |    |   MISS    |
                   +---------+    +-----+-----+
                        |               |
                        | Serve from    | Forward to origin
                        | local storage | via WAN link
                        v               v
                   +----+----+    +-----+-----+
                   | Client  |    |    WAN    |
                   | Response|    |  Router   |
                   +---------+    +-----+-----+
                                        |
                                        v
                                +-------+-------+
                                |   Internet    |
                                +-------+-------+
                                        |
                                        v
                                +-------+-------+
                                | Origin Server |
                                +---------------+

Figure 4: Transparent caching proxy intercepting client requests

WAN optimisation applies to traffic that cannot be cached. Deduplication identifies repeated byte sequences across all traffic, storing them once and transmitting references for subsequent occurrences. A user sending the same document attachment to multiple recipients triggers full transmission once, with subsequent transfers sending only the reference.

Protocol optimisation addresses TCP performance degradation over high-latency links. Standard TCP congestion control algorithms interpret latency as congestion, reducing transmission rates on satellite links even when capacity remains available. WAN optimisation appliances terminate TCP locally and use optimised protocols across the WAN, improving throughput by 3x to 10x on high-latency paths.

Compression reduces data size before transmission. Text-based content compresses 60% to 80%. Already-compressed content like images and video shows minimal additional compression. Real-time compression adds processing latency of 5ms to 20ms depending on algorithm and hardware.
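The difference between compressible and incompressible traffic can be demonstrated with the standard zlib library. The sample data below is illustrative: repetitive HTTP-like text stands in for text-based content, and random bytes stand in for already-compressed media; exact ratios vary with input.

```python
# Demonstrating compression ratios: repetitive text shrinks dramatically,
# while high-entropy (already-compressed) data barely changes size.
import os
import zlib

text = b"GET /reports/monthly HTTP/1.1\r\nHost: example.org\r\n" * 200
random_bytes = os.urandom(10_000)  # stands in for compressed media

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data, 6)) / len(data)

assert ratio(text) < 0.2           # text-like data compresses well
assert ratio(random_bytes) > 0.9   # incompressible data barely shrinks
```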

Community and shared connectivity

Resource pooling reduces per-site costs by sharing infrastructure across multiple organisations or community members. Shared connectivity models extend from informal arrangements to structured cooperatives with defined governance.

Shared backhaul connects multiple sites to a common upstream provider. A high-capacity connection to regional infrastructure serves as backhaul for last-mile links to individual sites. A VSAT terminal providing 50 Mbps might backhaul wireless links to five field sites within 15 kilometres, with each site receiving 10 Mbps allocation.

The economics favour sharing when individual site requirements fall below the minimum viable connection from upstream providers. Satellite terminals and licensed wireless links impose fixed costs regardless of utilisation. A $500 monthly satellite service costs the same whether utilised at 10% or 90% of capacity. Five organisations sharing that terminal pay $100 each for connectivity that none could afford individually.

Governance structures for shared connectivity must address capacity allocation, cost sharing, and dispute resolution. Static allocation reserves fixed bandwidth per participant regardless of usage patterns. Dynamic allocation shares capacity based on real-time demand, improving aggregate utilisation but requiring fair queuing to prevent one participant’s traffic from starving others.

# Shared bandwidth allocation configuration
participant org-alpha
    guaranteed-bandwidth 5Mbps    # Always available
    burst-bandwidth 15Mbps        # Available when capacity exists
    cost-share 25%

participant org-beta
    guaranteed-bandwidth 8Mbps
    burst-bandwidth 20Mbps
    cost-share 35%

participant org-gamma
    guaranteed-bandwidth 3Mbps
    burst-bandwidth 10Mbps
    cost-share 15%

participant community-centre
    guaranteed-bandwidth 4Mbps
    burst-bandwidth 10Mbps
    cost-share 25%

# Fair queuing when demand exceeds capacity
congestion-policy weighted-fair-queue
    weight-by guaranteed-bandwidth
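The congestion behaviour the weights imply can be sketched as a proportional split. Real weighted fair queuing operates per-packet inside the scheduler; this simplified model shows only the steady-state share each participant receives when demand exceeds capacity, using the guarantee figures from the configuration above.

```python
# Simplified sketch of weighted fair sharing under congestion: capacity
# is divided in proportion to each participant's guaranteed bandwidth.
# Figures mirror the configuration above; the algorithm is illustrative.
GUARANTEES = {"org-alpha": 5, "org-beta": 8, "org-gamma": 3,
              "community-centre": 4}  # Mbps

def allocate(capacity_mbps: float) -> dict:
    """Split capacity proportionally to guaranteed bandwidth."""
    total = sum(GUARANTEES.values())
    return {name: capacity_mbps * g / total
            for name, g in GUARANTEES.items()}

# On a congested 40 Mbps backhaul, org-beta (8 of 20 weight units)
# receives 16 Mbps.
print(allocate(40)["org-beta"])  # → 16.0
```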

Community networks extend connectivity to populations beyond organisational sites. A field office establishing connectivity for its operations might extend access to surrounding community members, improving community relations and providing public benefit. Regulatory requirements in some jurisdictions prohibit reselling connectivity or operating as an unlicensed telecommunications provider. Legal review should precede community network establishment.

Technical architectures for community access separate organisational traffic from public traffic through network segmentation. Captive portals manage public access with usage limits preventing abuse. Quality of service policies protect organisational traffic from community congestion.

Cost-benefit evaluation

Connectivity decisions balance performance requirements against budget constraints across multiple dimensions: initial deployment cost, ongoing operational cost, and indirect costs from inadequate connectivity.

Deployment costs include equipment, installation, and any construction or infrastructure development. A fixed wireless link requires antennas at both endpoints, mounting hardware, cabling, and potentially tower construction. Total deployment for a 10-kilometre point-to-point link ranges from $2,000 for unlicensed equipment self-installed to $15,000 for licensed spectrum equipment with professional installation and tower work.

Technology                            Equipment Cost   Installation Cost   Total Deployment
Mobile router with external antenna   $200-$800        $100-$500           $300-$1,300
Fixed wireless (unlicensed)           $400-$1,500      $500-$2,000         $900-$3,500
Fixed wireless (licensed)             $2,000-$8,000    $2,000-$5,000       $4,000-$13,000
VSAT terminal (GEO)                   $1,500-$5,000    $500-$1,500         $2,000-$6,500
LEO terminal                          $500-$2,500      $200-$500           $700-$3,000
Wired extension (per 100m)            $500-$2,000      $1,000-$3,000       $1,500-$5,000

Operational costs recur monthly and include service fees, data charges, and maintenance. Metered services like cellular data and GEO satellite incur costs proportional to usage. Flat-rate services provide predictable costs but impose fair use policies limiting heavy usage.

Technology              Monthly Service   Data Costs            Typical Monthly Total
Mobile data (20GB)      $0-$50            $2-$20/GB             $40-$450
Fixed wireless (WISP)   $50-$300          Included              $50-$300
VSAT (GEO, 10GB CIR)    $200-$800         $5-$15/GB excess      $200-$1,000
LEO satellite           $100-$500         Included (fair use)   $100-$500

Indirect costs from inadequate connectivity manifest as productivity loss, delayed communications, and inability to use cloud services. Quantifying these costs requires understanding operational impact. A programme team unable to submit reports on time because of connectivity limitations risks delayed funding disbursement. Field staff spending 30 minutes daily waiting for systems to respond across slow links lose roughly 10 hours of productive time each month.

A worked example illustrates total cost comparison. A field office requires 20 Mbps aggregate bandwidth with high availability for 15 staff over a 24-month deployment:

Option A: Dual mobile connections

  • Equipment: 2 × $500 routers with antennas = $1,000
  • Monthly service: 2 × $150 = $300
  • 24-month total: $1,000 + ($300 × 24) = $8,200
  • Risk: Cellular coverage may degrade; single point of failure at tower

Option B: Mobile primary with LEO backup

  • Equipment: $500 router + $1,500 LEO terminal = $2,000
  • Monthly service: $150 mobile + $200 satellite = $350
  • 24-month total: $2,000 + ($350 × 24) = $10,400
  • Benefit: Independent infrastructure paths; satellite works during tower outages

Option C: Fixed wireless primary with mobile backup

  • Equipment: $2,500 wireless + $500 mobile = $3,000
  • Installation: $1,500 (includes site survey and mounting)
  • Monthly service: $150 WISP + $100 mobile = $250
  • 24-month total: $4,500 + ($250 × 24) = $10,500
  • Benefit: Highest primary bandwidth; lowest ongoing cost

Options B and C show similar total costs but different characteristics. Option B deploys faster with lower upfront investment and works regardless of WISP availability. Option C provides higher sustained bandwidth and lower operational costs but requires WISP coverage within line of sight.
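The comparison above reduces to a simple total-cost-of-ownership formula, which makes it easy to re-run with local figures:

```python
# The worked comparison as a sketch: upfront cost plus recurring
# service over the deployment period, using the figures above.
def total_cost(upfront, monthly, months=24):
    """Deployment cost plus recurring service over the period."""
    return upfront + monthly * months

options = {
    "A: dual mobile": total_cost(1_000, 300),
    "B: mobile + LEO backup": total_cost(2_000, 350),
    "C: fixed wireless + mobile": total_cost(4_500, 250),
}
for name, cost in options.items():
    print(f"{name}: ${cost:,}")
# A: dual mobile: $8,200
# B: mobile + LEO backup: $10,400
# C: fixed wireless + mobile: $10,500
```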

Local provider engagement

Establishing connectivity in field locations requires engaging with local telecommunications providers, internet service providers, and sometimes community organisations controlling shared infrastructure. These relationships influence service quality, pricing, and support responsiveness beyond what formal contracts specify.

Provider assessment begins before deployment. Understanding the local telecommunications landscape identifies all available options, not just the most visible. Mobile operators advertising urban coverage may have undocumented rural infrastructure. Small WISPs serving specific areas may not appear in general searches but offer competitive service where they operate.

Technical assessment validates provider claims. Coverage maps indicate theoretical reach but not actual performance at specific locations. A site survey with test equipment confirms signal strength, measures achievable throughput, and identifies interference or congestion patterns. A provider claiming 20 Mbps service may deliver that during quiet morning hours but far less at evening peak.

Service level discussions establish expectations before commitment. Formal SLAs from major providers rarely extend to rural service areas. Informal understandings about response times, escalation paths, and service credits matter more than contract language that excludes the deployment location from coverage guarantees.

Relationship maintenance throughout the deployment improves incident response. Knowing the local technician by name and maintaining regular contact ensures faster attention when problems occur. Provider staff prioritise customers they know over anonymous service tickets.

Payment arrangements require attention in regions with limited banking infrastructure. Providers may require payment in local currency through local channels. Budget processes expecting monthly invoicing and 30-day payment terms may not accommodate providers requiring prepayment or cash transactions.

Site type deployment patterns

Different field site types present distinct connectivity requirements based on operational characteristics, staff count, criticality, and expected deployment duration.

Permanent country or regional offices justify higher infrastructure investment given multi-year operational horizons. Wired or fixed wireless primary connections with independent backup paths provide the reliability expected for 50+ staff and critical operations. Investment of $10,000 to $25,000 in connectivity infrastructure amortises over 5+ years to reasonable annual costs.

Semi-permanent field offices operating for 1 to 3 years balance infrastructure investment against deployment duration. Relocatable equipment enables recovery of investment when operations move. Mobile and satellite connectivity requiring minimal fixed installation suit these deployments. Equipment choices should allow the investment to be recovered even if the office closes after 18 months.

Temporary response sites during emergencies prioritise rapid deployment over operational refinement. Satellite terminals and mobile routers deploy in hours without site surveys or infrastructure installation. The higher operational costs of immediately available solutions are justified because short deployments cannot absorb the delays of optimising infrastructure.

Mobile operations including health clinics, distribution teams, and assessment missions require connectivity that travels with the team. Vehicle-mounted terminals, portable satellite units, and mobile hotspots provide connectivity wherever the team operates. Battery or vehicle power eliminates dependency on site infrastructure.

Site Type        Typical Duration   Primary Technology          Backup Technology    Investment Level
Country office   5+ years           Fibre/fixed wireless        Mobile + satellite   $15,000-$30,000
Regional hub     3-5 years          Fixed wireless              Dual mobile          $8,000-$15,000
Field office     1-3 years          Mobile/WISP                 Satellite            $3,000-$8,000
Response site    1-12 months        Satellite                   Mobile               $2,000-$5,000
Mobile team      Ongoing            Mobile/portable satellite   Cellular backup      $1,000-$3,000

Implementation considerations

Organisations with minimal IT capacity should prioritise solutions with low operational complexity. Mobile connectivity with automatic failover to satellite backup provides resilience without requiring networking expertise for ongoing management. Managed services from satellite providers or regional ISPs shift operational burden to the provider at premium cost but reduced internal effort.

Single-person IT functions balance multiple sites and cannot provide on-site support for remote connectivity issues. Remote management capability becomes essential. Equipment should support remote access for diagnostics and configuration changes. Cellular-connected management planes enable remote access even when primary connectivity fails.

Established IT functions can implement more sophisticated architectures including bonding concentrators, WAN optimisation appliances, and detailed traffic engineering. The operational overhead of managing these systems requires dedicated network administration time.

Organisations operating in hostile environments must consider physical security of connectivity equipment. Antennas and terminals mounted externally present visible targets. Satellite terminals particularly identify sites as having resources worth protecting. Discreet installation and security measures for external equipment merit attention during site planning.

Power availability constrains technology selection. Satellite terminals consume 40W to 100W continuously. Networking equipment adds 20W to 50W. Total connectivity infrastructure power budget of 100W to 200W requires solar installations of 400W to 800W panel capacity with battery storage for overnight and cloudy conditions. Power planning must precede or accompany connectivity planning. See Solar and Off-Grid Power for sizing calculations.
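The power budget above translates into rough solar sizing. The peak-sun-hour, system-efficiency, autonomy, and depth-of-discharge figures below are illustrative assumptions for a simple estimate, not recommendations; proper sizing belongs in the solar planning process referenced above.

```python
# Rough solar sizing sketch for a continuous connectivity load.
# All default parameters are illustrative assumptions.
def solar_panel_watts(load_w, peak_sun_hours=6.0, system_efficiency=0.75):
    """Panel capacity needed to replace a 24 h load's daily energy."""
    daily_wh = load_w * 24
    return daily_wh / (peak_sun_hours * system_efficiency)

def battery_wh(load_w, autonomy_hours=16, max_depth_of_discharge=0.5):
    """Battery capacity for overnight and cloudy-day operation."""
    return load_w * autonomy_hours / max_depth_of_discharge

# A 150 W connectivity load needs roughly 800 W of panels and a
# 4.8 kWh battery bank under these assumptions.
print(solar_panel_watts(150), battery_wh(150))  # → 800.0 4800.0
```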

See also