DNS and DHCP
The Domain Name System translates human-readable names into IP addresses, while the Dynamic Host Configuration Protocol automates IP address assignment to network devices. These two services form the foundation of network operations: DNS failures prevent users from reaching any service by name, and DHCP failures prevent new devices from joining the network. For organisations operating across headquarters, regional offices, and field locations, DNS and DHCP architecture must balance centralised management with local resilience.
- DNS (Domain Name System)
- Hierarchical naming system that maps domain names to IP addresses and other resource records. Operates through a distributed database of authoritative servers and caching resolvers.
- DHCP (Dynamic Host Configuration Protocol)
- Network protocol that automatically assigns IP addresses and configuration parameters to devices joining a network. Eliminates manual IP configuration while ensuring address uniqueness.
- Authoritative DNS server
- Server that holds the official records for a domain zone and responds definitively to queries about that zone.
- Recursive resolver
- DNS server that receives queries from clients and resolves them by traversing the DNS hierarchy, caching results for subsequent queries.
- DHCP scope
- Range of IP addresses that a DHCP server can assign, along with associated configuration parameters such as gateway, DNS servers, and lease duration.
DNS architecture principles
DNS architecture for distributed organisations separates authoritative services from resolution services. Authoritative servers hold the official records for domains the organisation controls. Resolution services answer queries from internal clients by consulting authoritative servers and caching results. This separation allows independent scaling and distinct security policies for each function.
The resolution path for a client query illustrates this architecture. When a user’s workstation queries grants.example.org, the request travels to a configured resolver. If the resolver has a cached answer from a previous query, it returns immediately. Otherwise, the resolver contacts authoritative servers, starting from root servers and descending through the DNS hierarchy until it reaches the authoritative server for example.org. That server returns the IP address, which the resolver caches according to the record’s time-to-live value before returning it to the client.
+------------------------------------------------------------------+
|                     CLIENT RESOLUTION PATH                       |
+------------------------------------------------------------------+

 Workstation                Internal                 External
  (Client)                  Resolver               DNS Hierarchy
     |                         |                         |
     |---(1) Query------------>|                         |
     |    grants.example.org   |                         |
     |                         |                         |
     |                   [Cache check]                   |
     |                   [Cache miss]                    |
     |                         |                         |
     |                         |---(2) Query------------>|
     |                         |    root servers         |
     |                         |                         |
     |                         |<--(3) Referral----------|
     |                         |    .org servers         |
     |                         |                         |
     |                         |---(4) Query------------>|
     |                         |    .org servers         |
     |                         |                         |
     |                         |<--(5) Referral----------|
     |                         |    example.org NS       |
     |                         |                         |
     |                         |---(6) Query------------>|
     |                         |    example.org auth     |
     |                         |                         |
     |                         |<--(7) Answer------------|
     |                         |    192.0.2.50           |
     |                         |                         |
     |                   [Cache answer]                  |
     |                   [TTL: 3600s]                    |
     |                         |                         |
     |<--(8) Answer------------|                         |
     |    192.0.2.50           |                         |
     |                         |                         |

Figure 1: DNS resolution path from client through recursive resolver to authoritative hierarchy
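The wire format behind step (1) is compact enough to build by hand. The following sketch (Python standard library only; the query ID 0x1234 is arbitrary) constructs the query packet a stub resolver would send to a resolver over UDP port 53:

```python
import struct

def build_dns_query(name: str, qtype: int = 1, query_id: int = 0x1234) -> bytes:
    """Build a DNS query in RFC 1035 wire format (QTYPE 1 = A record)."""
    # Header: ID, flags (RD bit set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

packet = build_dns_query("grants.example.org")
```

Sending `packet` over UDP and parsing the reply is what a resolver library does internally; the sketch stops at packet construction so it stays self-contained.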
Caching drives DNS efficiency. A resolver serving 500 users does not make 500 external queries when all users access the same resource within the cache lifetime. With a 1-hour TTL, the first query triggers external resolution while the remaining 499 receive cached answers in under 1 millisecond. This caching behaviour makes DNS remarkably efficient but requires careful TTL selection: values too short increase external query load, while values too long delay propagation of changes.
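The caching arithmetic above can be sketched in a few lines. `CachingResolver` is a hypothetical illustration with a stubbed upstream and a simulated clock, not a production resolver:

```python
import time

class CachingResolver:
    """Minimal TTL-respecting cache; upstream lookup is a stubbed callable."""
    def __init__(self, upstream, clock=time.monotonic):
        self.upstream = upstream        # callable: name -> (answer, ttl)
        self.clock = clock
        self.cache = {}                 # name -> (answer, expiry timestamp)
        self.upstream_queries = 0

    def resolve(self, name):
        entry = self.cache.get(name)
        if entry and entry[1] > self.clock():
            return entry[0]             # cache hit: no external query
        answer, ttl = self.upstream(name)
        self.upstream_queries += 1
        self.cache[name] = (answer, self.clock() + ttl)
        return answer

# 500 clients querying the same name within the TTL trigger one upstream query
now = [0.0]  # simulated clock so the example is deterministic
resolver = CachingResolver(lambda name: ("192.0.2.50", 3600),
                           clock=lambda: now[0])
answers = [resolver.resolve("grants.example.org") for _ in range(500)]
```

Advancing the simulated clock past the 3600-second TTL forces the next query back to the upstream, which is exactly the propagation delay the TTL trade-off describes.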
Internal and external DNS separation
Organisations maintain separate DNS infrastructures for internal and external purposes. External DNS serves the public internet, resolving names for web sites, mail servers, and other publicly accessible services. Internal DNS serves organisational users, resolving names for internal systems that should not be visible externally.
External DNS records live in public zones hosted by authoritative servers accessible from the internet. A typical external zone for example.org contains records for the public website, mail exchange servers, SPF records for email authentication, and other publicly required information. These records use public IP addresses and reveal only services intended for external access.
Internal DNS operates entirely within the organisation’s network boundary. Internal zones contain records for servers, printers, internal applications, and infrastructure systems. A workstation querying fileserver.internal.example.org receives the server’s private IP address, which functions only within the internal network. External attackers cannot resolve these names because the internal DNS servers refuse queries from outside the network.
The separation prevents information disclosure. Publishing internal server names and IP addresses in public DNS reveals network topology to potential attackers. Internal DNS keeps this information private while still providing name resolution for authorised users. The architectural boundary between internal and external DNS should be absolute: internal resolvers never forward queries for internal zones to external servers.
+------------------------------------------------------------------+
|                       DNS ZONE SEPARATION                        |
+------------------------------------------------------------------+
|                                                                  |
|   EXTERNAL DNS                      INTERNAL DNS                 |
|   (Public Internet)                 (Organisation Network)       |
|                                                                  |
|  +---------------------------+  +----------------------------+   |
|  | example.org               |  | internal.example.org       |   |
|  | (public zone)             |  | (private zone)             |   |
|  |                           |  |                            |   |
|  | www   A  203.0.113.10     |  | fileserver  A  10.0.1.50   |   |
|  | mail  A  203.0.113.20     |  | printer01   A  10.0.1.60   |   |
|  | @     MX mail.example.org |  | grants      A  10.0.2.10   |   |
|  +---------------------------+  | hrportal    A  10.0.2.20   |   |
|                                 +----------------------------+   |
|  Hosted: Cloud DNS                                               |
|  or registrar                   Hosted: Internal                 |
|                                 resolvers only                   |
|                                                                  |
+------------------------------------------------------------------+
Figure 2: Separation between public external DNS and private internal DNS zones
Split-horizon DNS
Split-horizon DNS returns different answers for the same query depending on the source of the request. When an internal user queries portal.example.org, they receive the internal IP address 10.0.2.10. When an external user queries the same name, they receive the public IP address 203.0.113.30. This pattern allows a single name to function both internally and externally without requiring users to know different URLs.
The mechanism relies on separate views of the same zone. Internal resolvers hold a view containing private addresses for services accessible from within the network. External authoritative servers hold a view containing public addresses for the same names. The query source determines which view responds.
Split-horizon introduces operational complexity. Changes must propagate to both views, and inconsistency between views causes confusion when users move between internal and external networks. A user accessing portal.example.org from the office expects the same experience as when accessing from home, but the underlying IP addresses differ. Testing must verify both views remain synchronised when records change.
The pattern proves valuable for organisations with services accessible both internally and externally. Without split-horizon, users would need to remember different URLs (portal.example.org externally, portal.internal.example.org internally) or the organisation would need to route all internal traffic through external firewalls. Split-horizon avoids both problems by presenting a unified namespace while routing traffic appropriately.
+------------------------------------------------------------------+
|                        SPLIT-HORIZON DNS                         |
+------------------------------------------------------------------+
|                                                                  |
|                      portal.example.org                          |
|                             |                                    |
|           +-----------------+-----------------+                  |
|           |                                   |                  |
|           v                                   v                  |
|   +---------------+                   +---------------+          |
|   | Internal View |                   | External View |          |
|   |               |                   |               |          |
|   | portal A      |                   | portal A      |          |
|   |   10.0.2.10   |                   |  203.0.113.30 |          |
|   +-------+-------+                   +-------+-------+          |
|           |                                   |                  |
|           v                                   v                  |
|   +---------------+                   +---------------+          |
|   | Internal      |                   | External      |          |
|   | Users         |                   | Users         |          |
|   |               |                   |               |          |
|   | Traffic stays |                   | Traffic       |          |
|   | on internal   |                   | enters via    |          |
|   | network       |                   | firewall/LB   |          |
|   +---------------+                   +---------------+          |
|                                                                  |
+------------------------------------------------------------------+
Figure 3: Split-horizon DNS returning different addresses based on query source
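The view-selection mechanism reduces to checking the query source against the internal address ranges. A minimal sketch, assuming a single 10.0.0.0/8 internal range and the portal addresses used above (the `VIEWS` table is illustrative, not a real zone file):

```python
import ipaddress

# Hypothetical split-horizon zone data: same name, two views
VIEWS = {
    "internal": {"portal.example.org": "10.0.2.10"},
    "external": {"portal.example.org": "203.0.113.30"},
}
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]

def resolve(name: str, source_ip: str) -> str:
    """Answer from the view that matches the query's source address."""
    src = ipaddress.ip_address(source_ip)
    view = "internal" if any(src in net for net in INTERNAL_NETS) else "external"
    return VIEWS[view][name]
```

In BIND this logic corresponds to `view` clauses with `match-clients` ACLs; the sketch only shows the decision itself.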
DNS security
DNS security encompasses protecting resolution integrity, preventing information leakage, and filtering malicious content. Each concern requires distinct mechanisms operating at different points in the resolution chain.
DNSSEC (DNS Security Extensions) cryptographically signs DNS records, allowing resolvers to verify that responses have not been tampered with during transit. Without DNSSEC, an attacker positioned between the resolver and authoritative server can substitute forged responses, directing users to malicious servers. DNSSEC prevents this by chaining cryptographic signatures from the root zone through each delegation. A resolver verifying a signed response can confirm the authoritative server created it and no modification occurred.
DNSSEC deployment requires signing zones with cryptographic keys and publishing the corresponding public keys in parent zones. The example.org zone administrator generates a key pair, signs all records in the zone, and publishes a DS (Delegation Signer) record in the .org zone pointing to their key. Resolvers following the signature chain from the root can then validate responses. Key rotation requires publishing new keys before retiring old ones to maintain continuity of validation.
DNS over HTTPS (DoH) and DNS over TLS (DoT) encrypt DNS queries between clients and resolvers. Traditional DNS uses unencrypted UDP packets, allowing network observers to monitor which domains users resolve. DoH wraps DNS queries in HTTPS connections on port 443, making them indistinguishable from web traffic. DoT uses TLS encryption on a dedicated port (853). Both protocols prevent eavesdropping on DNS queries and responses.
Encrypted DNS creates operational tension. Enterprise security architectures rely on inspecting DNS queries to detect malware communication, enforce acceptable use policies, and log security-relevant events. When clients use DoH to external resolvers like Cloudflare (1.1.1.1) or Google (8.8.8.8), these queries bypass internal monitoring. Organisations must configure clients to use internal DoH/DoT resolvers or disable client-level encrypted DNS to maintain visibility.
DNS filtering blocks resolution of known malicious or policy-violating domains. When a client queries a domain associated with malware command-and-control infrastructure, the filtering resolver returns a block response instead of the actual IP address. The query never reaches the malicious server because the client receives either no answer, an NXDOMAIN response indicating the domain does not exist, or a redirect to an internal block page.
Filtering operates through domain blocklists maintained by security vendors or community projects. Resolvers check each query against the blocklist before forwarding to authoritative servers. Response policy zones (RPZ) provide a standardised mechanism for implementing filtering rules that resolvers apply during resolution.
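The blocklist check can be sketched as a wrapper around normal resolution. This illustrates the filtering decision, not the RPZ zone format itself; the blocked name is invented for the example:

```python
# Hypothetical blocklist entries, as delivered by a vendor or community feed
BLOCKLIST = {"malware-c2.example.net"}

def filtered_resolve(name, upstream):
    """Check the blocklist before forwarding; mimic an RPZ NXDOMAIN rewrite."""
    if name.lower().rstrip(".") in BLOCKLIST:
        return ("NXDOMAIN", None)       # query never reaches the real server
    return ("NOERROR", upstream(name))  # normal resolution path

status, answer = filtered_resolve("malware-c2.example.net",
                                  lambda n: "203.0.113.99")
```

A real RPZ deployment can also rewrite the answer to a block-page address rather than returning NXDOMAIN; that is a one-line change to the blocked branch.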
DHCP architecture
DHCP automates IP address assignment through a four-phase process: discovery, offer, request, and acknowledgement. A client joining the network broadcasts a discovery message seeking DHCP servers. Servers respond with offers containing available IP addresses. The client requests its preferred offer, and the server acknowledges, completing the lease. This exchange, known as DORA (Discover, Offer, Request, Acknowledge), comprises four messages (two network round trips) and normally completes in well under a second.
+------------------------------------------------------------------+
|                        DHCP DORA EXCHANGE                        |
+------------------------------------------------------------------+

  Client                                          DHCP Server
     |                                                 |
     |---(1) DHCP DISCOVER (broadcast)---------------->|
     |    "I need an IP address"                       |
     |    Client MAC: aa:bb:cc:dd:ee:ff                |
     |                                                 |
     |<--(2) DHCP OFFER--------------------------------|
     |    "Here's 10.0.1.100 available"                |
     |    Lease time: 86400 seconds                    |
     |    Gateway: 10.0.1.1                            |
     |    DNS: 10.0.1.10, 10.0.1.11                    |
     |                                                 |
     |---(3) DHCP REQUEST (broadcast)----------------->|
     |    "I want 10.0.1.100"                          |
     |    Client MAC: aa:bb:cc:dd:ee:ff                |
     |                                                 |
     |<--(4) DHCP ACK----------------------------------|
     |    "Confirmed: 10.0.1.100 is yours"             |
     |    Lease time: 86400 seconds                    |
     |                                                 |
     |  [Client configures interface]                  |
     |  [Lease renewal at 50% (43200s)]                |
     |                                                 |

Figure 4: DHCP four-phase exchange (DORA) for address acquisition
A DHCP scope defines the address range and configuration parameters for a network segment. A scope for the 10.0.1.0/24 subnet might allocate addresses 10.0.1.100 through 10.0.1.250 for dynamic assignment, reserving 10.0.1.1 through 10.0.1.99 for static infrastructure devices. The scope also specifies the default gateway (10.0.1.1), DNS servers (10.0.1.10, 10.0.1.11), domain name (internal.example.org), and lease duration (86400 seconds for a typical office network).
Lease duration balances address efficiency against resilience. Short leases (4-8 hours) reclaim addresses quickly when devices leave the network but increase DHCP server load and fail faster if the server becomes unreachable. Long leases (7-30 days) reduce server load and tolerate longer outages but delay address reclamation. An office network with stable workstations benefits from 24-hour leases. A guest network with transient devices requires 4-hour leases to prevent address exhaustion.
DHCP reservations provide consistent addresses to specific devices while maintaining DHCP’s configuration management benefits. A reservation binds a client’s MAC address to a specific IP address. When the server recognises the MAC in a discovery message, it offers the reserved address rather than allocating from the dynamic pool. Printers, network devices, and servers that require stable addresses receive reservations, eliminating manual configuration while ensuring address consistency.
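The reservation-before-pool ordering can be sketched as follows. `DhcpScope` is a hypothetical in-memory illustration using the scope values from above; a real server additionally tracks lease expiry, declines, and conflicts:

```python
import ipaddress

class DhcpScope:
    """Illustrative scope: reservations checked before the dynamic pool."""
    def __init__(self, start, end, reservations=None):
        self.pool = [str(ipaddress.ip_address(a))
                     for a in range(int(ipaddress.ip_address(start)),
                                    int(ipaddress.ip_address(end)) + 1)]
        self.reservations = reservations or {}   # MAC -> reserved IP
        self.leases = {}                         # MAC -> leased IP

    def offer(self, mac):
        if mac in self.reservations:
            return self.reservations[mac]        # reserved address wins
        if mac in self.leases:
            return self.leases[mac]              # renew the existing lease
        taken = set(self.leases.values()) | set(self.reservations.values())
        ip = next(a for a in self.pool if a not in taken)
        self.leases[mac] = ip
        return ip

scope = DhcpScope("10.0.1.100", "10.0.1.250",
                  reservations={"00:11:22:33:44:55": "10.0.1.120"})
```

Note that the reserved address is also excluded from dynamic allocation, so the pool can never hand 10.0.1.120 to another device.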
DHCP relay and multi-subnet deployment
DHCP broadcasts do not cross router boundaries. A client in the 10.0.2.0/24 subnet cannot receive offers from a DHCP server in the 10.0.1.0/24 subnet because routers do not forward broadcast traffic. DHCP relay (also called IP helper) solves this by converting broadcasts to unicast packets directed at a specific DHCP server.
The relay agent, configured on routers at each subnet, listens for DHCP broadcasts. When it receives a discovery message, it adds information about the originating subnet (the GIADDR field) and forwards the request as a unicast packet to the configured DHCP server. The server uses the GIADDR to select the appropriate scope and returns an offer to the relay agent, which broadcasts it on the client’s subnet.
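On the server side, GIADDR-based scope selection reduces to a subnet lookup. A sketch; the scope names in `SCOPES` are invented for illustration:

```python
import ipaddress

# Hypothetical scope table keyed by subnet
SCOPES = {
    ipaddress.ip_network("10.0.1.0/24"): "scope-hq",
    ipaddress.ip_network("10.0.2.0/24"): "scope-branch",
}

def select_scope(giaddr: str) -> str:
    """Pick the scope whose subnet contains the relay agent address (GIADDR)."""
    addr = ipaddress.ip_address(giaddr)
    for subnet, scope in SCOPES.items():
        if addr in subnet:
            return scope
    raise LookupError(f"no scope configured for relay {giaddr}")
```

A request relayed from the 10.0.2.0/24 router therefore receives an offer from the branch scope, even though the server itself sits on another subnet.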
This architecture centralises DHCP management while serving multiple subnets. A single DHCP server (or highly available pair) maintains scopes for all organisational subnets. Each router runs a relay agent pointing to the central server. Adding a new subnet requires configuring a scope on the central server and a relay agent on the local router, without deploying additional DHCP infrastructure.
For organisations with 50 or more subnets, centralised DHCP with relay agents reduces operational burden substantially compared to distributed DHCP servers. The central server maintains a single source of truth for IP allocation across the organisation, simplifies audit and compliance reporting, and eliminates the risk of inconsistent configuration across multiple DHCP instances.
High availability
DNS and DHCP availability directly impacts user productivity. When DNS fails, users cannot access any service by name, even if the underlying services function normally. When DHCP fails, devices with expiring leases or new devices cannot obtain network configuration. High availability designs eliminate single points of failure in both services.
DNS high availability requires multiple resolvers and multiple authoritative servers. Clients configure two or more resolver addresses, failing over automatically if the primary becomes unreachable. Resolver implementations typically query the primary server first, switching to secondary servers after 1-3 second timeouts. For authoritative DNS, multiple NS records point to servers in different locations, and resolvers query any available server.
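On Linux clients, this failover behaviour is configured in the classic glibc resolver file. The addresses below are this document's example resolvers; `timeout` and `attempts` are standard resolv.conf options that tighten the default failover delay:

```conf
# /etc/resolv.conf - two resolvers with faster failover
nameserver 10.0.1.10
nameserver 10.0.1.11
options timeout:2 attempts:2
```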
DHCP high availability follows two patterns: active-passive failover and split scope. In active-passive failover, a primary server handles all requests while a secondary server monitors the primary’s health. If the primary fails, the secondary assumes responsibility, typically within 30-60 seconds. The servers synchronise lease databases to ensure the secondary knows which addresses are already assigned.
Split scope divides the address pool between two independent servers. Each server owns a portion of the available addresses and operates independently. A scope covering 10.0.1.100-10.0.1.250 might assign 10.0.1.100-10.0.1.175 to the primary server and 10.0.1.176-10.0.1.250 to the secondary. Both servers respond to discoveries, and clients accept whichever offer arrives first. This approach requires no synchronisation but sacrifices some address efficiency because each server can only allocate from its portion.
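Dividing the pool is simple arithmetic. A sketch of the split (the exact split point is arbitrary; the 100-175/176-250 division above is one valid choice):

```python
# Split-scope sketch: two independent servers own disjoint halves of the pool
def split_pool(start_host: int, end_host: int, prefix: str = "10.0.1."):
    addrs = [f"{prefix}{h}" for h in range(start_host, end_host + 1)]
    mid = len(addrs) // 2
    return addrs[:mid], addrs[mid:]   # primary's half, secondary's half

primary, secondary = split_pool(100, 250)
```

Because the halves are disjoint, neither server needs to know the other's lease state, which is precisely why no synchronisation is required.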
For most organisations, active-passive failover provides better address utilisation and simpler operations. Split scope serves environments where server-to-server communication is unreliable or where the complexity of lease synchronisation is unacceptable.
IPAM integration
IP Address Management (IPAM) systems provide centralised visibility and control over IP address allocation across DNS and DHCP. Without IPAM, administrators track allocations through spreadsheets, DHCP server logs, and institutional memory. This approach fails at scale: addresses get assigned to multiple purposes, documentation drifts from reality, and troubleshooting requires checking multiple sources.
IPAM systems maintain authoritative records of all IP allocations, whether assigned statically, through DHCP, or reserved for future use. Integration with DHCP servers allows IPAM to reflect dynamic assignments in real time. Integration with DNS enables automated record creation when DHCP assigns an address, ensuring forward and reverse DNS records match DHCP leases.
Dynamic DNS (DDNS) updates create DNS records automatically when DHCP assigns addresses. When a client named workstation42 receives address 10.0.1.105, the DHCP server instructs the DNS server to create an A record mapping workstation42.internal.example.org to 10.0.1.105 and a PTR record mapping 10.0.1.105 to workstation42.internal.example.org. When the lease expires or the client releases the address, the corresponding DNS records are removed.
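The forward/reverse record pair can be derived mechanically from the hostname and address. A sketch using Python's `ipaddress` module, with the zone name taken from this document's examples:

```python
import ipaddress

def ddns_records(hostname: str, ip: str, zone: str = "internal.example.org"):
    """Forward (A) and reverse (PTR) record pair a DHCP server would register."""
    fqdn = f"{hostname}.{zone}"
    # reverse_pointer yields the in-addr.arpa name, e.g. 105.1.0.10.in-addr.arpa
    ptr_name = ipaddress.ip_address(ip).reverse_pointer
    return {"A": (fqdn, ip), "PTR": (ptr_name, fqdn)}

records = ddns_records("workstation42", "10.0.1.105")
```

The octet reversal in the PTR owner name is the part most often hand-crafted incorrectly, which is one reason to let the DHCP server generate both records together.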
DDNS requires security measures to prevent malicious clients from registering arbitrary DNS records. DHCP servers authenticate to DNS servers using TSIG (Transaction Signature) keys, proving they are authorised to update records. Clients cannot register records directly; only the DHCP server performs updates. This prevents a compromised device from registering DNS records pointing to itself under a trusted name.
| IPAM capability | Operational benefit |
|---|---|
| Centralised allocation tracking | Single source of truth eliminates conflicting assignments |
| DHCP lease visibility | Real-time view of dynamic address usage |
| DNS record automation | Eliminates manual record management for dynamic hosts |
| Subnet utilisation reporting | Identifies exhaustion before it causes outages |
| Audit trail | Records who allocated what and when |
| Conflict detection | Prevents duplicate assignments across systems |
Cloud DNS integration
Cloud platforms provide managed DNS services that eliminate operational burden for authoritative hosting and resolution. Amazon Route 53, Azure DNS, Google Cloud DNS, and Cloudflare each offer authoritative DNS with global anycast distribution, meaning queries reach the nearest point of presence rather than travelling to a single location. Managed DNS services handle scaling, redundancy, and geographic distribution without requiring organisational infrastructure.
Hybrid DNS architectures integrate cloud and on-premises DNS. A common pattern hosts external authoritative DNS in a cloud service while maintaining internal DNS on-premises. The external zone in Route 53 or Azure DNS contains public records, benefiting from global anycast and high availability. Internal zones remain on organisational resolvers, keeping private records within the network boundary.
Cloud resolver services such as Cloudflare Gateway, Cisco Umbrella, and Google Cloud DNS provide DNS resolution with integrated security filtering. These services operate as recursive resolvers that the organisation configures as upstream forwarders. Internal resolvers send queries to the cloud service, which applies filtering policies before resolving and returning answers. This approach adds DNS-layer security without requiring on-premises filtering infrastructure.
Conditional forwarding directs queries for specific domains to designated servers. An organisation using Azure for cloud workloads configures its internal resolvers to forward queries for *.azure.internal to Azure’s private DNS resolvers. Queries for all other domains follow normal resolution paths. This pattern enables seamless resolution of cloud-hosted private endpoints without exposing them to general DNS infrastructure.
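Conditional forwarding is a longest-suffix match over a forwarder table. A sketch of that match; the `azure.internal` suffix is this document's example, and 168.63.129.16 is the well-known Azure-provided resolver address (hedged here as an assumption about the target environment):

```python
# Hypothetical forwarding table: longest matching suffix selects the upstream
FORWARDERS = {
    "azure.internal": "168.63.129.16",   # Azure-provided resolver
    "":               "10.0.1.10",       # default: normal internal resolution
}

def pick_forwarder(qname: str) -> str:
    """Walk from the full name to shorter suffixes until one matches."""
    labels = qname.rstrip(".").split(".")
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in FORWARDERS:
            return FORWARDERS[suffix]
    return FORWARDERS[""]                # no specific rule: default path
```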
Field office deployment
Field offices present unique DNS and DHCP challenges. Intermittent connectivity to headquarters disrupts centralised services. Latency over satellite links (600-800ms round trip for GEO satellites) makes every DNS query noticeable. Limited infrastructure capacity rules out dedicated servers for small sites.
Local caching resolvers address latency concerns. A caching resolver at the field office answers repeated queries from cache without traversing the WAN link. The first query for mail.example.org incurs the full latency to resolve, but subsequent queries return in under 1ms from local cache. With appropriate TTL values, 90% or more of queries can be served from cache during normal operations.
For sites with fewer than 30 devices, a single multi-function device can provide DNS caching, DHCP, and routing. OpenWrt-based routers or lightweight Linux servers running dnsmasq handle both services with minimal resources. This approach suits locations where dedicated infrastructure cannot be justified.
Larger field offices benefit from local DNS and DHCP servers that operate independently during WAN outages. These servers hold copies of internal zones (obtained through zone transfers from headquarters servers) and serve local DHCP scopes. During connectivity loss, workstations continue resolving internal names and obtaining addresses. When connectivity restores, zone transfers synchronise any changes from headquarters.
Lease duration at field offices should exceed typical WAN outage duration. If satellite links fail for 24 hours during weather events, devices with 8-hour leases lose network configuration before connectivity returns. Lease durations of 72-168 hours (3-7 days) at field offices tolerate extended outages while still reclaiming addresses within reasonable timeframes.
| Site size | DNS approach | DHCP approach |
|---|---|---|
| 1-10 devices | Router-based caching (dnsmasq) | Router-based DHCP |
| 10-30 devices | Lightweight server (Pi-hole, dnsmasq) | Same server, single scope |
| 30-100 devices | Dedicated DNS server with zone transfer | Dedicated DHCP with failover |
| 100+ devices | Full DNS infrastructure, multiple servers | HA DHCP pair |
Technology options
DNS and DHCP infrastructure can use open source software, commercial products, or cloud services. The choice depends on operational capacity, feature requirements, and integration needs.
BIND (Berkeley Internet Name Domain) is the reference implementation for authoritative and recursive DNS. It runs on any Unix-like system and handles the most complex DNS configurations. BIND requires significant expertise to configure and maintain securely. Organisations with dedicated DNS administrators and complex requirements benefit from BIND’s flexibility. Those without DNS specialists find it challenging to operate.
Unbound provides recursive resolution with a focus on security and performance. It validates DNSSEC by default, supports DoH and DoT, and operates efficiently on minimal hardware. Unbound does not serve authoritative zones, so it pairs with BIND, PowerDNS, or NSD for complete DNS infrastructure. For organisations prioritising resolver security over authoritative hosting complexity, Unbound offers a focused solution.
PowerDNS separates authoritative and recursive functions into distinct products. PowerDNS Authoritative Server supports database backends (MySQL, PostgreSQL) that simplify integration with provisioning systems. PowerDNS Recursor handles resolution with modern security features. The database-backed architecture suits organisations that want to manage DNS records through applications rather than zone files.
Dnsmasq combines DNS forwarding, caching, and DHCP in a lightweight daemon suitable for small deployments. A single dnsmasq instance on a Raspberry Pi can serve DNS and DHCP for 50 devices. Dnsmasq lacks features required for large or complex environments but excels where simplicity and minimal resource consumption matter.
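A single dnsmasq configuration covering both roles might look like the following. The directive names are real dnsmasq options; the addresses reuse this document's illustrative values:

```conf
# dnsmasq for a small office: DNS forwarding/caching plus DHCP for one subnet
domain-needed                          # never forward plain hostnames upstream
bogus-priv                             # never forward private-range reverse lookups
cache-size=1000
server=9.9.9.9                         # upstream resolver (Quad9, malware blocking)
local=/internal.example.org/           # answer the internal zone locally only

dhcp-range=10.0.1.100,10.0.1.250,24h   # dynamic pool with 24-hour leases
dhcp-option=option:router,10.0.1.1
dhcp-option=option:dns-server,10.0.1.10
dhcp-host=aa:bb:cc:dd:ee:ff,10.0.1.120 # reservation bound to a MAC address
```

The `local=` line enforces the internal/external boundary described earlier: queries for the private zone are never forwarded upstream.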
ISC DHCP (also known as dhcpd) has been the standard open source DHCP server for decades. It supports complex configurations including failover, conditional options, and extensive customisation. ISC announced end-of-life for ISC DHCP in 2022, with maintenance ending in 2025. New deployments should consider alternatives.
Kea is ISC’s replacement for ISC DHCP. It stores leases in databases (MySQL, PostgreSQL, or Cassandra) rather than files, supports high-availability hooks, and provides a RESTful API for integration. Kea represents the future of open source DHCP but requires more operational sophistication than its predecessor.
Microsoft DHCP and DNS integrate with Active Directory and provide familiar management interfaces for Windows-centric environments. AD-integrated zones replicate DNS records through AD replication, eliminating zone transfer configuration. For organisations standardised on Windows Server, the built-in services reduce operational complexity.
Cloud-managed DNS services eliminate operational burden for authoritative hosting. Route 53, Azure DNS, Google Cloud DNS, and Cloudflare provide global availability, automatic scaling, and integrated health checking. These services work well for external DNS and for organisations preferring managed services over self-hosted infrastructure.
Implementation considerations
For organisations with limited IT capacity
Start with router-based DNS and DHCP using devices that support dnsmasq or equivalent functionality. Most enterprise-grade routers and many prosumer devices provide adequate DNS caching and DHCP for small offices. This approach requires no additional infrastructure and integrates DNS and DHCP configuration with network management.
Use cloud DNS for external authoritative hosting. Services like Cloudflare (free tier available) or Route 53 eliminate the need to maintain externally accessible DNS infrastructure. Configure the domain registrar to delegate to the cloud DNS provider, then manage records through the provider’s interface or API.
For internal DNS, configure the router’s DNS to forward queries to cloud resolvers with filtering (Cloudflare Gateway free tier provides basic filtering, or Quad9 provides malware blocking). This adds DNS-layer security without requiring on-premises filtering infrastructure.
Document DHCP scopes and static assignments in a shared spreadsheet if dedicated IPAM is unavailable. This documentation becomes critical when troubleshooting address conflicts or planning network changes.
For organisations with established IT functions
Deploy dedicated DNS and DHCP infrastructure with high availability. Use Unbound or PowerDNS Recursor for internal resolution with DNSSEC validation enabled. Host internal authoritative zones on PowerDNS or BIND with database backends to enable integration with provisioning systems.
Implement IPAM to maintain authoritative records of all IP allocations. Open source options include NetBox (primarily IPAM and data centre management) and phpIPAM. Commercial options provide additional features for enterprise environments.
Enable DNSSEC signing for external zones. Cloud DNS providers handle key management automatically. Self-hosted authoritative servers require key generation, signing configuration, and DS record publication in the parent zone.
Configure conditional forwarding for cloud-hosted resources. Azure private endpoints, AWS PrivateLink, and GCP Private Service Connect all require DNS configuration to resolve private names. Establish forwarding rules before deploying cloud resources that depend on them.
For field offices, deploy local caching resolvers sized to the site’s needs. Establish zone transfer for internal zones to enable continued resolution during WAN outages. Set DHCP lease durations to exceed expected outage durations.
Integration with existing systems
Active Directory environments should evaluate whether to use AD-integrated DNS or maintain separate DNS infrastructure. AD-integrated DNS simplifies replication for AD-joined systems but may not serve non-Windows systems optimally. Hybrid approaches use AD DNS for AD-specific zones (_msdcs.domain.local, _sites.domain.local) while maintaining separate DNS for general resolution.
Organisations migrating to cloud identity (Entra ID, Okta, Google Workspace) may reduce or eliminate AD DNS requirements. Plan DNS architecture transitions alongside identity platform changes.
SIEM integration should capture DNS query logs for security analysis. High query volumes (thousands per second in large organisations) require log aggregation strategies that balance visibility with storage costs. Sampling or filtering to security-relevant queries may be necessary.