Virtualisation
Virtualisation creates abstracted computing environments that operate independently of underlying physical hardware. A single physical server running virtualisation software hosts multiple isolated operating systems, each believing it controls dedicated hardware. This abstraction transforms capital-intensive physical infrastructure into flexible, software-defined resources that can be provisioned, moved, and recovered without hardware changes.
- Hypervisor
- Software that creates and manages virtual machines by mediating access between guest operating systems and physical hardware. The hypervisor allocates CPU cycles, memory, storage, and network access to each virtual machine according to configured limits.
- Virtual machine (VM)
- A software-based computer that runs an operating system and applications in isolation from other virtual machines on the same physical host. Each VM has virtualised CPU, memory, storage, and network interfaces.
- Guest operating system
- The operating system installed within a virtual machine, which interacts with virtualised hardware presented by the hypervisor rather than physical components.
- Host
- The physical server running hypervisor software. A host provides the CPU, memory, storage, and network resources that the hypervisor divides among virtual machines.
Hypervisor architecture
Hypervisors fall into two architectural categories distinguished by their relationship to the underlying operating system. Type 1 hypervisors run directly on physical hardware without an intervening operating system. The hypervisor itself functions as a minimal operating system whose sole purpose is managing virtual machines. Type 1 hypervisors access hardware through their own drivers and control the physical CPU scheduler, memory manager, and device interfaces. This direct hardware access yields lower overhead and higher performance than Type 2 alternatives.
Type 2 hypervisors run as applications within a conventional operating system. The host operating system manages hardware, and the hypervisor requests resources through standard operating system interfaces. Virtual machines therefore experience two layers of abstraction: the hypervisor virtualises hardware, and the host operating system mediates actual hardware access. This additional layer introduces performance overhead but simplifies deployment because the hypervisor installs like any other application.
```
+------------------------------------------------------------------+
|                         TYPE 1 HYPERVISOR                        |
+------------------------------------------------------------------+
|                                                                  |
|  +------------+  +------------+  +------------+  +------------+  |
|  |    VM 1    |  |    VM 2    |  |    VM 3    |  |    VM 4    |  |
|  |  (Windows) |  |  (Ubuntu)  |  |   (RHEL)   |  |  (Debian)  |  |
|  +------------+  +------------+  +------------+  +------------+  |
|        |               |               |               |         |
+--------+---------------+---------------+---------------+---------+
|                                                                  |
|                            HYPERVISOR                            |
|             (KVM, Proxmox VE, VMware ESXi, Hyper-V)              |
|                                                                  |
+------------------------------------------------------------------+
|                                                                  |
|                        PHYSICAL HARDWARE                         |
|                  (CPU, Memory, Storage, Network)                 |
|                                                                  |
+------------------------------------------------------------------+
```
Figure 1: Type 1 hypervisor architecture with direct hardware access
```
+------------------------------------------------------------------+
|                         TYPE 2 HYPERVISOR                        |
+------------------------------------------------------------------+
|                                                                  |
|     +------------+      +------------+      +------------+       |
|     |    VM 1    |      |    VM 2    |      |    VM 3    |       |
|     |  (Ubuntu)  |      |  (Windows) |      |  (Fedora)  |       |
|     +------------+      +------------+      +------------+       |
|           |                   |                   |              |
+-----------+-------------------+-------------------+--------------+
|                                                                  |
|                            HYPERVISOR                            |
|                 (VirtualBox, VMware Workstation)                 |
|                                                                  |
+------------------------------------------------------------------+
|                                                                  |
|                      HOST OPERATING SYSTEM                       |
|                      (Windows, macOS, Linux)                     |
|                                                                  |
+------------------------------------------------------------------+
|                                                                  |
|                        PHYSICAL HARDWARE                         |
|                  (CPU, Memory, Storage, Network)                 |
|                                                                  |
+------------------------------------------------------------------+
```
Figure 2: Type 2 hypervisor running within a host operating system
The performance difference between types varies by workload. CPU-intensive tasks show 2-5% overhead on Type 1 hypervisors and 5-15% on Type 2. I/O-intensive workloads reveal larger gaps: Type 1 hypervisors with paravirtualised drivers achieve near-native disk and network performance, while Type 2 hypervisors impose 10-30% overhead from the additional operating system layer. For production workloads requiring predictable performance, Type 1 hypervisors are the standard choice. Type 2 hypervisors serve development, testing, and desktop virtualisation where management simplicity outweighs raw performance.
Paravirtualisation improves performance by replacing hardware emulation with a cooperative interface between guest and hypervisor. Rather than the hypervisor emulating physical disk controllers and network cards, paravirtualised guests use drivers that communicate directly with the hypervisor through an optimised channel. Linux guests use virtio drivers; Windows guests require vendor-specific paravirtualised drivers. Paravirtualisation reduces CPU overhead for I/O operations from 15-25% to 2-5%.
Hardware virtualisation support
Modern CPUs include hardware extensions that accelerate virtualisation. Intel VT-x and AMD-V provide hardware support for CPU virtualisation, eliminating the need for binary translation of privileged instructions. These extensions create a new processor mode specifically for hypervisor operation, allowing clean separation between host and guest execution contexts without software emulation overhead.
Extended page tables (Intel EPT) and nested page tables (AMD NPT) accelerate memory virtualisation. Without hardware support, the hypervisor must maintain shadow page tables that translate guest virtual addresses to host physical addresses, intercepting every guest page table modification. Hardware-assisted nested paging allows the CPU to perform two-level address translation directly, reducing memory management overhead from 20-40% to under 5%.
SR-IOV (Single Root I/O Virtualisation) extends hardware acceleration to network and storage devices. An SR-IOV-capable network card presents itself as multiple independent virtual functions, each assignable directly to a virtual machine. The guest accesses its virtual function without hypervisor intervention, achieving network throughput within 2-3% of bare metal. SR-IOV requires compatible hardware, hypervisor support, and driver support in the guest operating system.
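On a Linux host, virtual functions are created through the kernel's standard sysfs interface. The following is a minimal sketch; the NIC name and VF count passed to the function are illustrative, and the writes require root on SR-IOV-capable hardware.

```shell
# Sketch: enabling SR-IOV virtual functions via sysfs (standard kernel
# interface; the interface name and VF count are caller-supplied examples).

enable_sriov_vfs() {
    nic=$1; count=$2
    # sriov_totalvfs reports the maximum VFs the card supports
    max=$(cat "/sys/class/net/${nic}/device/sriov_totalvfs") || return 1
    if [ "$count" -gt "$max" ]; then
        echo "${nic} supports at most ${max} virtual functions" >&2
        return 1
    fi
    # Writing to sriov_numvfs creates the VFs, which then appear as
    # independent PCI devices assignable to VMs via passthrough
    echo "$count" > "/sys/class/net/${nic}/device/sriov_numvfs"
}
```

Each created VF appears as its own PCI device (visible with `lspci`) and can be assigned directly to a guest.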
Before deploying virtualisation, verify hardware support:
```shell
# Linux: check CPU virtualisation extensions
grep -E '(vmx|svm)' /proc/cpuinfo
# vmx indicates Intel VT-x
# svm indicates AMD-V

# Check whether extensions are enabled
lscpu | grep Virtualization

# Verify IOMMU support for device passthrough
dmesg | grep -i iommu
```
If these commands return no output, virtualisation extensions are either unsupported or disabled in BIOS/UEFI firmware settings.
Resource allocation
The hypervisor allocates four primary resources to virtual machines: CPU, memory, storage, and network bandwidth. Each resource type has distinct allocation mechanisms and overcommitment characteristics.
CPU allocation
Virtual CPUs (vCPUs) represent CPU resources available to a virtual machine. A VM configured with 4 vCPUs can execute up to 4 threads simultaneously. The hypervisor scheduler maps vCPU execution onto physical CPU cores, time-slicing physical resources among competing virtual machines.
The relationship between vCPUs and physical cores determines performance characteristics. A host with 16 physical cores can run virtual machines with a combined vCPU count exceeding 16, a practice called CPU overcommitment. The hypervisor scheduler manages contention by allocating time slices to each vCPU. Moderate overcommitment (1.5:1 to 3:1 vCPU to physical core ratio) works well for workloads that do not continuously demand CPU time. Database servers, web applications, and file servers spend significant time waiting for I/O, leaving CPU cycles available for other VMs.
CPU-intensive workloads require more conservative ratios. A host running four VMs that each require sustained 100% CPU utilisation cannot effectively overcommit; the VMs compete for limited physical cycles, and response times increase unpredictably. For latency-sensitive applications, configure 1:1 vCPU to physical core ratios and use CPU pinning to bind specific vCPUs to specific physical cores, eliminating scheduler-induced latency variation.
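The ratio guidance above reduces to simple arithmetic. A minimal sketch, with illustrative inputs (on a libvirt host the totals could be derived from `virsh` and `nproc`):

```shell
# Sketch: compute the vCPU-to-physical-core overcommitment ratio.
# Arguments are the total vCPUs allocated across all VMs and the host's
# physical core count.

overcommit_ratio() {
    total_vcpus=$1
    physical_cores=$2
    awk -v v="$total_vcpus" -v c="$physical_cores" \
        'BEGIN { printf "%.2f\n", v / c }'
}

overcommit_ratio 24 16   # prints 1.50, within the moderate 1.5:1 to 3:1 band
overcommit_ratio 16 16   # prints 1.00, the conservative latency-sensitive ratio
```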
CPU reservations guarantee minimum CPU capacity for critical VMs. A reservation of 2 GHz ensures the VM receives at least 2 GHz of CPU time regardless of contention from other VMs. CPU limits cap maximum consumption, preventing a single VM from monopolising host resources. A VM limited to 4 GHz cannot exceed that allocation even when the host has idle capacity.
Memory allocation
Memory allocation differs fundamentally from CPU allocation because memory cannot be time-sliced. A byte of RAM assigned to one VM is unavailable to others until explicitly released. This constraint makes memory overcommitment more complex than CPU overcommitment.
Static allocation assigns fixed memory quantities to each VM at boot. A VM configured with 8 GB receives 8 GB from the host’s physical memory pool. Static allocation is predictable but inflexible: memory assigned to idle VMs cannot serve active workloads.
Dynamic memory (Microsoft terminology) or memory ballooning (VMware/KVM terminology) allows the hypervisor to reclaim memory from idle VMs. A balloon driver installed in the guest operating system inflates or deflates on hypervisor command. When the hypervisor needs memory, it instructs the balloon driver to allocate memory within the guest, forcing the guest operating system to page less-used data to disk. The hypervisor then reclaims the balloon-allocated pages for other VMs. When memory pressure decreases, the balloon deflates, returning memory to the guest.
Memory overcommitment through ballooning carries risks. If all VMs simultaneously demand their configured memory, the hypervisor cannot satisfy all requests from physical RAM. Performance degrades severely as the hypervisor swaps VM memory to disk. For production workloads, limit memory overcommitment to 1.2:1 to 1.5:1 and monitor balloon activity as a leading indicator of memory pressure.
Transparent page sharing (TPS) identifies identical memory pages across VMs and stores only one copy, mapping all references to the shared page. When a VM modifies a shared page, the hypervisor creates a private copy (copy-on-write). TPS provides significant memory savings when running multiple VMs with the same operating system and applications. Ten Windows Server VMs share substantial portions of their operating system memory, potentially reducing effective memory consumption by 20-40%.
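The effect of page sharing on effective memory consumption can be estimated as follows; the function name and the 30% figure in the example are illustrative, drawn from the 20-40% range above.

```shell
# Sketch: estimate effective memory use across identical VMs given a
# page-sharing percentage (inputs are illustrative).

effective_memory_gb() {
    vms=$1; per_vm_gb=$2; sharing_pct=$3
    awk -v n="$vms" -v m="$per_vm_gb" -v s="$sharing_pct" \
        'BEGIN { printf "%.0f\n", n * m * (1 - s / 100) }'
}

effective_memory_gb 10 8 30   # prints 56: ten 8 GB VMs at 30% sharing
```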
Storage allocation
Virtual disks present storage to VMs as virtual hard drives. The two primary allocation strategies are thick provisioning and thin provisioning.
Thick provisioning allocates the full configured disk size immediately. A 100 GB virtual disk consumes 100 GB of storage from creation, regardless of how much data the guest actually stores. Thick provisioning guarantees capacity availability and eliminates fragmentation but wastes storage when VMs do not use their full allocation.
Thin provisioning allocates storage on demand. A 100 GB thin-provisioned disk initially consumes only the space required by written data. As the guest writes data, the virtual disk file grows. Thin provisioning enables significant storage overcommitment: a host with 1 TB of storage can present 3 TB of thin-provisioned capacity to VMs, relying on the assumption that not all VMs will fill their disks simultaneously.
Thin provisioning requires monitoring to prevent exhaustion. When physical storage fills, all VMs writing to that storage experience I/O failures. Configure alerts at 70%, 80%, and 90% physical utilisation. At 70%, plan capacity expansion. At 80%, defer new VM provisioning. At 90%, treat as a critical incident requiring immediate attention.
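The alert tiers above can be sketched as a small monitoring helper. The thresholds mirror the text; the datastore path is an assumption (it is Proxmox VE's default local storage location) and should be replaced with the filesystem backing your thin-provisioned datastore.

```shell
# Sketch: map physical datastore utilisation to the alert tiers above.

storage_alert_level() {
    used_pct=$1
    if   [ "$used_pct" -ge 90 ]; then echo "critical: immediate attention"
    elif [ "$used_pct" -ge 80 ]; then echo "warning: defer new VM provisioning"
    elif [ "$used_pct" -ge 70 ]; then echo "notice: plan capacity expansion"
    else echo "ok"
    fi
}

# Example: check the filesystem backing a datastore (path is an assumption)
used=$(df --output=pcent /var/lib/vz 2>/dev/null | tail -1 | tr -dc '0-9')
if [ -n "$used" ]; then storage_alert_level "$used"; fi
```

Run from cron or a monitoring agent, the function's output can drive email or ticketing alerts at each tier.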
Network allocation
Virtual network interfaces connect VMs to virtual switches managed by the hypervisor. Traffic shaping controls bandwidth allocation per VM or per virtual switch port. A VM limited to 100 Mbps cannot exceed that rate regardless of physical network capacity.
Network I/O Control (VMware) and similar mechanisms in other hypervisors allocate network bandwidth proportionally during contention. A VM with 20 shares receives twice the bandwidth of a VM with 10 shares when the physical network is saturated. When capacity is available, both VMs can use full network speed.
Virtual networking
Virtual switches within the hypervisor connect VM network interfaces to physical networks and to each other. The hypervisor’s virtual switch performs Layer 2 forwarding based on MAC addresses, directing traffic between VMs on the same host without traversing physical networks.
```
+------------------------------------------------------------------+
|                         HYPERVISOR HOST                          |
+------------------------------------------------------------------+
|                                                                  |
|  +----------+    +----------+    +----------+    +----------+    |
|  |   VM 1   |    |   VM 2   |    |   VM 3   |    |   VM 4   |    |
|  |   vNIC   |    |   vNIC   |    |   vNIC   |    |   vNIC   |    |
|  +----+-----+    +----+-----+    +----+-----+    +----+-----+    |
|       |               |               |               |          |
|  +----+---------------+---------------+---------------+----+     |
|  |                                                         |     |
|  |                VIRTUAL SWITCH (vSwitch)                 |     |
|  |                                                         |     |
|  |    VLAN 10       VLAN 20       VLAN 10       VLAN 20    |     |
|  |                                                         |     |
|  +----------------------------+----------------------------+     |
|                               |                                  |
+-------------------------------+----------------------------------+
                                |
                 +--------------+--------------+
                 |        PHYSICAL NICs        |
                 |       (bonded/teamed)       |
                 +--------------+--------------+
                                |
                                v
                        Physical Network
```
Figure 3: Virtual switch connecting VMs to VLANs and physical network
Port groups define network configurations applied to VM interfaces. A port group specifies VLAN tagging, security policies, and traffic shaping. VMs connected to the same port group share network configuration and can communicate at Layer 2. Port groups simplify network management: changing a port group’s VLAN assignment affects all connected VMs without individual reconfiguration.
VLAN tagging at the virtual switch extends physical network segmentation into the virtual environment. The hypervisor adds 802.1Q VLAN tags to frames leaving VMs and strips tags from incoming frames. A single physical uplink can carry traffic for multiple VLANs, with the virtual switch directing each VM’s traffic to the appropriate VLAN.
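The tagging a virtual switch performs automatically can be reproduced manually with Linux networking tools, which is useful for understanding what the hypervisor does under the hood. A sketch follows; the uplink name and VLAN ID are illustrative, and the commands require root on a host using Linux bridged networking.

```shell
# Sketch: create an 802.1Q-tagged subinterface and a bridge for one VLAN,
# mirroring what a VLAN-aware virtual switch does (names are illustrative).

attach_vlan_bridge() {
    uplink=$1; vlan_id=$2
    bridge="br${vlan_id}"
    # Tagged subinterface: frames leaving it carry the 802.1Q VLAN tag
    ip link add link "$uplink" name "${uplink}.${vlan_id}" type vlan id "$vlan_id"
    # Bridge that VM vNICs for this VLAN attach to
    ip link add name "$bridge" type bridge
    ip link set "${uplink}.${vlan_id}" master "$bridge"
    ip link set "${uplink}.${vlan_id}" up
    ip link set "$bridge" up
}
```

A single uplink carries all VLANs as tagged frames; each bridge corresponds to one port group in hypervisor terms.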
Virtual switches support three isolation modes:
External network mode connects VMs to physical networks through the host’s network adapters. VMs communicate with external systems and with VMs on other hosts.
Internal network mode connects VMs on the same host to each other and to the host operating system, but not to external networks. This mode suits test environments requiring isolation from production networks.
Private network mode connects VMs only to each other, excluding the host operating system. Private networks are completely isolated, useful for network simulations or security testing.
Distributed virtual switches extend virtual switching across multiple hosts, presenting a single logical switch with consistent configuration. VM migration between hosts maintains network connectivity because the distributed switch exists on both source and destination hosts. Distributed switches simplify management in larger deployments but require compatible hypervisor licensing and add complexity.
Storage integration
Virtual machines access storage through several mechanisms, each offering different performance and flexibility tradeoffs.
Local storage uses disks directly attached to the hypervisor host. Virtual disk files reside on the host’s local filesystem. Local storage offers low latency and simplicity but prevents VM migration between hosts because the destination host cannot access the source host’s local disks.
Shared storage through network-attached storage (NAS) or storage area networks (SAN) enables multiple hosts to access the same storage pool. Virtual disk files on shared storage are accessible from any connected host, enabling live migration and high availability features. NAS provides file-level access over NFS or SMB protocols. SAN provides block-level access over iSCSI, Fibre Channel, or NVMe-oF protocols.
```
+------------------------------------------------------------------+
|                       SHARED STORAGE MODEL                       |
+------------------------------------------------------------------+
|                                                                  |
|  +---------------+    +---------------+    +---------------+     |
|  |    Host A     |    |    Host B     |    |    Host C     |     |
|  |  Hypervisor   |    |  Hypervisor   |    |  Hypervisor   |     |
|  +-------+-------+    +-------+-------+    +-------+-------+     |
|          |                    |                    |             |
|          |    Storage Network (iSCSI, NFS, FC)    |              |
|          +--------------------+--------------------+             |
|                               |                                  |
|                    +----------v----------+                       |
|                    |                     |                       |
|                    |   SHARED STORAGE    |                       |
|                    |    (NAS or SAN)     |                       |
|                    |                     |                       |
|                    |  +--------------+   |                       |
|                    |  |   VM Disk    |   |                       |
|                    |  |    Files     |   |                       |
|                    |  +--------------+   |                       |
|                    |                     |                       |
|                    +---------------------+                       |
|                                                                  |
+------------------------------------------------------------------+
```
Figure 4: Shared storage enabling multi-host access to VM disks
Software-defined storage aggregates local storage from multiple hosts into a distributed storage pool. Each host contributes local disks to the cluster; the software-defined storage layer presents this aggregated capacity as a single datastore. Data replication across hosts provides redundancy without dedicated storage hardware. Software-defined storage reduces capital costs but requires sufficient hosts (minimum three for meaningful redundancy) and consumes CPU and network resources on hypervisor hosts.
Storage protocol selection affects performance characteristics:
| Protocol | Latency | Throughput | Complexity | Use case |
|---|---|---|---|---|
| Local NVMe | 50-100 μs | 3-7 GB/s | Lowest | Performance-critical, single host |
| iSCSI | 200-500 μs | 1-10 Gb/s | Medium | General purpose shared storage |
| NFS | 200-500 μs | 1-10 Gb/s | Low | File-based workloads, ease of management |
| Fibre Channel | 100-200 μs | 16-32 Gb/s | High | High-performance shared storage |
| NVMe-oF | 100-150 μs | 25-100 Gb/s | High | Latency-sensitive applications |
For organisations without existing SAN infrastructure, iSCSI or NFS over 10 Gb Ethernet provides shared storage capability without Fibre Channel investment. Dedicated storage networks (separate from VM traffic) prevent storage I/O from competing with application network traffic.
High availability
Virtual machine high availability protects against host failures by automatically restarting VMs on surviving hosts. When a host becomes unavailable, the HA cluster detects the failure and powers on affected VMs elsewhere. Recovery time depends on failure detection speed and VM boot time; 2-5 minutes is representative for detection plus restart.
HA clusters require shared storage: surviving hosts must access the failed host’s VM disk files to restart VMs. Without shared storage, HA cannot function because VM data is inaccessible after host failure.
```
+------------------------------------------------------------------+
|                    HIGH AVAILABILITY CLUSTER                     |
+------------------------------------------------------------------+
|                                                                  |
|  +---------------+    +---------------+    +---------------+     |
|  |    Host A     |    |    Host B     |    |    Host C     |     |
|  | +-----------+ |    | +-----------+ |    | +-----------+ |     |
|  | |   VM 1    | |    | |   VM 4    | |    | |   VM 7    | |     |
|  | +-----------+ |    | +-----------+ |    | +-----------+ |     |
|  | |   VM 2    | |    | |   VM 5    | |    | |   VM 8    | |     |
|  | +-----------+ |    | +-----------+ |    | +-----------+ |     |
|  | |   VM 3    | |    | |   VM 6    | |    |               |     |
|  | +-----------+ |    | +-----------+ |    |               |     |
|  +-------+-------+    +-------+-------+    +-------+-------+     |
|          |                    |                    |             |
|          +--------------------+--------------------+             |
|                               |                                  |
|                +--------------v--------------+                   |
|                |       SHARED STORAGE        |                   |
|                |  (accessible by all hosts)  |                   |
|                +-----------------------------+                   |
+------------------------------------------------------------------+

Host A failure scenario:

+------------------------------------------------------------------+
|                                                                  |
|  +---------------+    +---------------+    +---------------+     |
|  |    Host A     |    |    Host B     |    |    Host C     |     |
|  |    FAILED     |    | +-----------+ |    | +-----------+ |     |
|  |       X       |    | |   VM 4    | |    | |   VM 7    | |     |
|  |               |    | +-----------+ |    | +-----------+ |     |
|  |               |    | |   VM 5    | |    | |   VM 8    | |     |
|  |               |    | +-----------+ |    | +-----------+ |     |
|  |               |    | |   VM 6    | |    | |  VM 1 *   | |     |
|  |               |    | +-----------+ |    | +-----------+ |     |
|  |               |    | |  VM 2 *   | |    | |  VM 3 *   | |     |
|  |               |    | +-----------+ |    | +-----------+ |     |
|  +---------------+    +-------+-------+    +-------+-------+     |
|                               |                    |             |
|  * = VMs restarted            +---------+----------+             |
|      after failure                      |                        |
|                +------------------------v----+                   |
|                |       SHARED STORAGE        |                   |
|                +-----------------------------+                   |
+------------------------------------------------------------------+
```
Figure 5: HA cluster redistributing VMs after host failure
Admission control prevents overcommitting cluster resources beyond the capacity to survive host failures. An admission control policy specifying one host failure tolerance reserves sufficient CPU and memory capacity to restart all VMs from one failed host on survivors. Without admission control, the cluster might run at capacity that leaves no room for failover, causing some VMs to remain offline after a failure.
Admission control calculation for a three-host cluster with one host failure tolerance: the cluster must reserve 33% of total resources for failover capacity. If each host has 64 GB RAM, the cluster’s 192 GB total includes 64 GB reserved, leaving 128 GB available for VM allocation.
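The worked example above generalises to any cluster size with one-host failure tolerance:

```shell
# Sketch: RAM available for VM allocation in an N-host cluster that must
# tolerate one host failure (hosts assumed identical, as in the text).

usable_cluster_ram() {
    hosts=$1
    ram_per_host_gb=$2
    total=$((hosts * ram_per_host_gb))
    reserved=$ram_per_host_gb   # one host's worth held back for failover
    echo $((total - reserved))
}

usable_cluster_ram 3 64   # prints 128: 192 GB total minus 64 GB reserved
```

The same arithmetic applies to CPU capacity; larger clusters reserve a smaller fraction (25% for four hosts, 20% for five) for the same one-host tolerance.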
VM restart priority determines the order in which VMs restart after a failure. Critical infrastructure VMs (domain controllers, DNS servers, databases) should restart before dependent application servers. Configuring restart priority ensures services become available in an order that respects dependencies.
Live migration
Live migration moves running VMs between hosts without service interruption. The source host transfers VM memory to the destination while the VM continues executing. Memory pages modified during the transfer are re-sent iteratively; once the remaining dirty data is small enough, the VM pauses briefly (20-200 milliseconds) while final state transfers and execution switches to the destination.
Live migration requires:
- Shared storage accessible from both hosts (or storage migration)
- Compatible CPU features between hosts
- Network connectivity between hosts for memory transfer
- Matching virtual switch configuration on the destination
Storage migration moves VM disk files between storage locations. When combined with live migration, a VM can move between hosts with different local storage, eliminating the shared storage requirement at the cost of longer migration time proportional to disk size.
Migration bandwidth affects completion time. Migrating a VM with 32 GB of actively-used memory over a 1 Gb/s link requires at least 256 seconds for initial memory transfer, plus additional time for retransmitting changed pages. A 10 Gb/s migration network reduces this to approximately 26 seconds. For VMs with high memory churn rates (databases with large buffer pools, in-memory caches), even 10 Gb/s migration links may struggle to converge.
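The transfer-time figures above come from straightforward unit conversion (gigabytes to gigabits), sketched here as a lower bound that ignores retransmission of dirtied pages:

```shell
# Sketch: lower-bound time for the initial memory copy of a live migration.
# Ignores page-dirtying retransmission, so real migrations take longer.

migration_seconds() {
    mem_gb=$1
    link_gbps=$2
    awk -v m="$mem_gb" -v l="$link_gbps" \
        'BEGIN { printf "%.1f\n", (m * 8) / l }'
}

migration_seconds 32 1    # prints 256.0: seconds on a 1 Gb/s link
migration_seconds 32 10   # prints 25.6: seconds on a 10 Gb/s link
```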
Backup and replication
VM-level backup captures virtual disk files and configuration from outside the guest operating system. The hypervisor quiesces the VM (pausing I/O momentarily), creates a snapshot, and backup software copies the snapshot data. This approach backs up entire VMs including operating system, applications, and data without requiring agents inside each guest.
Snapshots preserve VM state at a point in time. Creating a snapshot redirects new writes to a delta file while preserving the original disk. The VM continues running against the delta file. Reverting to the snapshot discards the delta, returning the VM to its snapshot state. Deleting a snapshot merges the delta back into the base disk.
Snapshots are not backups. Snapshot delta files grow with every write to the VM, consuming storage and degrading performance. Long-running snapshots (over 72 hours) create management problems: large deltas take hours to merge and risk corruption if interrupted. Use snapshots for short-term protection during maintenance, not for long-term data protection.
VM replication continuously copies VM state to a secondary site for disaster recovery. Asynchronous replication sends changes on a schedule (every 15 minutes, hourly), providing a recovery point at the last successful replication cycle. Synchronous replication mirrors every write to the secondary site before acknowledging to the VM, providing zero data loss at the cost of latency proportional to the distance between sites.
Replication RPO (recovery point objective) represents maximum data loss. Asynchronous replication with a 15-minute cycle has a 15-minute RPO: a failure could lose up to 15 minutes of changes made since the last replication. Synchronous replication achieves near-zero RPO but requires low-latency connectivity (under 5 milliseconds round-trip) between sites.
Technology options
Open source
Proxmox VE combines the KVM hypervisor with container support and a web management interface. Proxmox includes clustering, high availability, backup, and replication in the freely-available version. Commercial support subscriptions are available but not required for production use. Proxmox suits organisations wanting enterprise features without per-socket licensing costs.
KVM (Kernel-based Virtual Machine) is the Linux kernel’s native hypervisor. KVM converts Linux into a Type 1 hypervisor, with virtual machines managed through libvirt and QEMU. oVirt provides a web interface and management layer over KVM for larger deployments. KVM requires Linux administration skills; there is no vendor to call for support unless engaging a third-party provider.
XCP-ng is an open source fork of Citrix Hypervisor (formerly XenServer). XCP-ng provides enterprise features including live migration, high availability, and central management through Xen Orchestra. The project maintains feature parity with commercial Xen distributions while remaining freely available.
Commercial with nonprofit programmes
VMware vSphere is the most widely deployed commercial hypervisor, with extensive ecosystem integration and mature management tools. VMware offers discounted licensing through TechSoup and similar programmes in some regions. vSphere’s feature depth and stability suit organisations with existing VMware skills or complex requirements. Licensing costs per socket and per VM (depending on edition) create significant ongoing expense.
Microsoft Hyper-V integrates with Windows Server and is included in Windows Server licences. Organisations already purchasing Windows Server licensing receive Hyper-V at no additional cost. System Center Virtual Machine Manager provides advanced management for larger deployments. Hyper-V’s tight Windows integration benefits Windows-centric environments but offers less mature Linux guest support than competitors.
| Platform | Licence model | HA included | Management interface | Linux support |
|---|---|---|---|---|
| Proxmox VE | Open source, optional subscription | Yes | Web UI, CLI, API | Native (Debian-based) |
| KVM/oVirt | Open source | With oVirt | Web UI (oVirt), CLI | Native |
| XCP-ng | Open source | Yes | Xen Orchestra | Excellent |
| VMware vSphere | Per-socket + per-VM | Essentials Plus and above | vCenter, web client | Excellent |
| Hyper-V | Windows Server licence | Yes | Windows Admin Center, SCVMM | Good |
Implementation considerations
For organisations with limited IT capacity
A single-host Proxmox VE or Hyper-V deployment provides virtualisation benefits without clustering complexity. Running four to eight VMs on a single server improves resource utilisation and simplifies backup (VM-level snapshots) compared to equivalent physical servers. Without clustering, a host failure causes downtime until the host is repaired or VMs are restored from backup on replacement hardware. For many organisations, this recovery time is acceptable given the operational simplicity.
Minimum viable configuration: one server with 64 GB RAM, 8 CPU cores, and 2 TB SSD storage can run approximately 6-10 VMs depending on workload. Use local storage; defer shared storage investment until adding a second host. Configure VM-level backups to a NAS device or cloud storage. This configuration costs under $5,000 for hardware and incurs no licensing fees with Proxmox VE or Hyper-V on existing Windows Server licences.
For organisations with established IT functions
Three-host clusters provide meaningful high availability. Shared storage (iSCSI or NFS from a NAS device, or software-defined storage using local disks) enables live migration and automatic VM restart after host failure. Admission control should reserve capacity for one host failure.
Standardise on a single hypervisor platform to concentrate expertise. Mixed environments (some VMware, some Hyper-V, some KVM) fragment skills and complicate management. If VMware licensing costs are prohibitive, Proxmox VE provides equivalent functionality for clustered deployments.
Consider CPU generation compatibility when purchasing hosts. VMs can only live-migrate between hosts with compatible CPU features. Purchasing identical server models simplifies compatibility, but modern hypervisors can mask advanced CPU features to enable migration to older hosts at some performance cost.
Field deployment
Field offices with constrained bandwidth face challenges with centralised virtualisation management. A standalone hypervisor host at each field office avoids dependency on WAN connectivity for daily operations. Backup to headquarters occurs over the WAN during off-hours; VM-level replication to headquarters provides disaster recovery capability.
Power instability in field locations requires UPS protection with graceful VM shutdown. Configure the hypervisor to receive UPS alerts and shut down VMs cleanly before battery exhaustion. A 1500 VA UPS provides approximately 15-20 minutes of runtime for a single server, sufficient for controlled shutdown.
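A minimal sketch of the shutdown hook such a UPS monitor would invoke, assuming a libvirt/KVM host (other hypervisors expose equivalent CLI tools; the grace period is illustrative and should be shorter than the UPS runtime):

```shell
# Sketch: graceful VM shutdown on a libvirt host, intended to be triggered
# by a UPS monitor (e.g. NUT) on a low-battery event.

graceful_vm_shutdown() {
    grace=${1:-300}   # seconds to allow guests to shut down cleanly
    for vm in $(virsh list --name); do
        virsh shutdown "$vm"    # sends an ACPI shutdown request to the guest
    done
    sleep "$grace"
    # Force off any guest still running, as a last resort before host shutdown
    for vm in $(virsh list --name); do
        virsh destroy "$vm"
    done
}
```

Guests must have ACPI support (or a guest agent) enabled for `virsh shutdown` to take effect; without it, only the forced power-off path works.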
Hardware selection for field deployment should prioritise reliability over performance. Servers with redundant power supplies and error-correcting memory reduce the risk of failures in locations where replacement takes days or weeks. Industrial-grade or ruggedised servers suit environments with temperature or dust concerns.
Migration from physical servers
Physical-to-virtual (P2V) conversion migrates existing physical servers to VMs. Conversion tools create virtual disk images from physical server drives, allowing the server to boot as a VM. P2V migration preserves applications and configuration but may require driver updates for virtualised hardware.
Migration sequence:
- Inventory physical servers, documenting CPU, memory, storage, and network requirements
- Provision hypervisor hosts with capacity for converted workloads plus 30% growth
- Convert less-critical systems first (file servers, print servers) to build operational experience
- Test converted VMs thoroughly before decommissioning physical hardware
- Convert critical systems during maintenance windows with rollback plans
- Retain physical servers for 30 days after conversion as rollback option
Size VM resources based on actual physical server utilisation, not physical hardware specifications. A physical server with 64 GB RAM using only 16 GB should become a VM with 16-24 GB RAM. Virtualisation provides an opportunity to right-size resources.
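The right-sizing guideline above (16 GB used becomes a 16-24 GB VM, i.e. up to 50% headroom over observed usage) can be sketched as:

```shell
# Sketch: recommend a VM RAM range from observed physical-server usage,
# allowing up to 50% headroom as in the example above.

rightsize_ram_gb() {
    used_gb=$1
    low=$used_gb
    high=$((used_gb + used_gb / 2))
    echo "${low}-${high} GB"
}

rightsize_ram_gb 16   # prints 16-24 GB
```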