Backup Recovery
Backup recovery restores data and systems from protected copies when primary data becomes unavailable, corrupted, or compromised. This playbook covers file-level restoration, full system recovery, database point-in-time recovery, and immutable backup access for ransomware scenarios. Recovery operations range from single-file requests completed in minutes to full infrastructure rebuilds spanning days.
Activation criteria
Invoke this playbook when any of the following conditions exist:
| Indicator | Activation threshold |
|---|---|
| Data loss confirmed | Any production data deleted or inaccessible |
| Data corruption detected | File integrity check failures, database consistency errors |
| System unrecoverable | Operating system or application will not start after troubleshooting |
| Ransomware encryption | Files encrypted (coordinate with Ransomware Response playbook) |
| Recovery request | Authorised request for data from specific point in time |
| Failed migration rollback | System migration failed, rollback to backup required |
Ransomware coordination
If recovery follows ransomware detection, execute this playbook only after containment is confirmed. The Ransomware Response playbook must clear the environment before restoration begins. Restoring to compromised infrastructure re-encrypts recovered data.
Roles
| Role | Responsibility | Typical assignee | Backup |
|---|---|---|---|
| Recovery lead | Coordinates recovery sequence, makes restore decisions, tracks progress | Senior systems administrator | Infrastructure manager |
| Backup operator | Executes restore operations, manages backup infrastructure | Systems administrator | Recovery lead |
| Application owner | Validates recovered data, confirms application functionality | Application administrator or business owner | IT manager |
| Database administrator | Executes database recovery, validates consistency | DBA or senior sysadmin | Backup operator |
| Communications coordinator | Updates stakeholders on progress and timelines | Service desk manager | IT manager |
Recovery type selection
Before beginning restoration, determine the appropriate recovery approach based on what failed and what outcome is required.
Recovery type decision flow:

- Individual files or folders → file-level restore (Phase 2A)
- Complete system (OS + applications) → full system restore (Phase 2B)
  - Immutable backup required? Yes → air-gap access (Phase 2B-I); No → standard restore (Phase 2B-S)
- Database → point-in-time or transaction log replay (Phase 2C)

All paths converge on validation and handoff (Phase 3).

Figure 1: Recovery type decision flow
Phase 1: Assessment and preparation
Objective: Confirm recovery requirements, verify backup availability, and prepare recovery environment
Timeframe: 15-60 minutes
Document the recovery request with specific details:
- What data or systems require recovery
- When data loss or corruption occurred (if known)
- Target recovery point (most recent, specific date/time, or pre-incident)
- Requester identity and authorisation level
- Business impact and urgency
Verify authorisation for the recovery. File-level restores for a user’s own data require that user’s request or their manager’s approval. System restores require IT management approval. Restores affecting shared data require data owner approval.
Identify the recovery point objective. Determine the latest acceptable data loss:
Recovery scenarios and typical RPO:
- Accidental deletion (known time): restore from backup immediately before deletion
- Corruption (unknown onset): restore from last known-good backup, verify integrity
- Ransomware: restore from immutable backup before infection date
- Hardware failure: restore from most recent backup
- Migration rollback: restore from pre-migration backup

- Query the backup catalogue to confirm backup availability for the target recovery point:
```bash
# Restic example: list snapshots for specific path
restic snapshots --path /data/finance --json | jq '.[] | {time, id, paths}'
```

```powershell
# Veeam PowerShell: find restore points
Get-VBRRestorePoint -Backup "FileServer-Daily" |
  Where-Object {$_.CreationTime -gt (Get-Date).AddDays(-7)} |
  Select-Object Name, CreationTime, Type
```

```bash
# Commvault: list backup jobs
qlist backup -c FileServerSubclient -last 10
```

Record the backup job ID, timestamp, and storage location for the selected restore point.
- Verify backup integrity before restoration. Never restore from an unverified backup:
```bash
# Restic: verify snapshot integrity
restic check --read-data-subset=5%

# Borg: verify archive
borg check --verify-data repository::archive-name

# For tape-based backups, verify media readability
mt -f /dev/st0 status
```

If verification fails, select an alternative restore point and re-verify.
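The fallback loop (verify, fall back to an older restore point, escalate if nothing passes) can be sketched in shell. `verify_snapshot` here is a stub standing in for a real integrity check such as a scoped `restic check`; the `BAD_SNAPSHOTS` variable exists only to make the stub demonstrable:

```bash
#!/usr/bin/env bash
# Sketch: walk restore points newest-first and return the first one that
# passes an integrity check.

verify_snapshot() {
  # Stub for illustration: treat snapshots listed in BAD_SNAPSHOTS as corrupt.
  # In a real run this would wrap the backup tool's verification command.
  case " ${BAD_SNAPSHOTS:-} " in
    *" $1 "*) return 1 ;;
    *)        return 0 ;;
  esac
}

select_verified_snapshot() {
  local snap
  for snap in "$@"; do            # arguments ordered newest-first
    if verify_snapshot "$snap"; then
      echo "$snap"
      return 0
    fi
    echo "snapshot $snap failed verification, trying older" >&2
  done
  return 1                        # nothing verifiable: escalate per Phase 1 decision point
}

# Example: the newest snapshot is corrupt, so the next one is selected.
BAD_SNAPSHOTS="abc125"
select_verified_snapshot abc125 abc124 abc123   # prints "abc124"
```

The newest-first ordering matters: it minimises data loss while still refusing to restore from an unverified copy.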
Prepare the recovery target environment:
- For file restores: confirm destination has sufficient space (restored size plus 20% overhead)
- For system restores: provision target VM or physical hardware matching or exceeding original specifications
- For database restores: ensure database engine version compatibility and adequate storage
Notify affected users that recovery is beginning. Provide estimated completion time based on data volume:
| Data volume | Estimated restore time (disk-based backup) | Estimated restore time (tape/cloud) |
|---|---|---|
| Under 10 GB | 5-15 minutes | 15-45 minutes |
| 10-100 GB | 15-60 minutes | 1-4 hours |
| 100 GB - 1 TB | 1-4 hours | 4-12 hours |
| 1-10 TB | 4-24 hours | 1-3 days |
| Over 10 TB | 1-7 days | 3-14 days |
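For a specific estimate rather than a table band, a quick helper works; the throughput figure is an assumption, and real restores vary with compression, dedupe rehydration, and concurrent load:

```bash
# Sketch: estimate restore time from data volume (GB) and sustained
# throughput (MB/s). Integer arithmetic, rounded up to whole minutes.
estimate_restore_minutes() {
  local size_gb=$1 rate_mbs=$2
  local total_mb=$(( size_gb * 1024 ))
  echo $(( (total_mb + rate_mbs * 60 - 1) / (rate_mbs * 60) ))
}

# 500 GB over a 1 Gbps link (~100 MB/s sustained) -> prints "86"
estimate_restore_minutes 500 100
```

Quote the estimate to users with headroom: verification and permission fixes add time beyond the raw copy.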
Decision point: If the required backup is unavailable or corrupted, escalate to IT management. Recovery may require forensic data recovery services or acceptance of data loss.
Checkpoint: Before proceeding, confirm:
- Authorisation documented
- Backup existence and integrity verified
- Recovery target prepared with adequate resources
- Users notified with timeline
Phase 2A: File-level restore
Objective: Recover specific files or folders to original or alternate location
Timeframe: 5 minutes to 4 hours depending on volume
- Mount or browse the backup containing the target files:
```bash
# Restic: mount backup for browsing
restic mount --snapshot abc123 /mnt/restore

# Borg: mount archive
borg mount repository::archive-2024-01-15 /mnt/restore

# Duplicati: use web interface to browse backup contents
# Navigate to Restore > select backup > browse files
```

For agent-based backup systems (Veeam, Commvault, Rubrik), use the management console to initiate file-level recovery and browse the backup catalogue.
Locate the specific files requiring recovery. Verify you have identified the correct versions by checking file timestamps and sizes against expected values.
Determine restore destination:
- Original location: Overwrites current files. Use when current files are corrupted or deleted and users need immediate access.
- Alternate location: Restores to staging area. Use when comparison with current files is needed or when original location has insufficient space.
```bash
# Restore to original location
restic restore abc123 --target / --include /data/finance/reports/

# Restore to alternate location
restic restore abc123 --target /restore/staging --include /data/finance/reports/
```

- Execute the restore operation. Monitor progress and record completion time:
```bash
# Restic with progress
restic restore abc123 --target /restore/staging --verbose

# rsync from mounted backup with progress
rsync -avh --progress /mnt/restore/data/finance/ /restore/staging/finance/
```

- Verify restored file integrity:
```bash
# Compare file counts
find /restore/staging -type f | wc -l

# Verify checksums if original checksums are available
sha256sum -c /data/checksums/finance-checksums.txt
```
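The count-and-checksum comparison can be scripted end-to-end. A minimal sketch, where paths and the function name are illustrative rather than from any backup product, comparing a restore staging tree against a known-good source:

```bash
# Sketch: compare two directory trees by file count and per-file SHA-256.
verify_restore_tree() {
  local src=$1 dst=$2
  local src_count dst_count sa sb rc=0
  src_count=$(find "$src" -type f | wc -l)
  dst_count=$(find "$dst" -type f | wc -l)
  if [ "$src_count" -ne "$dst_count" ]; then
    echo "MISMATCH: $src_count files expected, $dst_count restored"
    return 1
  fi
  sa=$(mktemp) sb=$(mktemp)
  # Hash both trees relative to their roots so paths line up in the diff
  ( cd "$src" && find . -type f -exec sha256sum {} + | sort -k2 ) > "$sa"
  ( cd "$dst" && find . -type f -exec sha256sum {} + | sort -k2 ) > "$sb"
  diff -q "$sa" "$sb" > /dev/null || rc=1
  rm -f "$sa" "$sb"
  if [ "$rc" -eq 0 ]; then
    echo "OK: $dst_count files, checksums match"
  else
    echo "MISMATCH: checksum differences found"
  fi
  return $rc
}

# Demo with two tiny identical trees
demo=$(mktemp -d)
mkdir -p "$demo/src" "$demo/dst"
echo "q4 report" > "$demo/src/report.txt"
cp "$demo/src/report.txt" "$demo/dst/report.txt"
verify_restore_tree "$demo/src" "$demo/dst"   # prints "OK: 1 files, checksums match"
rm -rf "$demo"
```

This only proves the restore matches the backup; the owner's content review in Phase 3 still decides whether the backup contained the right data.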
```bash
# Spot-check file contents (open representative files, verify readability)
head -100 /restore/staging/finance/reports/q4-2024.xlsx | file -
```

- Move restored files to final location if using alternate restore:
```bash
# Move with preserved permissions
rsync -avh --remove-source-files /restore/staging/finance/ /data/finance/
```

- Adjust permissions if necessary. Restored files inherit permissions from backup, which may not match current requirements:
```bash
# Restore original ownership
chown -R finance-svc:finance-team /data/finance/reports/

# Apply standard permissions
chmod -R 750 /data/finance/reports/
```

Checkpoint: Files restored to correct location, integrity verified, permissions appropriate, users can access files.
Phase 2B: Full system restore
Objective: Recover complete operating system, applications, and data to operational state
Timeframe: 2-24 hours depending on system size and backup location
Full system restoration applies when the operating system is unrecoverable, hardware has failed, or a complete rollback to a previous state is required. This procedure assumes image-based or block-level backups exist.
Phase 2B-S: Standard full restore
Use standard restore when recovering from hardware failure, corruption, or migration rollback where no security compromise occurred.
Provision the recovery target:
For virtual machines:
- Create new VM matching original specifications:
  - CPU: equal or greater core count
  - Memory: equal or greater allocation
  - Storage: equal or greater capacity
  - Network: same VLAN/subnet as original

For physical servers:
- Install replacement hardware or repair failed components
- Configure RAID controller to match original configuration
- Verify boot device is accessible

Boot the recovery target from backup recovery media. Most backup solutions provide bootable ISO images:
- Veeam: Boot from Veeam Recovery Media ISO
- Acronis: Boot from Acronis Bootable Media
- Restic/Borg: Boot from live Linux distribution with backup tools installed
- Windows Server Backup: Boot from Windows installation media, select “Repair your computer”
Connect to backup storage from the recovery environment:
```bash
# Mount network backup repository
mount -t nfs backup-server:/repository /mnt/backup

# Or mount CIFS/SMB share
mount -t cifs //backup-server/repository /mnt/backup -o user=backup-svc

# For cloud storage, configure credentials
export AWS_ACCESS_KEY_ID="key"
export AWS_SECRET_ACCESS_KEY="secret"
restic -r s3:s3.amazonaws.com/bucket-name snapshots
```

- Select the restore point identified in Phase 1 and begin restoration:
```bash
# Restic: restore entire system
restic restore abc123 --target /mnt/target-disk
```

```
# For image-based restore (Veeam, Acronis), use GUI:
# - Select backup file/repository
# - Choose restore point by date
# - Map disks (source disk -> target disk)
# - Begin restore
```

- Monitor restore progress. For large systems, expect:
Restore throughput estimates:
- Local disk to local disk: 100-300 MB/s (1 TB in 1-3 hours)
- Network (1 Gbps): 80-100 MB/s (1 TB in 3-4 hours)
- Network (10 Gbps): 400-800 MB/s (1 TB in 20-45 min)
- Cloud storage: 50-200 MB/s (1 TB in 2-6 hours)
- Tape: 100-400 MB/s (1 TB in 1-3 hours)

- After restore completes, reconfigure bootloader if necessary:
```bash
# Linux: reinstall GRUB
grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg
```

```
# Windows: repair boot configuration
bootrec /fixmbr
bootrec /fixboot
bootrec /rebuildbcd
```

Boot the restored system. Do not connect to production network until verification is complete.
Verify restored system state:
- Operating system boots successfully
- Core services start automatically
- File systems mount correctly
- Network configuration is appropriate (may need adjustment for new hardware)
- Application services respond
Phase 2B-I: Immutable backup restore
Use immutable restore when recovering from ransomware, malicious deletion, or when standard backups may have been compromised. Immutable backups are write-once copies stored on media or in repositories that prevent modification or deletion for a defined retention period.
Access the air-gapped or immutable backup repository. This requires physical access or out-of-band management depending on implementation:
For air-gapped tape:
- Retrieve tapes from secure offsite storage
- Load tapes into isolated restore server (not connected to production network)
- Verify tape seals and chain of custody documentation

For immutable cloud storage (S3 Object Lock, Azure Immutable Blob):
```bash
# Verify object lock status
aws s3api get-object-retention --bucket backup-bucket --key backup-2024-01-15.tar

# Access requires privileged credentials stored separately from production
# Retrieve credentials from secure vault or break-glass procedure
```

For immutable disk repository (with WORM or retention lock):
```
# Connect to isolated management network
# Access repository through dedicated management interface
# Verify retention lock is active and backup predates incident
```

- Verify the selected backup predates the security incident by at least 24 hours. Check backup timestamps against incident timeline:
```bash
# List available immutable backups with timestamps
restic -r /mnt/immutable-repo snapshots --json | jq '.[] | select(.time < "2024-01-14T00:00:00Z") | {id, time, paths}'
```

- Provision a clean recovery environment isolated from production:
Recovery environment requirements:
- Separate VLAN with no production network connectivity
- Freshly installed hypervisor or clean physical hardware
- No domain join until validation complete
- Antimalware with current definitions
- Network monitoring active

Restore to the isolated environment following Phase 2B-S procedures.
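The 24-hour margin between backup timestamp and incident timeline can be checked mechanically rather than by eye. A sketch assuming GNU `date` and ISO 8601 UTC timestamps:

```bash
# Sketch: accept an immutable restore point only if it is at least 24 hours
# older than the incident timestamp. GNU date is assumed for parsing.
predates_incident() {
  local backup_ts=$1 incident_ts=$2
  local margin_s=$(( 24 * 3600 )) b i
  b=$(date -u -d "$backup_ts" +%s) || return 2
  i=$(date -u -d "$incident_ts" +%s) || return 2
  [ $(( i - b )) -ge "$margin_s" ]
}

if predates_incident "2024-01-13T02:00:00Z" "2024-01-14T12:00:00Z"; then
  echo "restore point acceptable"        # 34-hour margin, so this branch runs
else
  echo "too close to incident - pick an older restore point"
fi
```

The margin exists because dwell time before detection is common; widen it if the incident timeline is uncertain.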
Before connecting to production, perform security validation:
```bash
# Scan restored files for known ransomware signatures
clamscan -r /mnt/restored-system --infected --log=/var/log/restore-scan.log

# Check for known ransomware file extensions
find /mnt/restored-system -type f \( -name "*.encrypted" -o -name "*.locked" \
  -o -name "*.crypted" -o -name "*.crypt" \) -print

# Verify critical executable integrity against known-good hashes
sha256sum /mnt/restored-system/Windows/System32/cmd.exe
# Compare against published Microsoft hashes or pre-incident baseline
```

- If validation passes, coordinate with security team to reconnect restored system to production following containment confirmation.
Isolation requirement
Never restore from immutable backups directly to production infrastructure during a security incident. Restored systems must be validated in isolation first. Premature reconnection risks re-infection or spread of undetected malware.
Checkpoint: System restored, boots successfully, services operational, security validation passed (for immutable restore).
Phase 2C: Database recovery
Objective: Restore database to consistent state at specific point in time
Timeframe: 30 minutes to 12 hours depending on database size and recovery type
Database recovery requires coordination between backup restoration and database engine recovery mechanisms. The procedure differs by database platform and recovery scenario.
Database recovery timeline: the chain runs from the full backup (Sunday 02:00) through continuous transaction log backups (15-minute interval) to the target recovery point. In the example, corruption is detected Tuesday 14:30, the last valid log is Tuesday 14:15, and the target RPO is Tuesday 14:00. The process is full restore, then log replay, then stop at the point in time.

Figure 2: Point-in-time database recovery using full backup plus transaction logs
PostgreSQL recovery
- Stop the database service if running:
```bash
systemctl stop postgresql
```

- Clear or move the existing data directory:
```bash
mv /var/lib/postgresql/14/main /var/lib/postgresql/14/main.corrupted
mkdir /var/lib/postgresql/14/main
chown postgres:postgres /var/lib/postgresql/14/main
chmod 700 /var/lib/postgresql/14/main
```

- Restore the base backup:
```bash
# From pg_basebackup archive
tar -xzf /backup/base/base-2024-01-14.tar.gz -C /var/lib/postgresql/14/main

# Or from Restic/Borg (restore relative to /, so the files land back in place)
restic restore abc123 --target / --include /var/lib/postgresql/14/main
```

- Configure recovery parameters for point-in-time recovery. Create /var/lib/postgresql/14/main/postgresql.auto.conf:

```
restore_command = 'cp /backup/wal/%f %p'
recovery_target_time = '2024-01-15 14:00:00+00'
recovery_target_action = 'promote'
```

- Create the recovery signal file:
```bash
touch /var/lib/postgresql/14/main/recovery.signal
chown postgres:postgres /var/lib/postgresql/14/main/recovery.signal
```

- Start PostgreSQL:
```bash
systemctl start postgresql
```

Monitor recovery progress in logs:

```bash
tail -f /var/log/postgresql/postgresql-14-main.log
```

- Verify recovery completion and data consistency:
```sql
-- Check recovery status
SELECT pg_is_in_recovery();
-- Should return 'f' after recovery completes

-- Verify timeline
SELECT timeline_id FROM pg_control_checkpoint();

-- Run application-specific consistency checks
SELECT COUNT(*) FROM critical_table;
```

SQL Server recovery
Connect to SQL Server Management Studio or sqlcmd with sysadmin privileges.
Identify the backup chain:
```sql
-- List backup history
SELECT database_name, backup_start_date, backup_finish_date, type,
       backup_size / 1024 / 1024 AS size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'ProductionDB'
  AND backup_start_date > DATEADD(day, -7, GETDATE())
ORDER BY backup_start_date DESC;
```

- Restore the full backup with NORECOVERY to allow subsequent log restores:
```sql
RESTORE DATABASE ProductionDB
FROM DISK = 'D:\Backup\ProductionDB_Full_20240114.bak'
WITH NORECOVERY,
     MOVE 'ProductionDB' TO 'E:\Data\ProductionDB.mdf',
     MOVE 'ProductionDB_log' TO 'F:\Log\ProductionDB_log.ldf',
     REPLACE;
```

- Restore differential backup if available:
```sql
RESTORE DATABASE ProductionDB
FROM DISK = 'D:\Backup\ProductionDB_Diff_20240115.bak'
WITH NORECOVERY;
```

- Restore transaction logs up to target recovery time:
```sql
-- Restore all logs except the last
RESTORE LOG ProductionDB
FROM DISK = 'D:\Backup\ProductionDB_Log_20240115_1200.trn'
WITH NORECOVERY;

RESTORE LOG ProductionDB
FROM DISK = 'D:\Backup\ProductionDB_Log_20240115_1215.trn'
WITH NORECOVERY;

-- Restore final log with point-in-time target
RESTORE LOG ProductionDB
FROM DISK = 'D:\Backup\ProductionDB_Log_20240115_1400.trn'
WITH RECOVERY, STOPAT = '2024-01-15 14:00:00';
```

- Verify database state:
```sql
-- Check database is online
SELECT name, state_desc FROM sys.databases WHERE name = 'ProductionDB';

-- Run consistency check
DBCC CHECKDB ('ProductionDB') WITH NO_INFOMSGS;
```

MySQL/MariaDB recovery
- Stop the database service:
```bash
systemctl stop mariadb
```

- Clear the data directory:
```bash
mv /var/lib/mysql /var/lib/mysql.corrupted
mkdir /var/lib/mysql
chown mysql:mysql /var/lib/mysql
```

- Restore from physical backup (if using Mariabackup or Percona XtraBackup):
```bash
# Prepare the backup
mariabackup --prepare --target-dir=/backup/full-2024-01-14

# Restore
mariabackup --copy-back --target-dir=/backup/full-2024-01-14

chown -R mysql:mysql /var/lib/mysql
```

- For point-in-time recovery, apply binary logs after restoration:
```bash
# Start MySQL to apply logs
systemctl start mariadb

# Apply binary logs up to target time
mysqlbinlog --stop-datetime="2024-01-15 14:00:00" \
  /backup/binlog/mariadb-bin.000042 \
  /backup/binlog/mariadb-bin.000043 | mysql -u root -p
```

- Verify recovery:
```sql
-- Check tables
CHECK TABLE critical_table;

-- Verify row counts match expectations
SELECT COUNT(*) FROM critical_table;
```

Checkpoint: Database online, consistency checks pass, application can connect, data matches expected state for recovery point.
Phase 3: Validation and handoff
Objective: Confirm recovery success and return systems to production operation
Timeframe: 30 minutes to 4 hours
Execute the validation checklist appropriate to recovery type:
File recovery validation:
- [ ] File count matches expected
- [ ] File sizes are correct
- [ ] Files are readable (spot check 5-10 files)
- [ ] Checksums match (if baseline available)
- [ ] Permissions allow appropriate access
- [ ] Owner/requester confirms content is correct

System recovery validation:
- [ ] System boots without errors
- [ ] All file systems mount correctly
- [ ] Network connectivity established
- [ ] Authentication services functional
- [ ] Core application services running
- [ ] Application responds to health checks
- [ ] Database connectivity verified
- [ ] Scheduled tasks configured and enabled
- [ ] Monitoring agent reporting
- [ ] Antimalware running with current definitions

Database recovery validation:
- [ ] Database online and accepting connections
- [ ] DBCC CHECKDB or equivalent passes
- [ ] Application can connect and query
- [ ] Transaction counts match expectations
- [ ] Foreign key relationships intact
- [ ] Indexes present and valid
- [ ] Replication/mirroring re-established (if applicable)

- Document the recovery in the backup log:
```
Recovery Record
---------------
Date/time completed: 2024-01-15 16:45:00 UTC
Recovery lead: J. Smith

Request details:
- Requester: Finance team manager
- Authorisation: IT Manager approval #IR-2024-0142
- Reason: Accidental deletion of Q4 reports folder

Recovery details:
- Source backup: FileServer-Daily-2024-01-14-0200
- Backup timestamp: 2024-01-14 02:15:00 UTC
- Data volume: 2.4 GB (847 files)
- Restore duration: 12 minutes
- Target location: /data/finance/reports/ (original location)

Validation:
- File count verified: 847 files
- Requester confirmed data completeness
- No corruption detected

Notes:
- None
```

- Notify stakeholders of completion:
```
Subject: Recovery Complete - Finance Reports Data

The data recovery requested for the Finance Reports folder has been completed successfully.

Summary:
- Data restored from backup dated 2024-01-14 02:15 UTC
- 847 files (2.4 GB) recovered to original location
- Finance team manager has verified data completeness

The data is now available at the original location: \\fileserver\finance\reports\

If you discover any issues with the recovered data, please contact the IT Service Desk immediately.

Reference: IR-2024-0142
```

Update monitoring and alerting:
- Confirm system is reporting to monitoring platform
- Verify backup jobs are scheduled and will execute on schedule
- Re-enable any alerts that were suppressed during recovery
For system or database recovery, schedule follow-up backup:
- Force immediate backup after recovery to establish new baseline
- Verify backup completes successfully
Checkpoint: Recovery documented, stakeholders notified, monitoring active, backup schedule confirmed.
Communications
Update stakeholders at defined intervals throughout recovery operations.
| Stakeholder | Initial notification | Update frequency | Final notification |
|---|---|---|---|
| Affected users | Within 15 minutes of starting | Every 2 hours for extended recoveries | On completion |
| IT management | Immediately for system recovery | On status change or every 4 hours | On completion with summary |
| Application owners | Within 30 minutes | On status change | On completion with validation results |
| Executive leadership | For outages over 4 hours | Daily | On completion |
Initial notification template
```
Subject: Data Recovery in Progress - [System/Data Name]

A data recovery operation is currently in progress for [system/data name].

Started: [time]
Expected completion: [time estimate]
Reason: [brief description]

Impact during recovery:
- [List any service interruptions or limitations]

We will provide updates every [interval] hours.

For questions, contact the IT Service Desk.

Reference: [ticket number]
```

Completion notification template
```
Subject: Recovery Complete - [System/Data Name]

The recovery operation for [system/data name] has completed successfully.

Summary:
- Recovery point: [backup date/time]
- Duration: [time]
- Data recovered: [volume or description]

The [system/data] is now available for normal use.

[Any follow-up actions required by users]

Reference: [ticket number]
```

Evidence preservation
Maintain records of all recovery operations for audit and compliance purposes.
Preserve the following for each recovery operation:
- Original recovery request (email, ticket, or documented verbal request)
- Authorisation record
- Backup selection documentation (which backup, why selected)
- Integrity verification results
- Recovery command output or job logs
- Validation checklist (completed)
- Stakeholder notification records
Retain recovery records according to data retention policy, typically:
- Standard file recovery: 12 months
- System recovery: 36 months
- Recovery following security incident: 7 years
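Those periods can be turned into a small helper for the records register. A sketch, with illustrative category names and GNU `date` assumed for the month arithmetic:

```bash
# Sketch: map recovery type to record-retention period and compute expiry.
retention_expiry() {
  local type=$1 completed=$2 months
  case "$type" in
    file)     months=12 ;;   # standard file recovery
    system)   months=36 ;;   # system recovery
    security) months=84 ;;   # recovery following security incident (7 years)
    *)        return 2 ;;
  esac
  date -u -d "$completed + $months months" +%F
}

retention_expiry system 2024-01-15   # prints "2027-01-15"
```

Recording the computed expiry alongside each recovery record makes periodic purges a simple date comparison rather than a policy lookup.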
Store recovery documentation in:
- IT service management system (ticket attachment)
- Backup operations log (central record)
- Security incident record (if applicable)
SaaS and cloud application recovery
Recovery from cloud-hosted applications follows provider-specific procedures rather than infrastructure restore.
Microsoft 365 recovery
Microsoft 365 provides native retention and recovery capabilities. For deleted content within retention periods:
SharePoint/OneDrive file recovery:

1. Navigate to site or OneDrive recycle bin
2. Select items to restore
3. Click "Restore" to return to original location
Retention periods:

- First-stage recycle bin: 93 days
- Second-stage recycle bin: 93 days total (admin access required)
- Recoverable Items folder (Exchange): 14-30 days (configurable)

For recovery beyond retention periods or tenant-level disasters, use third-party Microsoft 365 backup solutions (Veeam for Microsoft 365, Commvault, Rubrik) following those products' restore procedures.
Google Workspace recovery
Drive file recovery:

1. Navigate to drive.google.com/drive/trash
2. Select items to restore
3. Right-click > Restore
Admin recovery (for deleted users):

1. Admin console > Directory > Users > deleted users
2. Select user > Restore
3. Data restores to restored user account
Retention:

- User trash: 30 days (items auto-delete after this)
- Admin restore window: 25 days after user trash expiry
- Vault retention: per retention policy (can be indefinite)

General SaaS recovery
For other SaaS applications:
- Check application’s native backup/export capabilities
- Check recycle bin or trash functionality
- Review audit logs for deletion events
- Contact vendor support for recovery options
- Restore from third-party SaaS backup if deployed
Troubleshooting
| Symptom | Cause | Resolution |
|---|---|---|
| Backup not found for requested date | Backup failed on that date, or retention expired | Query backup logs to find nearest available backup; may need to accept different recovery point |
| Backup integrity check fails | Backup corrupted during storage or media failure | Select alternative restore point; if multiple backups corrupted, investigate storage infrastructure |
| Restore runs but database won’t start | Transaction logs incomplete or corrupt | Restore to point before log corruption; accept data loss from missing transactions |
| Restored files have wrong permissions | Backup agent ran as different user or ACLs changed | Manually reset permissions; document expected permissions for future recoveries |
| Full system restore boots but services fail | Configuration drift since backup; dependencies missing | Compare current and backup configurations; restore missing dependencies |
| Database consistency check fails after restore | Incomplete restore or corruption in backup | Re-attempt restore; if repeated failures, restore from different backup and manually replay critical transactions |
| Restore completes but users report missing data | Wrong backup selected; data not in backup scope | Verify backup contents before restore; check if data was in excluded paths |
| Network-based restore extremely slow | Network congestion or misconfiguration | Verify network path throughput; consider restoring to local staging then copying |
| Immutable backup inaccessible | Credentials expired or air-gap access procedure unclear | Follow break-glass procedure for immutable backup access credentials |
| Cloud restore quota exceeded | Provider limits on restore bandwidth or operations | Stagger restore operations; contact provider for quota increase |
| Restored VM won’t boot in different hypervisor | Hardware abstraction layer incompatibility | Use P2V/V2V conversion tools; reconfigure drivers post-restore |
| Tape read errors during restore | Media degradation or drive alignment | Try alternate tape drive; if repeated failures, tape may be unrecoverable |
See also
- Backup Systems for backup architecture and configuration
- Backup Verification for ongoing verification procedures
- Ransomware Response for coordinating recovery after ransomware
- Data Backup and Recovery for data management perspective
- DR Site Invocation when full disaster recovery is required
- High Availability and Disaster Recovery for architecture concepts