Backup Recovery

Backup recovery restores data and systems from protected copies when primary data becomes unavailable, corrupted, or compromised. This playbook covers file-level restoration, full system recovery, database point-in-time recovery, and immutable backup access for ransomware scenarios. Recovery operations range from single-file requests completed in minutes to full infrastructure rebuilds spanning days.

Activation criteria

Invoke this playbook when any of the following conditions exist:

Indicator | Activation threshold
Data loss confirmed | Any production data deleted or inaccessible
Data corruption detected | File integrity check failures, database consistency errors
System unrecoverable | Operating system or application will not start after troubleshooting
Ransomware encryption | Files encrypted (coordinate with Ransomware Response playbook)
Recovery request | Authorised request for data from specific point in time
Failed migration rollback | System migration failed, rollback to backup required

Ransomware coordination

If recovery follows ransomware detection, execute this playbook only after containment is confirmed. The Ransomware Response playbook must clear the environment before restoration begins. Restoring to compromised infrastructure re-encrypts recovered data.

Roles

Role | Responsibility | Typical assignee | Backup
Recovery lead | Coordinates recovery sequence, makes restore decisions, tracks progress | Senior systems administrator | Infrastructure manager
Backup operator | Executes restore operations, manages backup infrastructure | Systems administrator | Recovery lead
Application owner | Validates recovered data, confirms application functionality | Application administrator or business owner | IT manager
Database administrator | Executes database recovery, validates consistency | DBA or senior sysadmin | Backup operator
Communications coordinator | Updates stakeholders on progress and timelines | Service desk manager | IT manager

Recovery type selection

Before beginning restoration, determine the appropriate recovery approach based on what failed and what outcome is required.

                    +------------------------+
                    |  What needs recovery?  |
                    +-----------+------------+
                                |
         +----------------------+-----------------------+
         |                      |                       |
         v                      v                       v
+------------------+  +--------------------+  +--------------------+
| Individual files |  | Complete system    |  | Database           |
| or folders       |  | (OS + applications)|  |                    |
+--------+---------+  +---------+----------+  +---------+----------+
         |                      |                       |
         v                      v                       v
+------------------+  +--------------------+  +--------------------+
| File-level       |  | Full system        |  | Point-in-time or   |
| restore          |  | restore            |  | transaction log    |
| (Phase 2A)       |  | (Phase 2B)         |  | replay (Phase 2C)  |
+--------+---------+  +---------+----------+  +---------+----------+
         |                      |                       |
         |                      v                       |
         |            +--------------------+            |
         |            | Immutable backup   |            |
         |            | required?          |            |
         |            +---------+----------+            |
         |                      |                       |
         |            +---------+---------+             |
         |            |                   |             |
         |            v                   v             |
         |   +----------------+  +----------------+     |
         |   | Yes: Air-gap   |  | No: Standard   |     |
         |   | access         |  | restore        |     |
         |   | (Phase 2B-I)   |  | (Phase 2B-S)   |     |
         |   +----------------+  +----------------+     |
         |                      |                       |
         +----------------------+-----------------------+
                                |
                                v
                    +------------------------+
                    | Validation and handoff |
                    | (Phase 3)              |
                    +------------------------+

Figure 1: Recovery type decision flow

Phase 1: Assessment and preparation

Objective: Confirm recovery requirements, verify backup availability, and prepare recovery environment
Timeframe: 15-60 minutes

  1. Document the recovery request with specific details:

    • What data or systems require recovery
    • When data loss or corruption occurred (if known)
    • Target recovery point (most recent, specific date/time, or pre-incident)
    • Requester identity and authorisation level
    • Business impact and urgency
  2. Verify authorisation for the recovery. File-level restores for a user’s own data require that user’s request or their manager’s approval. System restores require IT management approval. Restores affecting shared data require data owner approval.

  3. Identify the recovery point objective (RPO). Determine the maximum acceptable data loss:

Recovery scenarios and typical RPO:
Accidental deletion (known time): Restore from backup immediately before deletion
Corruption (unknown onset): Restore from last known-good backup, verify integrity
Ransomware: Restore from immutable backup before infection date
Hardware failure: Restore from most recent backup
Migration rollback: Restore from pre-migration backup
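The scenario-to-restore-point rules above can be encoded once so every operator applies the same selection logic. A minimal sketch; the scenario keys are illustrative shorthand:

```shell
#!/bin/sh
# Map a loss scenario to the restore-point selection rule listed above.
# Scenario keys (deletion, corruption, ...) are illustrative shorthand.
select_restore_point_rule() {
  case $1 in
    deletion)   echo "backup immediately before deletion time" ;;
    corruption) echo "last known-good backup, verify integrity" ;;
    ransomware) echo "immutable backup before infection date" ;;
    hardware)   echo "most recent backup" ;;
    migration)  echo "pre-migration backup" ;;
    *)          echo "unknown scenario: $1" >&2; return 1 ;;
  esac
}
```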
  4. Query the backup catalogue to confirm backup availability for the target recovery point:
Terminal window
# Restic example: list snapshots for specific path
restic snapshots --path /data/finance --json | jq '.[] | {time, id, paths}'
# Veeam PowerShell: find restore points
Get-VBRRestorePoint -Backup "FileServer-Daily" |
Where-Object {$_.CreationTime -gt (Get-Date).AddDays(-7)} |
Select-Object Name, CreationTime, Type
# Commvault: list backup jobs
qlist backup -c FileServerSubclient -last 10

Record the backup job ID, timestamp, and storage location for the selected restore point.

  5. Verify backup integrity before restoration. Never restore from an unverified backup:
Terminal window
# Restic: verify snapshot integrity
restic check --read-data-subset=5%
# Borg: verify archive
borg check --verify-data repository::archive-name
# For tape-based backups, verify media readability
mt -f /dev/st0 status

If verification fails, select an alternative restore point and re-verify.

  6. Prepare the recovery target environment:

    • For file restores: confirm destination has sufficient space (restored size plus 20% overhead)
    • For system restores: provision target VM or physical hardware matching or exceeding original specifications
    • For database restores: ensure database engine version compatibility and adequate storage
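The destination-space check in the first bullet can be scripted rather than eyeballed. A sketch using `df`; the 20% overhead matches the guidance above:

```shell
#!/bin/sh
# Confirm the destination has room for the restore plus 20% overhead.
# Usage: check_space <destination_dir> <restore_size_kb>
check_space() {
  dest=$1; need_kb=$2
  avail_kb=$(df -Pk "$dest" | awk 'NR==2 {print $4}')
  need_with_overhead=$(( need_kb + need_kb / 5 ))
  if [ "$avail_kb" -lt "$need_with_overhead" ]; then
    echo "FAIL: need ${need_with_overhead} KB, only ${avail_kb} KB free"
    return 1
  fi
  echo "OK: ${avail_kb} KB free for ${need_with_overhead} KB restore"
}
```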
  7. Notify affected users that recovery is beginning. Provide estimated completion time based on data volume:

    Data volume | Estimated restore time (disk-based backup) | Estimated restore time (tape/cloud)
    Under 10 GB | 5-15 minutes | 15-45 minutes
    10-100 GB | 15-60 minutes | 1-4 hours
    100 GB - 1 TB | 1-4 hours | 4-12 hours
    1-10 TB | 4-24 hours | 1-3 days
    Over 10 TB | 1-7 days | 3-14 days
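As a rough planning aid, the estimates above reduce to volume divided by sustained throughput, plus a fixed setup overhead. A minimal sketch; the 10-minute overhead and the 80 MB/s throughput in the example are illustrative assumptions:

```shell
#!/bin/sh
# Rough restore ETA in minutes: volume in GB, sustained throughput in MB/s.
# Adds 10 minutes fixed overhead for job setup and cataloguing (assumption).
estimate_restore_minutes() {
  volume_gb=$1
  throughput_mbs=$2
  awk -v gb="$volume_gb" -v tp="$throughput_mbs" \
    'BEGIN { printf "%d\n", (gb * 1024 / tp) / 60 + 10 }'
}

estimate_restore_minutes 100 80   # ~100 GB over a 1 Gbps disk-based path
```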

Decision point: If the required backup is unavailable or corrupted, escalate to IT management. Recovery may require forensic data recovery services or acceptance of data loss.

Checkpoint: Before proceeding, confirm:

  • Authorisation documented
  • Backup existence and integrity verified
  • Recovery target prepared with adequate resources
  • Users notified with timeline

Phase 2A: File-level restore

Objective: Recover specific files or folders to original or alternate location
Timeframe: 5 minutes to 4 hours depending on volume

  1. Mount or browse the backup containing the target files:
Terminal window
# Restic: mount repository for browsing (snapshots appear under the mount point)
restic mount /mnt/restore
# Browse a specific snapshot at /mnt/restore/ids/abc123/
# Borg: mount archive
borg mount repository::archive-2024-01-15 /mnt/restore
# Duplicati: use web interface to browse backup contents
# Navigate to Restore > select backup > browse files

For agent-based backup systems (Veeam, Commvault, Rubrik), use the management console to initiate file-level recovery and browse the backup catalogue.

  2. Locate the specific files requiring recovery. Verify you have identified the correct versions by checking file timestamps and sizes against expected values.

  3. Determine restore destination:

    • Original location: Overwrites current files. Use when current files are corrupted or deleted and users need immediate access.
    • Alternate location: Restores to staging area. Use when comparison with current files is needed or when original location has insufficient space.
Terminal window
# Restore to original location
restic restore abc123 --target / --include /data/finance/reports/
# Restore to alternate location
restic restore abc123 --target /restore/staging --include /data/finance/reports/
  4. Execute the restore operation. Monitor progress and record completion time:
Terminal window
# Restic with progress
restic restore abc123 --target /restore/staging --verbose
# rsync from mounted backup with progress
rsync -avh --progress /mnt/restore/data/finance/ /restore/staging/finance/
  5. Verify restored file integrity:
Terminal window
# Compare file counts
find /restore/staging -type f | wc -l
# Verify checksums if original checksums are available
sha256sum -c /data/checksums/finance-checksums.txt
# Spot-check file types (confirm files identify correctly and are not truncated)
file /restore/staging/finance/reports/q4-2024.xlsx
  6. Move restored files to final location if using alternate restore:
Terminal window
# Move with preserved permissions
rsync -avh --remove-source-files /restore/staging/finance/ /data/finance/
  7. Adjust permissions if necessary. Restored files inherit permissions from backup, which may not match current requirements:
Terminal window
# Restore original ownership
chown -R finance-svc:finance-team /data/finance/reports/
# Apply standard permissions
chmod -R 750 /data/finance/reports/
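The count, checksum, and spot-check commands above can be wrapped into one repeatable verification step. A generic sketch; the restored directory, expected count, and checksum file are supplied by the operator:

```shell
#!/bin/sh
# Spot-check a restored tree against an expected file count and checksum list.
# Usage: verify_restore <restored_dir> <expected_count> [checksum_file]
verify_restore() {
  dir=$1; expected=$2; sums=$3
  actual=$(find "$dir" -type f | wc -l | tr -d ' ')
  if [ "$actual" -ne "$expected" ]; then
    echo "FAIL: expected $expected files, found $actual"
    return 1
  fi
  # Checksum file paths are relative to the restored directory.
  if [ -n "$sums" ] && ! (cd "$dir" && sha256sum --quiet -c "$sums"); then
    echo "FAIL: checksum mismatch"
    return 1
  fi
  echo "OK: $actual files verified"
}
```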

Checkpoint: Files restored to correct location, integrity verified, permissions appropriate, users can access files.

Phase 2B: Full system restore

Objective: Recover complete operating system, applications, and data to operational state
Timeframe: 2-24 hours depending on system size and backup location

Full system restoration applies when the operating system is unrecoverable, hardware has failed, or a complete rollback to a previous state is required. This procedure assumes image-based or block-level backups exist.

Phase 2B-S: Standard full restore

Use standard restore when recovering from hardware failure, corruption, or migration rollback where no security compromise occurred.

  1. Provision the recovery target:

    For virtual machines:

- Create new VM matching original specifications
- CPU: equal or greater core count
- Memory: equal or greater allocation
- Storage: equal or greater capacity
- Network: same VLAN/subnet as original

For physical servers:

- Install replacement hardware or repair failed components
- Configure RAID controller to match original configuration
- Verify boot device is accessible
  2. Boot the recovery target from backup recovery media. Most backup solutions provide bootable ISO images:

    • Veeam: Boot from Veeam Recovery Media ISO
    • Acronis: Boot from Acronis Bootable Media
    • Restic/Borg: Boot from live Linux distribution with backup tools installed
    • Windows Server Backup: Boot from Windows installation media, select “Repair your computer”
  3. Connect to backup storage from the recovery environment:

Terminal window
# Mount network backup repository
mount -t nfs backup-server:/repository /mnt/backup
# Or mount CIFS/SMB share
mount -t cifs //backup-server/repository /mnt/backup -o user=backup-svc
# For cloud storage, configure credentials
export AWS_ACCESS_KEY_ID="key"
export AWS_SECRET_ACCESS_KEY="secret"
restic -r s3:s3.amazonaws.com/bucket-name snapshots
  4. Select the restore point identified in Phase 1 and begin restoration:
Terminal window
# Restic: restore entire system
restic restore abc123 --target /mnt/target-disk
# For image-based restore (Veeam, Acronis), use GUI:
# - Select backup file/repository
# - Choose restore point by date
# - Map disks (source disk -> target disk)
# - Begin restore
  5. Monitor restore progress. For large systems, expect:
Restore throughput estimates:
Local disk to local disk: 100-300 MB/s (1 TB in 1-3 hours)
Network (1 Gbps): 80-100 MB/s (1 TB in 3-4 hours)
Network (10 Gbps): 400-800 MB/s (1 TB in 20-45 min)
Cloud storage: 50-200 MB/s (1 TB in 2-6 hours)
Tape: 100-400 MB/s (1 TB in 1-3 hours)
  6. After restore completes, reconfigure bootloader if necessary:
Terminal window
# Linux: reinstall GRUB
grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg
# Windows: repair boot configuration
bootrec /fixmbr
bootrec /fixboot
bootrec /rebuildbcd
  7. Boot the restored system. Do not connect to production network until verification is complete.

  8. Verify restored system state:

    • Operating system boots successfully
    • Core services start automatically
    • File systems mount correctly
    • Network configuration is appropriate (may need adjustment for new hardware)
    • Application services respond

Phase 2B-I: Immutable backup restore

Use immutable restore when recovering from ransomware, malicious deletion, or when standard backups may have been compromised. Immutable backups are write-once copies stored on media or in repositories that prevent modification or deletion for a defined retention period.

  1. Access the air-gapped or immutable backup repository. This requires physical access or out-of-band management depending on implementation:

    For air-gapped tape:

- Retrieve tapes from secure offsite storage
- Load tapes into isolated restore server (not connected to production network)
- Verify tape seals and chain of custody documentation

For immutable cloud storage (S3 Object Lock, Azure Immutable Blob):

Terminal window
# Verify object lock status
aws s3api get-object-retention --bucket backup-bucket --key backup-2024-01-15.tar
# Access requires privileged credentials stored separately from production
# Retrieve credentials from secure vault or break-glass procedure

For immutable disk repository (with WORM or retention lock):

Terminal window
# Connect to isolated management network
# Access repository through dedicated management interface
# Verify retention lock is active and backup predates incident
  2. Verify the selected backup predates the security incident by at least 24 hours. Check backup timestamps against incident timeline:
Terminal window
# List available immutable backups with timestamps
restic -r /mnt/immutable-repo snapshots --json |
jq '.[] | select(.time < "2024-01-14T00:00:00Z") | {id, time, paths}'
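The 24-hour margin check can be automated so the comparison against the incident timeline is not done by eye. A sketch assuming GNU date and ISO 8601 UTC timestamps:

```shell
#!/bin/sh
# Check that a backup timestamp predates the incident by at least 24 hours.
# Timestamps are ISO 8601 UTC; requires GNU date (assumption).
predates_incident() {
  backup_ts=$1; incident_ts=$2
  backup_epoch=$(date -u -d "$backup_ts" +%s)
  incident_epoch=$(date -u -d "$incident_ts" +%s)
  margin=$(( incident_epoch - backup_epoch ))
  if [ "$margin" -ge 86400 ]; then
    echo "SAFE: backup is $(( margin / 3600 ))h before incident"
  else
    echo "UNSAFE: only $(( margin / 3600 ))h margin, select an earlier snapshot"
    return 1
  fi
}
```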
  3. Provision a clean recovery environment isolated from production:
Recovery environment requirements:
- Separate VLAN with no production network connectivity
- Freshly installed hypervisor or clean physical hardware
- No domain join until validation complete
- Antimalware with current definitions
- Network monitoring active
  4. Restore to the isolated environment following Phase 2B-S procedures.

  5. Before connecting to production, perform security validation:

Terminal window
# Scan restored files for known ransomware signatures
clamscan -r /mnt/restored-system --infected --log=/var/log/restore-scan.log
# Check for known ransomware file extensions
find /mnt/restored-system -type f \( -name "*.encrypted" -o -name "*.locked" \
-o -name "*.crypted" -o -name "*.crypt" \) -print
# Verify critical executable integrity against known-good hashes
sha256sum /mnt/restored-system/Windows/System32/cmd.exe
# Compare against published Microsoft hashes or pre-incident baseline
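The extension sweep above can be packaged as a pass/fail gate for the isolated environment. A sketch; the extension list is illustrative and should track current threat intelligence:

```shell
#!/bin/sh
# Fail if any file under the given tree carries a known ransomware extension.
# The extension list is illustrative; extend it from current threat intel.
scan_ransomware_extensions() {
  root=$1
  hits=$(find "$root" -type f \( -name '*.encrypted' -o -name '*.locked' \
         -o -name '*.crypted' -o -name '*.crypt' \) | wc -l | tr -d ' ')
  if [ "$hits" -gt 0 ]; then
    echo "FAIL: $hits suspicious file(s) found"
    return 1
  fi
  echo "OK: no known ransomware extensions under $root"
}
```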
  6. If validation passes, coordinate with security team to reconnect restored system to production following containment confirmation.

Isolation requirement

Never restore from immutable backups directly to production infrastructure during a security incident. Restored systems must be validated in isolation first. Premature reconnection risks re-infection or spread of undetected malware.

Checkpoint: System restored, boots successfully, services operational, security validation passed (for immutable restore).

Phase 2C: Database recovery

Objective: Restore database to consistent state at specific point in time
Timeframe: 30 minutes to 12 hours depending on database size and recovery type

Database recovery requires coordination between backup restoration and database engine recovery mechanisms. The procedure differs by database platform and recovery scenario.

                      DATABASE RECOVERY TIMELINE

   Full Backup      Transaction Logs         Target Recovery
   (Sunday 02:00)   (Continuous)             Point
        |                    |                     |
        v                    v                     v
   +---------+   +-----------------------+   +-----------+
   |         |   | [ ][ ][ ][ ][ ][ ]    |   |           |
   |  FULL   |   |     LOG BACKUPS       |   |  RESTORE  |
   | BACKUP  |   |  (15 min interval)    |   |  TARGET   |
   |         |   | [ ][ ][ ][ ][ ][ ]    |   |           |
   +----+----+   +-----------+-----------+   +-----------+
        |                    |
        |                    +-- Tuesday 14:30 (Corruption detected)
        |                    |
        |                    +-- Tuesday 14:15 (Last valid log)
        |                    |
        |                    +-- Tuesday 14:00 (Target RPO)
        |
        +--------->----------+
                             |
        [ PROCESS: Full Restore -> Log Replay -> Point-in-Time ]

Figure 2: Point-in-time database recovery using full backup plus transaction logs

PostgreSQL recovery

  1. Stop the database service if running:
Terminal window
systemctl stop postgresql
  2. Clear or move the existing data directory:
Terminal window
mv /var/lib/postgresql/14/main /var/lib/postgresql/14/main.corrupted
mkdir /var/lib/postgresql/14/main
chown postgres:postgres /var/lib/postgresql/14/main
chmod 700 /var/lib/postgresql/14/main
  3. Restore the base backup:
Terminal window
# From pg_basebackup archive
tar -xzf /backup/base/base-2024-01-14.tar.gz -C /var/lib/postgresql/14/main
# Or from Restic/Borg (restored paths are recreated relative to --target)
restic restore abc123 --target / --include /var/lib/postgresql/14/main
  4. Configure recovery parameters for point-in-time recovery. Append to /var/lib/postgresql/14/main/postgresql.auto.conf:
restore_command = 'cp /backup/wal/%f %p'
recovery_target_time = '2024-01-15 14:00:00+00'
recovery_target_action = 'promote'
  5. Create the recovery signal file:
Terminal window
touch /var/lib/postgresql/14/main/recovery.signal
chown postgres:postgres /var/lib/postgresql/14/main/recovery.signal
  6. Start PostgreSQL:
Terminal window
systemctl start postgresql

Monitor recovery progress in logs:

Terminal window
tail -f /var/log/postgresql/postgresql-14-main.log
  7. Verify recovery completion and data consistency:
-- Check recovery status
SELECT pg_is_in_recovery(); -- Should return 'f' after recovery completes
-- Verify timeline
SELECT timeline_id FROM pg_control_checkpoint();
-- Run application-specific consistency checks
SELECT COUNT(*) FROM critical_table;

SQL Server recovery

  1. Connect to SQL Server Management Studio or sqlcmd with sysadmin privileges.

  2. Identify the backup chain:

-- List backup history
SELECT
database_name,
backup_start_date,
backup_finish_date,
type,
backup_size / 1024 / 1024 AS size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'ProductionDB'
AND backup_start_date > DATEADD(day, -7, GETDATE())
ORDER BY backup_start_date DESC;
  3. Restore the full backup with NORECOVERY to allow subsequent log restores:
RESTORE DATABASE ProductionDB
FROM DISK = 'D:\Backup\ProductionDB_Full_20240114.bak'
WITH NORECOVERY,
MOVE 'ProductionDB' TO 'E:\Data\ProductionDB.mdf',
MOVE 'ProductionDB_log' TO 'F:\Log\ProductionDB_log.ldf',
REPLACE;
  4. Restore differential backup if available:
RESTORE DATABASE ProductionDB
FROM DISK = 'D:\Backup\ProductionDB_Diff_20240115.bak'
WITH NORECOVERY;
  5. Restore transaction logs up to target recovery time:
-- Restore all logs except the last
RESTORE LOG ProductionDB
FROM DISK = 'D:\Backup\ProductionDB_Log_20240115_1200.trn'
WITH NORECOVERY;
RESTORE LOG ProductionDB
FROM DISK = 'D:\Backup\ProductionDB_Log_20240115_1215.trn'
WITH NORECOVERY;
-- Restore final log with point-in-time target
RESTORE LOG ProductionDB
FROM DISK = 'D:\Backup\ProductionDB_Log_20240115_1400.trn'
WITH RECOVERY,
STOPAT = '2024-01-15 14:00:00';
  6. Verify database state:
-- Check database is online
SELECT name, state_desc FROM sys.databases WHERE name = 'ProductionDB';
-- Run consistency check
DBCC CHECKDB ('ProductionDB') WITH NO_INFOMSGS;

MySQL/MariaDB recovery

  1. Stop the database service:
Terminal window
systemctl stop mariadb
  2. Clear the data directory:
Terminal window
mv /var/lib/mysql /var/lib/mysql.corrupted
mkdir /var/lib/mysql
chown mysql:mysql /var/lib/mysql
  3. Restore from physical backup (if using Mariabackup or Percona XtraBackup):
Terminal window
# Prepare the backup
mariabackup --prepare --target-dir=/backup/full-2024-01-14
# Restore
mariabackup --copy-back --target-dir=/backup/full-2024-01-14
chown -R mysql:mysql /var/lib/mysql
  4. For point-in-time recovery, apply binary logs after restoration:
Terminal window
# Start MySQL to apply logs
systemctl start mariadb
# Apply binary logs up to target time
mysqlbinlog --stop-datetime="2024-01-15 14:00:00" \
/backup/binlog/mariadb-bin.000042 \
/backup/binlog/mariadb-bin.000043 | mysql -u root -p
  5. Verify recovery:
-- Check tables
CHECK TABLE critical_table;
-- Verify row counts match expectations
SELECT COUNT(*) FROM critical_table;

Checkpoint: Database online, consistency checks pass, application can connect, data matches expected state for recovery point.

Phase 3: Validation and handoff

Objective: Confirm recovery success and return systems to production operation
Timeframe: 30 minutes to 4 hours

  1. Execute the validation checklist appropriate to recovery type:

    File recovery validation:

[ ] File count matches expected
[ ] File sizes are correct
[ ] Files are readable (spot check 5-10 files)
[ ] Checksums match (if baseline available)
[ ] Permissions allow appropriate access
[ ] Owner/requester confirms content is correct

System recovery validation:

[ ] System boots without errors
[ ] All file systems mount correctly
[ ] Network connectivity established
[ ] Authentication services functional
[ ] Core application services running
[ ] Application responds to health checks
[ ] Database connectivity verified
[ ] Scheduled tasks configured and enabled
[ ] Monitoring agent reporting
[ ] Antimalware running with current definitions

Database recovery validation:

[ ] Database online and accepting connections
[ ] DBCC CHECKDB or equivalent passes
[ ] Application can connect and query
[ ] Transaction counts match expectations
[ ] Foreign key relationships intact
[ ] Indexes present and valid
[ ] Replication/mirroring re-established (if applicable)
  2. Document the recovery in the backup log:
Recovery Record
---------------
Date/time completed: 2024-01-15 16:45:00 UTC
Recovery lead: J. Smith
Request details:
- Requester: Finance team manager
- Authorisation: IT Manager approval #IR-2024-0142
- Reason: Accidental deletion of Q4 reports folder
Recovery details:
- Source backup: FileServer-Daily-2024-01-14-0200
- Backup timestamp: 2024-01-14 02:15:00 UTC
- Data volume: 2.4 GB (847 files)
- Restore duration: 12 minutes
- Target location: /data/finance/reports/ (original location)
Validation:
- File count verified: 847 files
- Requester confirmed data completeness
- No corruption detected
Notes:
- None
  3. Notify stakeholders of completion:
Subject: Recovery Complete - Finance Reports Data
The data recovery requested for the Finance Reports folder has been
completed successfully.
Summary:
- Data restored from backup dated 2024-01-14 02:15 UTC
- 847 files (2.4 GB) recovered to original location
- Finance team manager has verified data completeness
The data is now available at the original location:
\\fileserver\finance\reports\
If you discover any issues with the recovered data, please contact
the IT Service Desk immediately.
Reference: IR-2024-0142
  4. Update monitoring and alerting:

    • Confirm system is reporting to monitoring platform
    • Verify backup jobs are scheduled and will execute on schedule
    • Re-enable any alerts that were suppressed during recovery
  5. For system or database recovery, schedule follow-up backup:

    • Force immediate backup after recovery to establish new baseline
    • Verify backup completes successfully

Checkpoint: Recovery documented, stakeholders notified, monitoring active, backup schedule confirmed.

Communications

Update stakeholders at defined intervals throughout recovery operations.

Stakeholder | Initial notification | Update frequency | Final notification
Affected users | Within 15 minutes of starting | Every 2 hours for extended recoveries | On completion
IT management | Immediately for system recovery | On status change or every 4 hours | On completion with summary
Application owners | Within 30 minutes | On status change | On completion with validation results
Executive leadership | For outages over 4 hours | Daily | On completion

Initial notification template

Subject: Data Recovery in Progress - [System/Data Name]
A data recovery operation is currently in progress for [system/data name].
Started: [time]
Expected completion: [time estimate]
Reason: [brief description]
Impact during recovery:
- [List any service interruptions or limitations]
We will provide updates every [interval] hours.
For questions, contact the IT Service Desk.
Reference: [ticket number]

Completion notification template

Subject: Recovery Complete - [System/Data Name]
The recovery operation for [system/data name] has completed successfully.
Summary:
- Recovery point: [backup date/time]
- Duration: [time]
- Data recovered: [volume or description]
The [system/data] is now available for normal use.
[Any follow-up actions required by users]
Reference: [ticket number]

Evidence preservation

Maintain records of all recovery operations for audit and compliance purposes.

  1. Preserve the following for each recovery operation:

    • Original recovery request (email, ticket, or documented verbal request)
    • Authorisation record
    • Backup selection documentation (which backup, why selected)
    • Integrity verification results
    • Recovery command output or job logs
    • Validation checklist (completed)
    • Stakeholder notification records
  2. Retain recovery records according to data retention policy, typically:

    • Standard file recovery: 12 months
    • System recovery: 36 months
    • Recovery following security incident: 7 years
  3. Store recovery documentation in:

    • IT service management system (ticket attachment)
    • Backup operations log (central record)
    • Security incident record (if applicable)
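The retention periods in step 2 translate directly into disposal dates for each record. A sketch assuming GNU date; the recovery-type keys (file, system, security) are illustrative shorthand for the three policy lines above:

```shell
#!/bin/sh
# Compute the earliest disposal date for a recovery record, based on type.
# Retention periods mirror the policy above; requires GNU date (assumption).
disposal_date() {
  completed=$1; recovery_type=$2
  case $recovery_type in
    file)     retention="12 months" ;;
    system)   retention="36 months" ;;
    security) retention="84 months" ;;   # 7 years
    *)        echo "unknown recovery type: $recovery_type" >&2; return 1 ;;
  esac
  date -u -d "$completed + $retention" +%Y-%m-%d
}

disposal_date 2024-01-15 system
```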

SaaS and cloud application recovery

Recovery from cloud-hosted applications follows provider-specific procedures rather than infrastructure restore.

Microsoft 365 recovery

Microsoft 365 provides native retention and recovery capabilities. For deleted content within retention periods:

SharePoint/OneDrive file recovery:
1. Navigate to site or OneDrive recycle bin
2. Select items to restore
3. Click "Restore" to return to original location
Retention periods:
- First-stage recycle bin: 93 days
- Second-stage recycle bin: 93 days total (admin access required)
- Recoverable Items folder (Exchange): 14-30 days (configurable)

For recovery beyond retention periods or tenant-level disasters, use third-party Microsoft 365 backup solutions (Veeam for Microsoft 365, Commvault, Rubrik) following those products’ restore procedures.

Google Workspace recovery

Drive file recovery:
1. Navigate to drive.google.com/drive/trash
2. Select items to restore
3. Right-click > Restore
Admin recovery (for deleted users):
1. Admin console > Directory > Users > deleted users
2. Select user > Restore
3. Data restores to restored user account
Retention:
- User trash: 25 days
- Admin trash: 25 days after user trash expiry
- Vault retention: per retention policy (can be indefinite)

General SaaS recovery

For other SaaS applications:

  1. Check application’s native backup/export capabilities
  2. Check recycle bin or trash functionality
  3. Review audit logs for deletion events
  4. Contact vendor support for recovery options
  5. Restore from third-party SaaS backup if deployed

Troubleshooting

Symptom | Cause | Resolution
Backup not found for requested date | Backup failed on that date, or retention expired | Query backup logs to find nearest available backup; may need to accept different recovery point
Backup integrity check fails | Backup corrupted during storage or media failure | Select alternative restore point; if multiple backups corrupted, investigate storage infrastructure
Restore runs but database won’t start | Transaction logs incomplete or corrupt | Restore to point before log corruption; accept data loss from missing transactions
Restored files have wrong permissions | Backup agent ran as different user or ACLs changed | Manually reset permissions; document expected permissions for future recoveries
Full system restore boots but services fail | Configuration drift since backup; dependencies missing | Compare current and backup configurations; restore missing dependencies
Database consistency check fails after restore | Incomplete restore or corruption in backup | Re-attempt restore; if repeated failures, restore from different backup and manually replay critical transactions
Restore completes but users report missing data | Wrong backup selected; data not in backup scope | Verify backup contents before restore; check if data was in excluded paths
Network-based restore extremely slow | Network congestion or misconfiguration | Verify network path throughput; consider restoring to local staging then copying
Immutable backup inaccessible | Credentials expired or air-gap access procedure unclear | Follow break-glass procedure for immutable backup access credentials
Cloud restore quota exceeded | Provider limits on restore bandwidth or operations | Stagger restore operations; contact provider for quota increase
Restored VM won’t boot in different hypervisor | Hardware abstraction layer incompatibility | Use P2V/V2V conversion tools; reconfigure drivers post-restore
Tape read errors during restore | Media degradation or drive alignment | Try alternate tape drive; if repeated failures, tape may be unrecoverable

See also