
System Decommissioning

System decommissioning retires technology components that no longer serve organisational needs, ensuring data preservation, dependency resolution, and resource recovery. This task applies when replacing legacy systems, closing projects, consolidating infrastructure, or eliminating redundant services. Successful decommissioning prevents orphaned data, broken integrations, wasted licence spend, and security exposure from unmonitored systems.

Prerequisites

| Requirement | Detail |
| --- | --- |
| Approval | Written decommission approval from system owner and IT leadership |
| Replacement | Successor system operational or explicit decision to discontinue function |
| Timeline | Minimum 30 days for standard systems, 90 days for business-critical systems |
| Access | Administrative access to target system, identity provider, monitoring tools, and CMDB |
| Documentation | Current system documentation including architecture diagrams and integration inventory |
| Backup | Verified backup completed within 48 hours of decommission start |
| Communication | Notification template approved by communications team |

Confirm the decommission window does not conflict with critical business periods such as financial year-end, audit cycles, or emergency response activations. Verify that replacement systems have completed user acceptance testing and have been in production for at least 14 days without critical incidents before proceeding.

Regulatory holds

Before proceeding, verify with legal and compliance teams that no litigation holds, regulatory investigations, or audit requirements mandate data preservation beyond standard retention. Systems under legal hold cannot be decommissioned until the hold is released.

Procedure

Phase 1: Discovery and planning

  1. Extract the complete dependency inventory from configuration management and monitoring systems. Query your CMDB for all configuration items with relationships to the target system:
SELECT ci.name, ci.type, rel.relationship_type, rel.direction
FROM configuration_items ci
JOIN ci_relationships rel ON ci.id = rel.related_ci_id
WHERE rel.primary_ci_id = '[TARGET_SYSTEM_ID]'
ORDER BY rel.relationship_type;

Document each dependency in the decommission register with the owning team, impact assessment, and migration status.

  2. Identify all integration points by analysing network traffic, API logs, and authentication records for the 90-day period preceding the decommission decision. Export firewall logs showing connections to and from the system:
# Extract unique source/destination pairs for target system IP
grep "192.0.2.50" /var/log/firewall/connections.log | \
awk '{print $3, $5}' | sort | uniq -c | sort -rn > integration_analysis.txt

Cross-reference discovered connections against the documented integration inventory. Undocumented connections indicate shadow integrations requiring investigation before proceeding.
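
The cross-reference can be mechanised with `comm`. A minimal sketch follows; the inline sample data and the file names `discovered_peers.txt` and `documented_peers.txt` are illustrative stand-ins for `integration_analysis.txt` and your inventory export:

```shell
# Build sorted, de-duplicated peer lists (sample data stands in for real logs)
sort -u > discovered_peers.txt <<'EOF'
192.0.2.10
192.0.2.20
198.51.100.7
EOF
sort -u > documented_peers.txt <<'EOF'
192.0.2.10
192.0.2.20
EOF
# comm -23 prints lines unique to the first file: the shadow integrations
comm -23 discovered_peers.txt documented_peers.txt > shadow_integrations.txt
cat shadow_integrations.txt
```

Anything left in `shadow_integrations.txt` needs an owner identified before disconnection proceeds.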

  3. Compile the user population by extracting authentication records from the identity provider. For systems using SAML or OIDC federation, query the identity provider’s application assignment:
# Azure AD example: export users assigned to application
az ad app show --id [APP_ID] --query "assignedUsers[].userPrincipalName" -o tsv
# Keycloak example: list users with role assignment
kcadm.sh get-roles -r [REALM] --cid [CLIENT_ID] --available

Classify users as active (authenticated within 30 days), dormant (31-90 days), or inactive (over 90 days). Active users require direct notification and transition support.
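
The 30/90-day classification can be expressed as a small awk filter. The CSV shape (`user,days_since_last_auth`) and file names are illustrative, not an output format of the identity providers above:

```shell
# Sample export: one user per line, days since last authentication
cat > last_auth.csv <<'EOF'
alice,12
bob,45
carol,120
EOF
# Apply the active / dormant / inactive thresholds from the text
awk -F, '{
  if ($2 <= 30)      cls = "active"
  else if ($2 <= 90) cls = "dormant"
  else               cls = "inactive"
  print $1 "," cls
}' last_auth.csv > user_classification.csv
cat user_classification.csv
```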

  4. Calculate the data disposition requirements by inventorying all data stores associated with the system. For database systems, extract schema and volume information:
SELECT table_schema, table_name,
ROUND(data_length/1024/1024, 2) AS data_mb,
ROUND(index_length/1024/1024, 2) AS index_mb
FROM information_schema.tables
WHERE table_schema = '[DATABASE_NAME]'
ORDER BY data_length DESC;

Map each data store against retention requirements from the data retention schedule. Data exceeding retention periods requires secure deletion; data within retention periods requires archiving.
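
One way to mechanise the mapping, assuming a two-column retention schedule (`store,retention_years`); the file names and sample data are hypothetical, and stores missing from the schedule are flagged rather than guessed:

```shell
# Hypothetical retention schedule and data store inventory
cat > retention_schedule.csv <<'EOF'
orders,7
sessions,0
EOF
cat > data_stores.txt <<'EOF'
orders
sessions
audit_log
EOF
# First pass loads the schedule; second pass maps each store or flags it
awk -F, 'NR==FNR { retain[$1] = $2; next }
  { print $1 "," (($1 in retain) ? "retain-" retain[$1] "y" : "UNMAPPED") }' \
  retention_schedule.csv data_stores.txt > disposition_map.csv
cat disposition_map.csv
```

An `UNMAPPED` store needs a ruling from records management before archiving or deletion.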

  5. Create the decommission project plan with explicit milestones, responsible parties, and rollback criteria. The plan must include:

    | Phase | Duration | Exit criteria |
    | --- | --- | --- |
    | Discovery | 5-10 days | All dependencies documented, users identified |
    | Notification | 14-30 days | All stakeholders notified, objections resolved |
    | Migration | 7-60 days | All active users transitioned, data archived |
    | Disconnection | 1-3 days | All integrations disconnected, access revoked |
    | Shutdown | 1 day | System offline, services stopped |
    | Disposition | 7-30 days | Hardware disposed, licences recovered |

Phase 2: Stakeholder notification

  1. Send initial notification to all identified stakeholders 30 days before the planned shutdown date for standard systems, or 90 days for business-critical systems. The notification must include:
Subject: System Decommissioning Notice - [SYSTEM_NAME]
[SYSTEM_NAME] will be decommissioned on [DATE].
Impact: [Specific functions being retired]
Replacement: [Successor system name and access instructions]
Data: [What happens to existing data]
Action required: [Specific steps users must take by deadline]
Support: [Contact for questions and assistance]
Timeline:
- [DATE-30]: Last day for new data entry
- [DATE-14]: Migration assistance deadline
- [DATE-7]: Read-only access begins
- [DATE]: System offline
  2. Track notification acknowledgement by requiring active confirmation from system owners and key users. Create a tracking register:

    | Stakeholder | Role | Notified | Acknowledged | Objections |
    | --- | --- | --- | --- | --- |
    | [Name] | System owner | [Date] | [Date/Pending] | [None/Detail] |
    | [Name] | Integration owner | [Date] | [Date/Pending] | [None/Detail] |

    Escalate unacknowledged notifications after 7 days. Unresolved objections require review by the IT governance board before proceeding.

  3. Conduct transition support sessions for active user populations exceeding 20 users. Schedule training on the replacement system and provide documentation covering function mapping between old and new systems. Record attendance and provide materials to absent users within 48 hours.

  4. Send reminder notifications at 14 days, 7 days, and 24 hours before shutdown. Each reminder should restate the countdown to shutdown, the required actions, and the support contacts.
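
The notification calendar can be derived from the shutdown date rather than tracked by hand. A minimal sketch, assuming GNU `date` (the shutdown date is an example):

```shell
# Compute each notification date from the shutdown date
SHUTDOWN="2025-06-30"
for offset in 30 14 7 1; do
  printf 'T-%2d: %s\n' "$offset" "$(date -d "$SHUTDOWN - $offset days" +%Y-%m-%d)"
done
```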

Phase 3: Data preservation

  1. Execute final data backup using the organisation’s standard backup tooling. Verify backup completion and integrity:
# Restic backup with verification
restic -r /backup/repository backup /data/[system] --tag decommission-final
restic -r /backup/repository check --read-data-subset=10%
# Record snapshot ID for future reference
restic -r /backup/repository snapshots --tag decommission-final --json > backup_manifest.json

Store the backup manifest in the decommission documentation package.

  2. Archive data requiring long-term retention according to the data retention schedule. Export data to the designated archive storage with appropriate metadata:
# Database export for archival
pg_dump -Fc --file=[system]_archive_$(date +%Y%m%d).dump [database]
# Calculate checksum for integrity verification
sha256sum [system]_archive_*.dump > archive_checksums.txt
# Transfer to archive storage with retention metadata
aws s3 cp [system]_archive_*.dump s3://archive-bucket/[system]/ \
--metadata retention-years=7,classification=internal,decommission-date=$(date +%Y-%m-%d)
  3. Execute secure deletion for data exceeding retention requirements. Use cryptographic erasure for encrypted storage or multi-pass overwrite for unencrypted storage:
# Cryptographic erasure: destroy encryption keys
# (Ensure backup keys are destroyed from key management system)
# Multi-pass overwrite for unencrypted data (3-pass minimum)
shred -vfz -n 3 /data/[system]/expired_data/*

Document deletion with timestamps and method used. Obtain sign-off from data protection officer for personal data deletion.
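
A minimal, append-only record per deletion batch keeps the timestamps and method auditable. The tab-separated format, example path, and file name are illustrative:

```shell
# One line per deletion batch: UTC timestamp, method, target, approver
printf '%s\t%s\t%s\t%s\n' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  "shred-3pass" \
  "/data/example/expired_data" \
  "approver:DPO" >> deletion_log.txt
tail -1 deletion_log.txt
```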

  4. Generate the data disposition certificate documenting what data was archived, what was deleted, and what remains in backups. This certificate becomes part of the permanent decommission record:
DATA DISPOSITION CERTIFICATE
System: [SYSTEM_NAME]
Date: [DATE]
Archived:
- [Database/dataset]: [Volume], [Location], [Retention until]
Deleted:
- [Data category]: [Volume], [Method], [Date]
Retained in backup:
- [Backup ID]: [Location], [Retention until]
Certified by: [Name, Role]

Phase 4: Integration disconnection

  1. Disable inbound integrations by removing the system from load balancers, API gateways, and DNS records. For API endpoints, return informative error responses during the transition period rather than connection failures:
# Nginx configuration for decommissioned API
# default_type sets Content-Type for the returned body; add_header does not
# apply to 410 responses without the "always" flag
location /api/v1/[system] {
    default_type application/json;
    return 410 '{"error": "Service decommissioned", "replacement": "https://new-system.example.org/api/v2/", "documentation": "https://docs.example.org/migration/"}';
}
  2. Disconnect outbound integrations by removing API credentials and service accounts. Revoke API keys from the target system:
# Revoke API credentials in secrets manager
aws secretsmanager delete-secret --secret-id [system]/api-credentials \
--recovery-window-in-days 7
# Disable service account in identity provider
az ad sp update --id [SERVICE_PRINCIPAL_ID] --set accountEnabled=false

Notify integration partners 48 hours before disconnection with reference to replacement endpoints.

  3. Remove database connections by rotating credentials and updating firewall rules. Change database passwords even for systems being shut down to prevent reconnection during the disposition period:
-- Rotate all application credentials
ALTER USER [app_user] WITH PASSWORD '[new_random_password]';
-- Revoke connection privileges
REVOKE CONNECT ON DATABASE [database] FROM [app_user];
  4. Update service mesh and service discovery registrations to remove the decommissioned system:
# Consul service deregistration
consul services deregister -id=[system]-service
# Kubernetes service removal
kubectl delete service [system]-svc -n [namespace]
  5. Document each disconnection in the integration register with timestamp, method, and verification of successful disconnection. Connection attempts from dependent systems should now fail and trigger alerts rather than succeed.

The following diagram illustrates the integration disconnection sequence and verification points:

+----------------------------------------------------------------+
|                 INTEGRATION DISCONNECTION FLOW                 |
+----------------------------------------------------------------+
                                  |
                                  v
                       +----------------------+
                       | Identify active      |
                       | integrations         |
                       +----------+-----------+
                                  |
         +------------------------+------------------------+
         |                        |                        |
         v                        v                        v
+------------------+     +------------------+     +------------------+
| Inbound APIs     |     | Outbound         |     | Database         |
| & webhooks       |     | connections      |     | links            |
+--------+---------+     +--------+---------+     +--------+---------+
         |                        |                        |
         v                        v                        v
+------------------+     +------------------+     +------------------+
| Configure        |     | Revoke API       |     | Rotate           |
| 410 responses    |     | credentials      |     | credentials      |
+--------+---------+     +--------+---------+     +--------+---------+
         |                        |                        |
         v                        v                        v
+------------------+     +------------------+     +------------------+
| Remove DNS       |     | Disable          |     | Update           |
| & LB entries     |     | service acct     |     | firewall         |
+--------+---------+     +--------+---------+     +--------+---------+
         |                        |                        |
         +------------------------+------------------------+
                                  |
                                  v
                       +----------------------+
                       | Verify no active     |
                       | connections          |
                       +----------+-----------+
                                  |
                                  v
                       +----------------------+
                       | Document in          |
                       | integration register |
                       +----------------------+

Figure 1: Integration disconnection sequence with verification checkpoint

Phase 5: Access removal

  1. Disable user authentication by removing the application registration from the identity provider. This immediately prevents new logins while preserving audit history:
# Azure AD: disable application
az ad app update --id [APP_ID] --set signInAudience="None"
# Keycloak: disable client
kcadm.sh update clients/[CLIENT_ID] -r [REALM] -s enabled=false
# Okta: deactivate application
curl -X POST "https://[org].okta.com/api/v1/apps/[APP_ID]/lifecycle/deactivate" \
-H "Authorization: SSWS [API_TOKEN]"
  2. Terminate active sessions to force immediate logoff for any remaining connected users:
# Redis session store: delete all sessions for application
redis-cli KEYS "session:[system]:*" | xargs redis-cli DEL
# Database session store
DELETE FROM user_sessions WHERE application_id = '[SYSTEM_ID]';
  3. Revoke local accounts that exist independently of the identity provider. For systems with local user databases, disable accounts rather than deleting to preserve audit trails:
-- Disable all local accounts
UPDATE users SET
status = 'disabled',
disabled_date = CURRENT_TIMESTAMP,
disabled_reason = 'System decommissioned'
WHERE status = 'active';
  4. Remove group memberships and role assignments in the identity provider that granted access to the decommissioned system:
# Delete the application registration, removing its role assignments
az ad app delete --id [APP_ID]
# Clean up associated groups if single-purpose
az ad group delete --group [SYSTEM_ACCESS_GROUP]
  5. Document access removal with before and after snapshots of access configurations for audit purposes.
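
A before/after snapshot pair plus their diff makes the audit evidence concrete. The JSON content and file names are illustrative stand-ins for whatever your identity provider exports:

```shell
# Snapshots captured before and after the removal steps (sample content)
cat > access_before.json <<'EOF'
{"assignments": ["alice", "bob"]}
EOF
cat > access_after.json <<'EOF'
{"assignments": []}
EOF
# A non-empty diff is the evidence that access actually changed
diff -u access_before.json access_after.json > access_removal.diff || true
test -s access_removal.diff && echo "access change captured"
```

File both snapshots and the diff alongside the access removal log.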

Phase 6: System shutdown

  1. Stop application services gracefully, allowing in-flight requests to complete:
# Systemd service stop; the wait is bounded by TimeoutStopSec in the unit file
systemctl stop [system].service
# Container orchestration scale-down
kubectl scale deployment [system] --replicas=0 -n [namespace]
# Verify processes terminated
pgrep -f [system] && echo "WARNING: Processes still running"
  2. Stop supporting services in reverse dependency order: application servers before caches, and caches before database servers:
# Stop in reverse dependency order
systemctl stop [system]-app.service
systemctl stop [system]-cache.service
systemctl stop [system]-db.service
  3. Disable automatic restart by removing the system from orchestration and automation:
# Disable systemd service
systemctl disable [system].service
# Remove from container orchestration
kubectl delete deployment [system] -n [namespace]
kubectl delete configmap [system]-config -n [namespace]
kubectl delete secret [system]-secrets -n [namespace]
# Remove from configuration management
# (Puppet/Chef/Ansible: remove node classification or inventory entry)
  4. Capture final system state including running configuration, installed packages, and resource utilisation:
# Export final configuration state
mkdir -p /decommission/[system]/final-state
# Installed packages
dpkg --get-selections > /decommission/[system]/final-state/packages.txt
# Running configuration
cp -r /etc/[system] /decommission/[system]/final-state/config/
# Resource utilisation snapshot
df -h > /decommission/[system]/final-state/disk.txt
free -h > /decommission/[system]/final-state/memory.txt
  5. Remove monitoring and alerting configurations to prevent false alerts from the offline system:
# Remove from Prometheus/Grafana
kubectl delete servicemonitor [system]-metrics -n monitoring
# Remove from alerting
# Delete or comment out alert rules referencing the system
# Remove from uptime monitoring
curl -X DELETE "https://monitoring.example.org/api/checks/[CHECK_ID]"
  6. Update CMDB status to decommissioned with shutdown timestamp:
UPDATE configuration_items
SET status = 'decommissioned',
decommission_date = CURRENT_TIMESTAMP,
decommission_ticket = '[TICKET_ID]'
WHERE ci_id = '[SYSTEM_CI_ID]';

Phase 7: Hardware and resource disposition

  1. Determine hardware disposition path based on asset age, condition, and organisational policy:

    | Asset age | Condition | Disposition |
    | --- | --- | --- |
    | Under 3 years | Functional | Redeploy internally |
    | 3-5 years | Functional | Donate or sell |
    | Over 5 years | Any | Recycle |
    | Any | Non-functional | Recycle |
  2. Perform secure media sanitisation before any disposition path. For storage media containing data classified as Internal or higher, use cryptographic erasure or physical destruction:

# NVMe secure erase
nvme format /dev/nvme0n1 --ses=1
# SATA secure erase
hdparm --user-master u --security-set-pass [TEMP_PASSWORD] /dev/sda
hdparm --user-master u --security-erase [TEMP_PASSWORD] /dev/sda
# Verification: attempt to recover data (should find nothing)
foremost -t all -i /dev/sda -o /tmp/recovery-test

For highly sensitive data or when cryptographic erasure cannot be verified, engage certified destruction vendor and obtain destruction certificates.

  3. Recover software licences by deactivating licence keys and updating the licence management system:
# Document licence recovery
echo "[DATE] [SYSTEM] - [LICENCE_TYPE] - [KEY] - Recovered" >> licence_recovery.log
# Update licence management system
curl -X POST "https://licence-mgmt.example.org/api/licences/[ID]/deactivate" \
-H "Authorization: Bearer [TOKEN]" \
-d '{"reason": "system_decommission", "ticket": "[TICKET_ID]"}'
  4. Release cloud resources and terminate billing:
# AWS resource cleanup
aws ec2 terminate-instances --instance-ids [INSTANCE_IDS]
aws rds delete-db-instance --db-instance-identifier [DB_ID] --skip-final-snapshot
aws s3 rb s3://[bucket-name] --force
# Azure resource cleanup
az group delete --name [RESOURCE_GROUP] --yes --no-wait
# Verify billing stops (check after next billing cycle)
  5. Update asset register with disposition details:
UPDATE assets
SET status = 'disposed',
disposition_date = CURRENT_TIMESTAMP,
disposition_method = '[redeployed/donated/recycled/destroyed]',
disposition_reference = '[ticket/certificate ID]'
WHERE system_id = '[SYSTEM_ID]';

Phase 8: Documentation and closure

  1. Compile the decommission package containing all documentation generated during the process:
/decommission/[system]/
├── approval/
│   ├── decommission_request.pdf
│   └── approval_confirmation.pdf
├── discovery/
│   ├── dependency_inventory.csv
│   ├── user_population.csv
│   └── integration_analysis.txt
├── notifications/
│   ├── stakeholder_tracking.csv
│   └── notification_templates/
├── data/
│   ├── disposition_certificate.pdf
│   ├── archive_manifest.json
│   └── deletion_log.txt
├── technical/
│   ├── integration_disconnection_log.csv
│   ├── access_removal_log.csv
│   └── final-state/
├── disposition/
│   ├── hardware_disposition.csv
│   ├── sanitisation_certificates/
│   └── licence_recovery.log
└── closure/
    ├── lessons_learned.md
    └── signoff.pdf
  2. Conduct lessons learned review within 14 days of final disposition, documenting process improvements for future decommissions:

    | Category | Finding | Recommendation |
    | --- | --- | --- |
    | Discovery | [What was missed] | [How to prevent] |
    | Timeline | [What took longer] | [Better estimates] |
    | Communication | [What caused confusion] | [Template improvements] |
    | Technical | [What failed] | [Process changes] |
  3. Obtain closure sign-off from the system owner, data owner, and IT operations lead confirming all decommission activities completed satisfactorily.

  4. Archive the decommission package according to the documentation retention schedule. Standard retention for decommission records is 7 years from disposition date.

  5. Close the decommission project ticket with summary of activities, final costs, and lessons learned reference.

Verification

Complete decommissioning requires verification across multiple dimensions:

System availability verification confirms the system is no longer accessible:

# DNS resolution should fail or return nothing
dig +short [system].example.org
# Expected: no response
# HTTP endpoints should return 410 or connection refused
curl -I https://[system].example.org
# Expected: 410 Gone or connection refused
# Database connections should fail
psql -h [db-host] -U [user] -d [database] -c "SELECT 1"
# Expected: connection refused or authentication failed

Integration verification confirms no active connections persist:

# Check firewall logs for connection attempts
grep "192.0.2.50" /var/log/firewall/connections.log | \
grep -v "DENIED" | tail -20
# Expected: no successful connections after shutdown
# Check API gateway logs
grep "[system]" /var/log/api-gateway/access.log | \
grep "200\|201\|204" | tail -20
# Expected: no successful requests after shutdown

Data verification confirms preservation and deletion:

# Verify archive integrity
sha256sum -c archive_checksums.txt
# Expected: all files OK
# Verify backup accessibility
restic -r /backup/repository snapshots --tag decommission-final
# Expected: snapshot present and accessible
# Verify deletion (attempt recovery on sanitised media)
foremost -t all -i /dev/[device] -o /tmp/verify
ls -la /tmp/verify/
# Expected: empty or minimal artefacts

Documentation verification confirms complete records:

| Document | Location | Present |
| --- | --- | --- |
| Decommission approval | /decommission/[system]/approval/ | Yes/No |
| Dependency inventory | /decommission/[system]/discovery/ | Yes/No |
| Data disposition certificate | /decommission/[system]/data/ | Yes/No |
| Sanitisation certificate | /decommission/[system]/disposition/ | Yes/No |
| Closure sign-off | /decommission/[system]/closure/ | Yes/No |

Troubleshooting

| Symptom | Cause | Resolution |
| --- | --- | --- |
| Unknown integration discovered after shutdown | Shadow IT integration not in CMDB | Extend disconnection phase; analyse traffic logs for full 90-day period; update CMDB with discovered integration; notify integration owner |
| Users report lost access to replacement system | User migration incomplete | Cross-reference original user list against replacement system access; provision missing users; extend parallel running if migration still in progress |
| Dependent system errors after disconnection | Integration not properly migrated | Review error logs on dependent system; provide replacement endpoint configuration; consider temporary service restoration if critical |
| Data archive fails integrity check | Storage corruption or incomplete transfer | Re-transfer from backup; verify source backup integrity; if backup also corrupted, extend shutdown to attempt recovery |
| Hardware sanitisation verification fails | Incomplete erasure or drive failure | Re-run sanitisation; for persistent failures, engage physical destruction vendor |
| Licence key cannot be recovered | Key lost or vendor no longer active | Document as unrecoverable; update licence inventory; consider as sunk cost in lessons learned |
| Monitoring alerts persist after shutdown | Incomplete monitoring configuration removal | Audit all monitoring systems (infrastructure, application, synthetic); remove residual configurations |
| Cloud costs continue after decommission | Resources missed during cleanup | Audit cloud accounts for orphaned resources (snapshots, IP addresses, load balancers); terminate discovered resources |
| Legal hold discovered after data deletion | Incomplete pre-decommission check | Immediately notify legal; attempt data recovery from backups; document incident for compliance |
| System owner refuses closure sign-off | Concerns about data preservation or functionality | Schedule review meeting; address specific concerns; escalate to governance board if unresolved after 14 days |
| Post-decommission data request received | User needs data not migrated | Retrieve from archive or backup; assess whether archive indexing needs improvement for future requests |
| Vendor disputes licence recovery | Licence terms prohibit transfer or recovery | Review contract terms; engage procurement for clarification; document actual licence disposition |
| Replacement system performance issues blamed on decommissioning | User perception rather than technical cause | Document that the replacement system was validated before decommissioning; redirect to replacement system support |
| Audit finding for incomplete documentation | Documentation package missing required elements | Reconstruct missing documentation from logs and system records; update decommission checklist to prevent recurrence |

Rollback

System decommissioning includes limited rollback capability before final disposition. Rollback becomes progressively more difficult through each phase:

| Phase | Rollback complexity | Actions required |
| --- | --- | --- |
| Discovery and planning | Trivial | Cancel project |
| Stakeholder notification | Low | Send cancellation notice |
| Data preservation | Low | No rollback needed (preservation is non-destructive) |
| Integration disconnection | Medium | Restore credentials, DNS, firewall rules |
| Access removal | Medium | Re-enable application in identity provider |
| System shutdown | High | Restart services, re-enable monitoring, restore orchestration |
| Hardware disposition | Impossible if sanitised/destroyed | Requires full rebuild |

To roll back during phases 4-6, execute the restoration procedure:

# Restore DNS records
# (Restore from DNS backup or manually recreate)
# Restore firewall rules
# (Apply saved configuration)
# Re-enable identity provider application
az ad app update --id [APP_ID] --set signInAudience="AzureADMyOrg"
# Restart services
systemctl enable [system].service
systemctl start [system].service
# Re-enable monitoring
kubectl apply -f servicemonitor-[system].yaml
# Update CMDB
UPDATE configuration_items
SET status = 'active', decommission_date = NULL
WHERE ci_id = '[SYSTEM_CI_ID]';
# Notify stakeholders of rollback

Point of no return

Once hardware sanitisation begins in Phase 7, rollback requires complete system rebuild from backups. Ensure all stakeholders confirm readiness before proceeding past Phase 6.

The following diagram illustrates the decommissioning workflow with phase boundaries and rollback points:

+--------------------------------------------------------------------------+
|                          DECOMMISSION WORKFLOW                           |
+--------------------------------------------------------------------------+
       START
         |
         v
+------------------+      +------------------+      +------------------+
| Phase 1          |      | Phase 2          |      | Phase 3          |
| DISCOVERY &      |----->| STAKEHOLDER      |----->| DATA             |
| PLANNING         |      | NOTIFICATION     |      | PRESERVATION     |
|                  |      |                  |      |                  |
| - Dependencies   |      | - 30/90 day      |      | - Final backup   |
| - Integrations   |      |   notice         |      | - Archive        |
| - Users          |      | - Tracking       |      | - Secure delete  |
| - Data inventory |      | - Training       |      | - Certificate    |
+--------+---------+      +--------+---------+      +--------+---------+
         |                         |                         |
 Rollback: Trivial           Rollback: Low             Rollback: Low
         |                         |                         |
         +-------------------------+-------------------------+
                                   |
                                   v
+------------------+      +------------------+      +------------------+
| Phase 4          |      | Phase 5          |      | Phase 6          |
| INTEGRATION      |----->| ACCESS           |----->| SYSTEM           |
| DISCONNECTION    |      | REMOVAL          |      | SHUTDOWN         |
|                  |      |                  |      |                  |
| - API disable    |      | - IdP disable    |      | - Stop services  |
| - Credential     |      | - Session term   |      | - Disable auto   |
|   revoke         |      | - Local account  |      |   restart        |
| - DB disconnect  |      |   disable        |      | - Final state    |
+--------+---------+      +--------+---------+      +--------+---------+
         |                         |                         |
 Rollback: Medium           Rollback: Medium           Rollback: High
         |                         |                         |
         +-------------------------+-------------------------+
                                   |
         ==========================+==========================
                   POINT OF NO RETURN BOUNDARY
         =====================================================
                                   |
                                   v
                          +------------------+      +------------------+
                          | Phase 7          |      | Phase 8          |
                          | HARDWARE &       |----->| DOCUMENTATION    |
                          | RESOURCE         |      | & CLOSURE        |
                          | DISPOSITION      |      |                  |
                          |                  |      | - Package        |
                          | - Sanitisation   |      |   compile        |
                          | - Licence        |      | - Lessons        |
                          |   recovery       |      |   learned        |
                          | - Cloud cleanup  |      | - Sign-off       |
                          +--------+---------+      +--------+---------+
                                   |                         |
                         Rollback: IMPOSSIBLE                v
                                                            END

Figure 2: Decommission workflow showing phases, activities, and rollback complexity

See also