
Post-Implementation Review

A post-implementation review (PIR) evaluates a completed change against its original objectives, documents what occurred during implementation, and captures lessons that inform future changes. The review bridges the gap between change completion and organisational learning by creating structured opportunities to assess outcomes while implementation details remain fresh. PIRs apply to all changes that warrant evaluation, from major infrastructure deployments affecting hundreds of users to targeted application updates that introduced unexpected complexity.

The review produces three outputs: an assessment of whether the change achieved its intended outcomes, a record of issues encountered and how they were resolved, and actionable improvements for change management, release processes, or technical practices. These outputs feed directly into the continual improvement register and knowledge base, transforming individual change experiences into organisational capability.

Prerequisites

Before initiating a post-implementation review, verify the following conditions are met.

Change record access
Read access to the change management system containing the original change request, risk assessment, implementation plan, and any amendments approved during execution. The change record number and final status must be confirmed.
Implementation documentation
Access to implementation logs, deployment records, communication transcripts, and any incident tickets raised during or immediately after the change window. For changes using CI/CD pipelines, access to pipeline execution logs and artifact repositories.
Monitoring data
Performance metrics, availability data, and error rates for affected services covering the baseline period before the change, the implementation window, and the stabilisation period after. Minimum 7 days post-implementation data for standard changes, 14 days for major changes.
Stakeholder availability
Confirmed attendance from the change owner, technical implementers, service owner, and affected team representatives. For major changes, include CAB representative and any third-party vendors involved in implementation.
Success criteria documentation
The original success criteria defined in the change request, including specific metrics, thresholds, and verification methods. If criteria were modified during implementation, both original and revised criteria must be available.
Authority to document
Write access to the knowledge management system for creating or updating articles, and to the continual improvement register for logging improvement actions.
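The post-implementation monitoring windows listed above can be sanity-checked before scheduling. A small sketch, assuming GNU `date`; the dates shown and the 7-day standard-change minimum are illustrative:

```shell
# Whole days between two ISO dates; assumes GNU date.
days_between() {
  start_s=$(date -u -d "$1" +%s)
  end_s=$(date -u -d "$2" +%s)
  echo $(( (end_s - start_s) / 86400 ))
}

# Confirm the stabilisation window meets the standard-change minimum.
span=$(days_between 2024-11-09 2024-11-16)
if [ "$span" -ge 7 ]; then
  echo "OK: $span days of post-implementation data"
else
  echo "INSUFFICIENT: only $span days (need 7)"
fi
```

For major changes, substitute the 14-day minimum in the comparison.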

Verify change record accessibility before scheduling:

Terminal window
# Query change management system for record completeness
curl -s "https://itsm.example.org/api/v1/changes/CHG0012847" \
  -H "Authorization: Bearer $ITSM_TOKEN" | \
  jq '{id: .number, status: .state, has_plan: (.implementation_plan != null),
       has_risk: (.risk_assessment != null), has_backout: (.backout_plan != null)}'

Expected output confirming complete record:

{
  "id": "CHG0012847",
  "status": "closed",
  "has_plan": true,
  "has_risk": true,
  "has_backout": true
}

Procedure

Schedule the review

  1. Determine review timing based on change category. Standard changes require PIR within 5 working days of closure. Major changes require PIR within 10 working days. Emergency changes require PIR within 3 working days, prioritising rapid learning from urgent situations.

  2. Identify required participants using the change record. Extract the change owner, technical lead, and implementation team members. For changes affecting multiple services, include each service owner. A PIR for a major infrastructure change affecting email, file storage, and authentication services requires representation from all three service areas.

  3. Calculate the meeting duration based on change complexity. Allow 30 minutes for standard changes with no incidents. Allow 60 minutes for standard changes with incidents or deviations. Allow 90 minutes for major changes. Allow 120 minutes for failed changes requiring detailed analysis.

  4. Send calendar invitations with the following information: change number, change title, implementation date, meeting purpose, and pre-read materials location. Include a link to the PIR data collection form if using structured input.

Subject: PIR - CHG0012847 - Email Gateway Migration
Post-implementation review for the email gateway migration
completed on 2024-11-08.
Change: CHG0012847
Implementation window: 2024-11-08 22:00-02:00 UTC
Status: Completed with minor incidents
Pre-read materials: https://docs.example.org/pir/CHG0012847/
Please review the implementation summary and come prepared
to discuss your observations.
  5. Create the PIR document shell in the knowledge management system, linking it to the change record. This establishes the documentation location before the meeting and ensures the record exists even if the meeting is delayed.
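The timing and duration rules in steps 1 and 3 can be captured in a small helper so scheduling stays consistent; a sketch, with the category names and values taken directly from the steps above:

```shell
# Map change category to PIR deadline (working days after closure).
pir_deadline_days() {
  case "$1" in
    standard)  echo 5 ;;
    major)     echo 10 ;;
    emergency) echo 3 ;;
    *)         echo "unknown category: $1" >&2; return 1 ;;
  esac
}

# Map category and outcome (clean|incidents|failed) to meeting minutes.
pir_duration_minutes() {
  case "$1:$2" in
    *:failed)           echo 120 ;;  # failed changes need detailed analysis
    standard:clean)     echo 30 ;;
    standard:incidents) echo 60 ;;
    major:*)            echo 90 ;;
    *)                  echo 60 ;;
  esac
}

echo "Major change: PIR due within $(pir_deadline_days major) working days,
book $(pir_duration_minutes major clean) minutes"
```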

Collect and prepare data

  1. Extract quantitative implementation metrics from monitoring systems. Gather availability percentages, error rates, response times, and transaction volumes for the affected services. Calculate the metrics for three periods: 7 days before implementation (baseline), the implementation window itself, and 7 days after implementation (stabilisation).
Terminal window
# Extract availability metrics for email gateway service
# Baseline period
curl -s "https://monitoring.example.org/api/v1/query_range" \
  --data-urlencode "query=avg_over_time(up{service=\"email-gateway\"}[1d])" \
  --data-urlencode "start=2024-11-01T00:00:00Z" \
  --data-urlencode "end=2024-11-08T00:00:00Z" \
  --data-urlencode "step=1d" | jq '.data.result[0].values'

# Post-implementation period
curl -s "https://monitoring.example.org/api/v1/query_range" \
  --data-urlencode "query=avg_over_time(up{service=\"email-gateway\"}[1d])" \
  --data-urlencode "start=2024-11-09T00:00:00Z" \
  --data-urlencode "end=2024-11-16T00:00:00Z" \
  --data-urlencode "step=1d" | jq '.data.result[0].values'
  2. Compile incident and problem records linked to the change. Query the service management system for any incidents raised during the implementation window or referencing the change number in the 14 days following implementation.
Terminal window
# Find incidents related to change (-G sends the query as GET parameters)
curl -s -G "https://itsm.example.org/api/v1/incidents" \
  -H "Authorization: Bearer $ITSM_TOKEN" \
  --data-urlencode "query=related_change=CHG0012847 OR \
(opened_at>=2024-11-08 AND opened_at<=2024-11-22 AND \
affected_service=email-gateway)" | \
  jq '.result[] | {number, short_description, priority, resolved_at}'
  3. Gather qualitative feedback from implementation participants. Send a brief survey or structured questions to technical staff involved in the change, asking about plan accuracy, unexpected challenges, tool effectiveness, and communication quality. Allow 48 hours for responses before the PIR meeting.

  4. Prepare the implementation timeline showing planned versus actual activities. Document each step from the implementation plan alongside what actually occurred, noting start times, completion times, and any deviations.

    | Planned activity            | Planned time | Actual time | Variance | Notes                                            |
    | --------------------------- | ------------ | ----------- | -------- | ------------------------------------------------ |
    | Begin maintenance window    | 22:00        | 22:00       | 0 min    | On schedule                                      |
    | Disable inbound mail flow   | 22:05        | 22:08       | +3 min   | DNS propagation slower than expected             |
    | Database backup             | 22:15        | 22:12       | -3 min   | Completed early                                  |
    | Apply gateway configuration | 22:45        | 23:15       | +30 min  | Certificate chain issue required troubleshooting |
    | Verification testing        | 23:30        | 00:05       | +35 min  | Delayed by configuration issue                   |
    | Enable inbound mail flow    | 00:00        | 00:45       | +45 min  | Cumulative delay                                 |
    | End maintenance window      | 02:00        | 01:30       | -30 min  | Completed within window                          |
  5. Assemble the data into a PIR preparation pack and distribute to participants 48 hours before the meeting. The pack includes: implementation timeline comparison, incident summary, monitoring metrics comparison, and feedback survey results.
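Once the baseline and stabilisation availability figures are in hand, the shift can be expressed in percentage points for the preparation pack. A minimal sketch; the fractions shown are illustrative, not taken from the change record:

```shell
# Express the change in average availability as percentage points,
# given two availability fractions (baseline, post-implementation).
availability_delta_pp() {
  awk -v base="$1" -v post="$2" 'BEGIN { printf "%+.3f\n", (post - base) * 100 }'
}

availability_delta_pp 0.9991 0.9998   # +0.070
```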

The data collection process draws from multiple sources that together provide a complete implementation picture:

+-----------------------------------------------------------------------+
|                           PIR DATA SOURCES                            |
+-----------------------------------------------------------------------+
|                                                                       |
|  +------------------+  +------------------+  +-----------------+      |
|  | Change Record    |  |   Monitoring     |  |   Incident      |      |
|  |                  |  |   Systems        |  |   Records       |      |
|  | - Original plan  |  |                  |  |                 |      |
|  | - Risk assessment|  | - Availability   |  | - Related INC   |      |
|  | - Approvals      |  | - Performance    |  | - Impact data   |      |
|  | - Amendments     |  | - Error rates    |  | - Resolution    |      |
|  +--------+---------+  +--------+---------+  +--------+--------+      |
|           |                     |                     |               |
|           +---------------------+---------------------+               |
|                                 |                                     |
|                        +--------v---------+                           |
|                        |                  |                           |
|                        | PIR Preparation  |                           |
|                        |      Pack        |                           |
|                        |                  |                           |
|                        +--------+---------+                           |
|                                 |                                     |
|           +---------------------+---------------------+               |
|           |                     |                     |               |
|  +--------v---------+  +--------v---------+  +--------v--------+      |
|  | Implementation   |  |  Participant     |  | Communication   |      |
|  |     Logs         |  |  Feedback        |  |    Records      |      |
|  |                  |  |                  |  |                 |      |
|  | - Pipeline runs  |  | - Survey results |  | - Bridge calls  |      |
|  | - Console output |  | - Observations   |  | - Email threads |      |
|  | - Timestamps     |  | - Suggestions    |  | - Chat logs     |      |
|  +------------------+  +------------------+  +-----------------+      |
|                                                                       |
+-----------------------------------------------------------------------+

Figure 1: Data sources feeding PIR preparation, combining quantitative metrics with qualitative feedback

Facilitate the review meeting

  1. Open the meeting by stating the change identifier, implementation date, and overall outcome. Confirm the meeting objective: to assess success, document lessons, and identify improvements. Remind participants that the review examines the change and process, not individual performance.

  2. Present the success criteria assessment. Walk through each criterion defined in the original change request, presenting the measured outcome and determining whether the criterion was met, partially met, or not met.

    For the email gateway migration example:

    | Success criterion            | Target                      | Measured           | Assessment |
    | ---------------------------- | --------------------------- | ------------------ | ---------- |
    | Mail delivery latency        | Under 30 seconds            | 12 seconds average | Met        |
    | Spam detection rate          | Above 98%                   | 99.2%              | Met        |
    | False positive rate          | Below 0.1%                  | 0.08%              | Met        |
    | User-reported issues         | Fewer than 10 in first week | 3 reported         | Met        |
    | Implementation within window | Complete by 02:00           | Complete at 01:30  | Met        |
  3. Review the implementation timeline, highlighting variances between planned and actual execution. For each significant variance (greater than 15 minutes or causing downstream delays), facilitate discussion on the cause and whether it was foreseeable.

  4. Discuss incidents and issues encountered during implementation. For each incident, capture: what happened, how it was detected, what action was taken, and what prevented earlier detection or avoidance. Do not conduct root cause analysis during the PIR; instead, note items requiring deeper investigation and assign them to Problem Management.

  5. Collect lessons learned using structured categories. Ask participants to identify what went well (to reinforce), what could improve (to change), and what surprised them (to investigate). Document each lesson with enough context for someone unfamiliar with the change to understand.

  6. Identify improvement actions arising from the lessons. Each action requires an owner, target date, and clear definition of done. Limit actions to those within the authority of meeting participants or their direct management chain; escalate broader organisational changes through the continual improvement register.

  7. Summarise the review outcome. State the overall assessment (successful, successful with issues, partially successful, failed), list key lessons, and confirm action owners and dates. Thank participants and confirm the timeline for report distribution.
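The criterion-by-criterion walkthrough in step 2 lends itself to a simple numeric check when targets are thresholds; a sketch, with the operators and values mirroring the example table above:

```shell
# Judge a numeric success criterion: assess MEASURED OP TARGET
# where OP is "lt" (must be below target) or "gt" (must be above).
assess() {
  awk -v m="$1" -v op="$2" -v t="$3" 'BEGIN {
    met = (op == "lt") ? (m < t) : (m > t)
    print (met ? "Met" : "Not met")
  }'
}

assess 12 lt 30     # mail delivery latency: 12 s against an under-30 s target
assess 99.2 gt 98   # spam detection rate: 99.2% against an above-98% target
```

Non-numeric criteria (such as completion within the window) still need a human judgement recorded alongside the measured outcome.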

Document lessons learned

Lessons require categorisation to enable pattern analysis across multiple reviews. The categorisation scheme distinguishes between lessons about the change itself and lessons about the change process:

+-----------------------------------------------------------------------+
|                      LESSONS LEARNED CATEGORIES                       |
+-----------------------------------------------------------------------+
|                                                                       |
|  CHANGE EXECUTION                 CHANGE PROCESS                      |
|  +---------------------------+    +---------------------------+       |
|  |                           |    |                           |       |
|  | Technical                 |    | Planning                  |       |
|  | - Architecture decisions  |    | - Scope definition        |       |
|  | - Configuration choices   |    | - Timeline estimation     |       |
|  | - Integration points      |    | - Resource allocation     |       |
|  | - Testing coverage        |    | - Risk identification     |       |
|  |                           |    |                           |       |
|  +---------------------------+    +---------------------------+       |
|  |                           |    |                           |       |
|  | Operational               |    | Coordination              |       |
|  | - Deployment sequence     |    | - Stakeholder engagement  |       |
|  | - Monitoring adequacy     |    | - Communication timing    |       |
|  | - Rollback readiness      |    | - Approval workflow       |       |
|  | - Support preparation     |    | - Vendor coordination     |       |
|  |                           |    |                           |       |
|  +---------------------------+    +---------------------------+       |
|  |                           |    |                           |       |
|  | Environmental             |    | Documentation             |       |
|  | - Infrastructure state    |    | - Plan completeness       |       |
|  | - Dependency behaviour    |    | - Runbook accuracy        |       |
|  | - External factors        |    | - Knowledge capture       |       |
|  |                           |    |                           |       |
|  +---------------------------+    +---------------------------+       |
|                                                                       |
+-----------------------------------------------------------------------+

Figure 2: Lesson categorisation enabling pattern analysis across reviews

  1. Write each lesson as a complete statement that conveys meaning without requiring the full PIR context. Poor: “The certificate issue caused delays.” Better: “TLS certificate chain validation failed because the intermediate certificate was not included in the configuration bundle, causing 30 minutes of troubleshooting during the implementation window.”

  2. Assign a category and subcategory to each lesson using the standard taxonomy. This enables querying lessons by type when planning similar future changes.

  3. Indicate lesson polarity: positive lessons reinforce practices to continue, negative lessons identify practices to change, and neutral lessons note observations without clear directional guidance.

  4. Link lessons to specific change types, technologies, or services where applicable. A lesson about certificate handling in the email gateway migration should be tagged with “TLS”, “email”, and “gateway” to surface when planning related changes.

  5. Record lessons in the knowledge management system using the standard article template. Each lesson becomes a searchable knowledge article that can be referenced in future change planning.

# Knowledge article metadata for lesson learned
article_type: lesson_learned
source_pir: PIR-2024-0847
source_change: CHG0012847
category: change_execution
subcategory: technical
polarity: negative
tags:
  - tls
  - certificates
  - email
  - gateway
technologies:
  - postfix
  - lets-encrypt
created: 2024-11-18
author: j.smith@example.org
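Before publishing, the article front matter can be checked for required keys; a sketch using grep, where the required-field list is an assumption rather than a mandated schema:

```shell
# Verify a lesson-learned article's metadata carries the required keys.
# The required list below is an assumption, not a mandated schema.
check_lesson_metadata() {
  file="$1"; missing=0
  for key in article_type source_pir source_change category polarity; do
    grep -q "^${key}:" "$file" || { echo "missing: $key"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "metadata complete"
}

# Example front matter written to a temp file for the check:
cat > /tmp/lesson.yml <<'EOF'
article_type: lesson_learned
source_pir: PIR-2024-0847
source_change: CHG0012847
category: change_execution
polarity: negative
EOF
check_lesson_metadata /tmp/lesson.yml   # metadata complete
```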

Create and track improvement actions

  1. Draft each improvement action with specific, measurable criteria for completion. Vague: “Improve certificate handling.” Specific: “Update the TLS deployment checklist to include certificate chain validation step, and add automated chain verification to the deployment pipeline by 2024-12-15.”

  2. Assign an owner who has authority to complete the action or escalate appropriately. The owner need not perform the work personally but is accountable for completion.

  3. Set a target date based on action complexity and owner capacity. Quick wins (documentation updates, checklist additions) target 2 weeks. Process changes target 4-6 weeks. Tool implementations target 8-12 weeks.

  4. Register actions in the continual improvement register with the PIR reference. This creates visibility for improvement governance and prevents duplicate efforts across reviews.

Terminal window
# Register improvement action
curl -X POST "https://itsm.example.org/api/v1/improvements" \
  -H "Authorization: Bearer $ITSM_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Add certificate chain validation to deployment pipeline",
    "description": "Automated verification that TLS certificate chains are complete before deployment proceeds",
    "source_type": "pir",
    "source_ref": "PIR-2024-0847",
    "owner": "j.smith@example.org",
    "target_date": "2024-12-15",
    "category": "process",
    "priority": "medium"
  }'
  5. Schedule follow-up verification for each action. Add calendar reminders for the target date plus 5 working days to confirm completion and close the action.
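The follow-up reminder above, target date plus 5 working days, can be computed directly; a sketch assuming GNU `date`:

```shell
# Advance an ISO date by N working days, skipping Saturdays and Sundays.
# Assumes GNU date.
add_working_days() {
  d="$1"; n="$2"
  while [ "$n" -gt 0 ]; do
    d=$(date -u -d "$d + 1 day" +%Y-%m-%d)
    dow=$(date -u -d "$d" +%u)          # 1 = Monday .. 7 = Sunday
    [ "$dow" -le 5 ] && n=$((n - 1))
  done
  echo "$d"
}

# Verification reminder for a 2024-12-15 target date:
add_working_days 2024-12-15 5   # 2024-12-20
```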

The action tracking workflow ensures improvements progress from identification through implementation:

+--------------------------------------------------------------------+
|                    IMPROVEMENT ACTION WORKFLOW                     |
+--------------------------------------------------------------------+
|                                                                    |
|  PIR Meeting                                                       |
|      |                                                             |
|      v                                                             |
|  +---+------+     +----------+     +----------+     +----------+   |
|  |          |     |          |     |          |     |          |   |
|  | Identify +-----> Draft    +-----> Assign   +-----> Register |   |
|  |          |     | Action   |     | Owner    |     | in CSI   |   |
|  +----------+     +----------+     +----------+     +----+-----+   |
|                                                          |         |
|       +--------------------------------------------------+         |
|       |                                                            |
|       v                                                            |
|  +----+-----+     +----------+     +----------+     +----------+   |
|  |          |     |          |     |          |     |          |   |
|  | Owner    +-----> Work     +-----> Verify   +-----> Close    |   |
|  | Accepts  |     | Action   |     | Complete |     | Action   |   |
|  +----------+     +----+-----+     +----------+     +----------+   |
|                        |                                           |
|                        | If blocked                                |
|                        v                                           |
|                   +----+-----+                                     |
|                   |          |                                     |
|                   | Escalate |                                     |
|                   |          |                                     |
|                   +----------+                                     |
|                                                                    |
+--------------------------------------------------------------------+

Figure 3: Action tracking from identification through verified completion

Finalise and distribute the report

  1. Compile the PIR report within 3 working days of the review meeting. The report consolidates all findings into a single document that serves as the permanent record of the review.

  2. Structure the report with the following sections: executive summary, change overview, success criteria assessment, implementation timeline analysis, incidents and issues, lessons learned, improvement actions, and appendices containing raw data.

  3. Review the draft report with the change owner before distribution. Confirm factual accuracy and appropriate tone. PIR reports should be honest without being punitive; the goal is learning, not blame.

  4. Distribute the final report to: all PIR participants, the change owner’s manager, the service owner, and the CAB representative. For major changes, include the IT leadership team.

  5. Archive the report in the knowledge management system linked to the change record. Update the change record with the PIR reference number and overall assessment.

Terminal window
# Update change record with PIR completion
curl -X PATCH "https://itsm.example.org/api/v1/changes/CHG0012847" \
  -H "Authorization: Bearer $ITSM_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "pir_completed": true,
    "pir_reference": "PIR-2024-0847",
    "pir_outcome": "successful_with_issues",
    "pir_date": "2024-11-18"
  }'
  6. Update relevant knowledge base articles based on lessons learned. If the PIR identified documentation gaps or inaccuracies, create tasks to address them and link to the PIR as justification.

Verification

After completing the PIR process, verify that all outputs exist and are properly linked.

Confirm the PIR document exists and contains required sections:

Terminal window
# Verify PIR document completeness
curl -s "https://docs.example.org/api/v1/articles/PIR-2024-0847" \
  -H "Authorization: Bearer $DOCS_TOKEN" | \
  jq '{
    exists: (.id != null),
    has_summary: (.content | contains("Executive Summary")),
    has_criteria: (.content | contains("Success Criteria")),
    has_lessons: (.content | contains("Lessons Learned")),
    has_actions: (.content | contains("Improvement Actions")),
    linked_change: .metadata.source_change
  }'

Expected output:

{
  "exists": true,
  "has_summary": true,
  "has_criteria": true,
  "has_lessons": true,
  "has_actions": true,
  "linked_change": "CHG0012847"
}

Verify improvement actions are registered:

Terminal window
# Query improvement register for PIR actions (-G sends the query as GET parameters)
curl -s -G "https://itsm.example.org/api/v1/improvements" \
  -H "Authorization: Bearer $ITSM_TOKEN" \
  --data-urlencode "query=source_ref=PIR-2024-0847" | \
  jq '.result[] | {id, title, owner, target_date, status}'

Confirm change record updated with PIR reference:

Terminal window
curl -s "https://itsm.example.org/api/v1/changes/CHG0012847" \
  -H "Authorization: Bearer $ITSM_TOKEN" | \
  jq '{pir_completed, pir_reference, pir_outcome, pir_date}'

Expected output showing PIR linkage:

{
  "pir_completed": true,
  "pir_reference": "PIR-2024-0847",
  "pir_outcome": "successful_with_issues",
  "pir_date": "2024-11-18"
}

Verify lesson articles created in knowledge base:

Terminal window
# Count lessons linked to PIR (-G sends the query as GET parameters)
curl -s -G "https://docs.example.org/api/v1/articles" \
  -H "Authorization: Bearer $DOCS_TOKEN" \
  --data-urlencode "query=metadata.source_pir=PIR-2024-0847" | \
  jq '.total_count'

The count should match the number of lessons documented in the PIR meeting.

Troubleshooting

| Symptom | Cause | Resolution |
| --- | --- | --- |
| Participants unavailable within required timeframe | Staff on leave, competing priorities, or inadequate notice | Schedule PIR during change planning phase by blocking calendar time at implementation + review interval; escalate to service owner if availability remains blocked |
| Success criteria not documented in change record | Change approved without explicit criteria, or criteria discussed verbally but not recorded | Reconstruct criteria with change owner based on change justification; document gap as lesson learned; update change template to require criteria |
| Monitoring data unavailable for affected services | Services not instrumented, monitoring retention too short, or access restrictions | Use available proxy metrics (user complaints, support tickets); document monitoring gap as improvement action; extend retention for critical services |
| PIR meeting becomes blame session | Participants defensive, focus on individual actions rather than systemic factors | Restate ground rules; redirect from “who” to “what” and “why”; if behaviour continues, pause meeting and resume after private conversation with participants |
| Implementation timeline cannot be reconstructed | Logs deleted, no timestamps recorded, or work performed without documentation | Gather participant recollections; accept reduced accuracy; document logging gap as improvement action |
| Lessons repeat across multiple PIRs | Previous improvement actions not completed, or not addressing root cause | Query improvement register for related open actions; escalate completion blockers; consider whether lesson indicates systemic issue requiring different intervention |
| No improvement actions identified from review | Review too superficial, participants unwilling to critique, or genuinely smooth implementation | Probe with specific questions about risk areas; review against change management maturity criteria; if truly no improvements, document as positive outcome |
| Change owner disputes PIR assessment | Disagreement on criteria interpretation or measurement | Refer to documented criteria; if ambiguous, note dispute in report; escalate to CAB chair for resolution if agreement impossible |
| Actions assigned to people without capacity | Owner overloaded or lacks authority to complete action | Reassign to appropriate owner or escalate to manager for resource allocation; break large actions into smaller deliverables |
| PIR report distribution blocked by sensitivity concerns | Lessons involve personnel issues or vendor disputes | Create redacted version for general distribution; restricted version for relevant managers; legal review if contractual issues involved |
| Knowledge base articles not searchable | Incorrect metadata, missing tags, or indexing delay | Verify article metadata matches taxonomy; manually add tags; allow 24 hours for index refresh; check article visibility permissions |
| Improvement action marked complete but problem recurs | Action addressed symptom rather than cause, or implementation incomplete | Reopen action; conduct deeper analysis potentially involving Problem Management; verify completion criteria were actually met |

Contextual considerations

Organisations with limited IT capacity often struggle to conduct PIRs consistently. A single IT person handling multiple responsibilities may view PIRs as administrative overhead that delays moving to the next urgent task. For these contexts, adopt a lightweight approach: conduct PIR as a 15-minute self-review using a standard checklist, document only the most significant lesson, and register only the highest-impact improvement action. This minimal approach captures value without creating unsustainable process burden.

The checklist for self-review PIRs covers five questions: Did the change achieve its objective? What took longer than expected? What would you do differently? What documentation needs updating? What one improvement would most help the next similar change? Recording answers to these questions takes under 10 minutes and creates sufficient record for future reference.
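The five questions can be stamped into a ready-to-fill record so the self-review starts from a template rather than a blank page; a sketch, with the output path and heading format as assumptions:

```shell
# Generate a minimal self-review PIR record for a change.
# The output path and heading format are illustrative.
change="${1:-CHG0000000}"
out="/tmp/pir-${change}.md"

cat > "$out" <<EOF
# Self-review PIR: ${change}
Date: $(date -u +%Y-%m-%d)

1. Did the change achieve its objective?
2. What took longer than expected?
3. What would you do differently?
4. What documentation needs updating?
5. What one improvement would most help the next similar change?
EOF

echo "Created $out"
```

Answering the questions in the generated file and linking it from the change record satisfies the lightweight approach.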

Organisations with federated IT structures face coordination challenges when changes span multiple autonomous teams. Each team may have different PIR practices, documentation locations, and improvement registers. For cross-team changes, designate a lead team responsible for facilitating the PIR and consolidating findings. Distribute the final report to all participating teams and register improvement actions in each team’s register as appropriate. Where actions require coordination across teams, escalate to the governance body that spans those teams.

For organisations operating in field environments with connectivity constraints, PIR meetings may need to occur asynchronously. Distribute the PIR preparation pack via email with a deadline for written feedback. Compile responses into a consolidated document and circulate for final comment before finalising. While this approach loses the dynamic discussion of synchronous meetings, it enables participation from staff in locations where video conferencing is unreliable.

Grant-funded changes require additional PIR considerations. Document whether the change delivered the capabilities described in the grant proposal and whether implementation costs aligned with budget. This information supports grant reporting and informs future proposal development. If implementation revealed scope changes or cost variances, note these for finance and grants management teams.

See also