
Continual Improvement

Continual improvement is the practice of systematically identifying opportunities to enhance IT services and processes, then implementing changes that deliver measurable benefits. The practice operates through a register of improvement initiatives, each assessed for value, effort, and risk, then prioritised against organisational capacity. For mission-driven organisations where IT resources are constrained and competing demands are constant, continual improvement provides the mechanism for ensuring that limited capacity addresses the changes with the greatest impact rather than those with the loudest advocates.

Continual Improvement
The ongoing effort to enhance services, processes, and practices through incremental and transformational changes. Distinct from project-based change in that it operates continuously rather than in discrete bounded initiatives.
CSI Register
A structured repository of improvement opportunities, each documented with business case, priority, status, and ownership. The register serves as both a backlog and a governance tool.
Improvement Initiative
A specific, bounded effort to enhance some aspect of IT service delivery. Initiatives range from minor process adjustments requiring hours to implement, through to substantial changes requiring months of effort.
Benefits Realisation
The practice of tracking whether implemented improvements deliver their anticipated value. Realisation occurs when measured outcomes match or exceed the business case projections.
Value Leakage
The gap between potential improvement value and actual realised value. Leakage occurs through incomplete implementation, adoption failures, or changed circumstances that reduce benefit applicability.

The improvement cycle

The fundamental mechanism of continual improvement is a feedback loop that connects service performance observation to deliberate enhancement action. This loop operates at multiple timescales simultaneously: individual incidents trigger immediate process questions, monthly service reviews surface systemic patterns, and annual assessments drive strategic capability investments.

The Plan-Do-Check-Act cycle (PDCA) provides the foundational structure for improvement work. Plan establishes what change will be made and what outcome is expected. Do implements the change in a controlled manner. Check measures actual results against expected results. Act determines whether to standardise the change, modify it, or abandon it. Each iteration through the cycle produces learning that informs subsequent iterations.

                        PLAN-DO-CHECK-ACT CYCLE

                       +-------------+
                       |    PLAN     |
                       | - Define    |
                       |   objective |
                       | - Analyse   |
                       |   current   |
                       |   state     |
                       | - Design    |
                       |   change    |
                       | - Set       |
                       |   measures  |
                       +------+------+
                              |
          +-------------------+-------------------+
          |                                       |
          v                                       |
   +------+------+                        +-------+------+
   |     DO      |                        |     ACT      |
   | - Pilot     |                        | - Standardise|
   |   change    |                        |   if success |
   | - Train     |                        | - Modify if  |
   |   staff     |                        |   partial    |
   | - Execute   |                        | - Abandon if |
   |   plan      |                        |   failed     |
   | - Document  |                        | - Update     |
   |   issues    |                        |   register   |
   +------+------+                        +-------+------+
          |                                       ^
          |                                       |
          +-------------------+-------------------+
                              |
                       +------v------+
                       |    CHECK    |
                       | - Measure   |
                       |   results   |
                       | - Compare   |
                       |   to plan   |
                       | - Identify  |
                       |   gaps      |
                       | - Document  |
                       |   learning  |
                       +-------------+

Figure 1: Plan-Do-Check-Act cycle showing the four phases and their key activities

A service desk improvement illustrates the cycle in practice. Plan: analysis shows that 34% of incidents are password resets, consuming 12 hours of analyst time weekly; the objective is to reduce this to under 10% through self-service password reset; the success measure is weekly password reset ticket count. Do: deploy the self-service reset tool, communicate to staff, train the service desk on the new escalation path, run for 30 days. Check: after 30 days, password reset tickets dropped to 18% of volume (short of the 10% target), saving 6 hours weekly; a user survey shows 23% of staff unaware of the self-service option. Act: the partial success warrants standardisation with modification; an additional communication campaign is planned; the target is revised to 12% for the next iteration.

The cycle duration varies by improvement scope. Minor process adjustments complete a cycle in 1-2 weeks. Service enhancements involving tooling changes require 1-3 months. Capability improvements affecting multiple services span 3-6 months. Strategic improvements transforming how IT operates take 6-18 months. Matching cycle duration to improvement scope prevents both premature judgement of complex changes and excessive delay in realising simple wins.

Sources of improvement opportunities

Improvement opportunities emerge from multiple channels, each with distinct characteristics that affect how opportunities should be captured and assessed. Reactive sources respond to problems that have already occurred. Proactive sources anticipate problems or identify enhancement possibilities before issues manifest.

                   IMPROVEMENT OPPORTUNITY SOURCES

        REACTIVE SOURCES                     PROACTIVE SOURCES
    (responding to problems)              (anticipating needs)

   +------------------------+           +------------------------+
   | INCIDENTS              |           | TREND ANALYSIS         |
   | - Recurring issues     |           | - Capacity forecasts   |
   | - Major incidents      |           | - Usage patterns       |
   | - Near misses          |           | - Cost trajectories    |
   +-----------+------------+           +-----------+------------+
               |                                    |
               v                                    v
   +-----------+------------+           +-----------+------------+
   | PROBLEM RECORDS        |           | BENCHMARKING           |
   | - Root cause findings  |           | - Peer comparison      |
   | - Workaround debt      |           | - Industry standards   |
   | - Known error backlog  |           | - Best practices       |
   +-----------+------------+           +-----------+------------+
               |                                    |
               v                                    v
   +-----------+------------+           +-----------+------------+
   | USER FEEDBACK          |           | AUDITS/ASSESSMENTS     |
   | - Complaints           |           | - Security audits      |
   | - Survey responses     |           | - Compliance reviews   |
   | - Support comments     |           | - Maturity assessments |
   +-----------+------------+           +-----------+------------+
               |                                    |
               +------------------+-----------------+
                                  |
                                  v
                      +-----------------------+
                      | CSI REGISTER          |
                      | - Capture             |
                      | - Assess              |
                      | - Prioritise          |
                      | - Track               |
                      +-----------------------+

Figure 2: Improvement opportunity sources flowing into the CSI register

Incident records generate improvement opportunities when patterns emerge. A single incident is a service restoration event; ten similar incidents within a month indicate a systemic issue warranting improvement attention. The incident management process should flag recurring incidents automatically: any incident category exceeding 5 occurrences monthly, any single configuration item generating more than 3 incidents monthly, or any incident type where mean resolution time exceeds SLA by more than 50%.
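
To make the thresholds concrete, the sketch below shows how they might be checked automatically against a month of incident records. It assumes incidents are available as simple records with category, configuration item, resolution time, and SLA fields; the field names, and the assumption of one SLA per category, are illustrative rather than a prescribed data model.

    from collections import Counter

    # Illustrative thresholds from the text: more than 5 incidents per category per
    # month, more than 3 per configuration item, mean resolution time over 150% of SLA.
    CATEGORY_LIMIT = 5
    CI_LIMIT = 3
    SLA_FACTOR = 1.5

    def flag_recurring(incidents):
        """Return improvement-candidate flags for one month of incident records.

        Each incident is a dict like:
        {"category": "password-reset", "ci": "AD-01",
         "resolution_hours": 2.0, "sla_hours": 4.0}
        """
        flags = []

        by_category = Counter(i["category"] for i in incidents)
        for category, count in by_category.items():
            if count > CATEGORY_LIMIT:
                flags.append(f"Category '{category}': {count} incidents this month")

        by_ci = Counter(i["ci"] for i in incidents)
        for ci, count in by_ci.items():
            if count > CI_LIMIT:
                flags.append(f"Configuration item '{ci}': {count} incidents this month")

        # Mean resolution time versus SLA, per category (assumes one SLA per category).
        for category in by_category:
            subset = [i for i in incidents if i["category"] == category]
            mean_resolution = sum(i["resolution_hours"] for i in subset) / len(subset)
            sla = subset[0]["sla_hours"]
            if mean_resolution > SLA_FACTOR * sla:
                flags.append(
                    f"Category '{category}': mean resolution {mean_resolution:.1f}h "
                    f"exceeds the {sla:.1f}h SLA by more than 50%"
                )
        return flags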

Problem records contain improvement opportunities by design. Each problem investigation identifies root causes; eliminating those causes is improvement work. The known error database accumulates technical debt when workarounds substitute for permanent fixes. A quarterly review of the known error database identifies workarounds older than 90 days that should escalate to formal improvement initiatives.

User feedback provides perspective that internal metrics miss. Satisfaction surveys capture subjective experience. Support ticket comments reveal friction points. Complaints, while uncomfortable, identify where service delivery fails user expectations. Effective feedback capture requires multiple channels: post-ticket surveys (targeting a 20% response rate), quarterly broader surveys, and open feedback mechanisms. Feedback should route to the CSI register when it identifies specific, actionable improvements rather than general dissatisfaction.

Service reviews scheduled monthly or quarterly examine performance against SLAs and identify trends before they breach thresholds. A service running at 97% availability against a 99% target warrants improvement attention now, rather than waiting for the gap to widen or for a formal breach review. Service reviews should generate at least 2-3 improvement candidates per review cycle.

Audit and assessment findings carry compliance weight. Security audit findings often mandate remediation within fixed timeframes. Maturity assessments identify capability gaps. Compliance reviews surface regulatory risks. These findings enter the CSI register with external deadlines that affect prioritisation.

Technology changes in the broader environment create improvement opportunities. Vendor announcements of new capabilities, end-of-support dates, or pricing changes all warrant evaluation. A vendor announcing end-of-support in 18 months creates an improvement initiative for migration or replacement.

Benchmarking against peer organisations and industry standards reveals gaps that internal perspective misses. Organisations participating in sector benchmarking studies gain comparative data: if peer organisations achieve first-call resolution rates of 70% while your service desk achieves 55%, the 15-point gap represents improvement potential.

The CSI register

The CSI register functions as the central repository for improvement opportunities, serving three purposes simultaneously: it captures opportunities so they are not lost, it provides data for prioritisation decisions, and it tracks implementation progress. Without a register, improvements depend on individual memory and advocacy, which favours recent and loudly-championed items over genuinely valuable ones.

Each register entry contains structured information enabling assessment and tracking. The minimum viable entry includes: a unique identifier, title, description of the current state and desired future state, source of the opportunity, initial value estimate, initial effort estimate, owner, status, and key dates. Richer entries add quantified business cases, risk assessments, dependencies, and implementation plans.
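
As an illustration, the minimum viable entry could be held in a structure like the one below; the field names and status values are assumptions for the sketch rather than a prescribed schema.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class RegisterEntry:
        """Minimum viable CSI register entry, following the fields listed above."""
        entry_id: str                 # unique identifier, e.g. "CSI-0042"
        title: str
        current_state: str            # description of the situation today
        desired_state: str            # description of the intended outcome
        source: str                   # e.g. "incident trend", "user feedback", "audit"
        value_estimate: int           # rough 1-5 score at capture, refined at assessment
        effort_estimate: int          # rough 1-5 score at capture
        owner: str
        status: str = "Captured"      # Captured / Assessed / Prioritised / In progress /
                                      # Implemented / Benefits realised / Deferred / Rejected
        date_raised: date = field(default_factory=date.today)
        review_date: Optional[date] = None   # required whenever status is Deferred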

                      CSI REGISTER WORKFLOW

   +-------------------+
   | OPPORTUNITY       |
   | IDENTIFIED        |
   +---------+---------+
             |
             v
   +---------+---------+
   | CAPTURED          |
   | - Log entry       |
   | - Basic detail    |
   | - Source noted    |
   | - Owner assign    |
   +---------+---------+
             |
             v
   +---------+---------+
   | ASSESSED          |
   | - Value scored    |
   | - Effort est.     |
   | - Risk eval.      |
   | - Dependencies    |
   +---------+---------+
             |
             +-------------------------+
             |                         |
             v                         v
   +---------+---------+     +---------+---------+
   | PRIORITISED       |     | DEFERRED/         |
   | - Ranked          |     | REJECTED          |
   | - Scheduled       |     | - Reason logged   |
   | - Resources       |     | - Review date     |
   |   allocated       |     |   (if deferred)   |
   +---------+---------+     +-------------------+
             |
             v
   +---------+---------+
   | IN PROGRESS       |
   | - Work underway   |
   | - Progress track  |
   | - Issues logged   |
   +---------+---------+
             |
             v
   +---------+---------+
   | IMPLEMENTED       |
   | - Change done     |
   | - Handover to     |
   |   operations      |
   +---------+---------+
             |
             v
   +---------+---------+
   | BENEFITS          |
   | REALISED          |
   | - Measured        |
   | - Documented      |
   | - Closed          |
   +-------------------+

Figure 3: CSI register workflow showing status progression from capture through benefits realisation

The capture process should be lightweight to encourage logging. An improvement opportunity that requires 30 minutes to document will often go unrecorded. Initial capture should take under 5 minutes: title, one-paragraph description, source, and rough value/effort indication. Detailed assessment follows for opportunities that pass initial screening.

Assessment transforms rough ideas into comparable initiatives. Value estimation considers multiple dimensions: cost reduction, risk reduction, service improvement, compliance achievement, and strategic enablement. Effort estimation considers staff time, elapsed duration, financial cost, and organisational change required. Both estimates should use consistent scales that enable comparison.

A worked example illustrates the assessment mechanism. An improvement opportunity proposes automating the monthly licence compliance report, currently requiring 6 hours of manual effort. Value estimation: 6 hours monthly × 12 months × £45/hour fully-loaded cost = £3,240 annual value, plus reduced error risk (2 compliance findings in past year attributed to manual errors, each requiring 8 hours remediation = £720), total annual value approximately £4,000. Effort estimation: 20 hours development, 5 hours testing, 3 hours documentation = 28 hours × £45 = £1,260, plus tool licensing at £500 annually, first-year cost £1,760. The initiative shows positive return in year one (£4,000 - £1,760 = £2,240) and approximately £3,500 annually thereafter.
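
The same arithmetic, written out as a short calculation with the figures from the example (the £45 hourly rate and £500 licence cost are the example's own assumptions):

    HOURLY_RATE = 45  # fully-loaded cost per hour (example assumption)

    # Annual value: recovered effort plus avoided remediation of manual errors.
    effort_saved = 6 * 12 * HOURLY_RATE               # 6 hours/month -> £3,240
    error_risk_avoided = 2 * 8 * HOURLY_RATE          # 2 findings x 8 hours -> £720
    annual_value = effort_saved + error_risk_avoided  # £3,960, rounded to £4,000 in the text

    # First-year cost: build effort plus tool licensing.
    build_cost = (20 + 5 + 3) * HOURLY_RATE           # 28 hours -> £1,260
    first_year_cost = build_cost + 500                # plus £500 annual licence -> £1,760

    print(f"Year-one return: £{annual_value - first_year_cost}")  # £2,200 (£2,240 in the text,
                                                                  # which rounds value to £4,000)
    print(f"Ongoing annual return: £{annual_value - 500}")        # £3,460 (~£3,500 in the text)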

Register maintenance requires ongoing attention. Stale entries accumulate when opportunities remain unaddressed: an improvement logged 18 months ago but never prioritised either warrants action or should be closed. Monthly register hygiene reviews should examine all entries over 6 months old, updating status or closing those no longer relevant. Deferred items require review dates; an improvement deferred “until next budget year” should have a specific review date set.
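
A small helper along these lines could support the monthly hygiene review, reusing the illustrative register entry structure sketched earlier; the six-month threshold and the status names are taken from the guidance above.

    from datetime import date, timedelta

    STALE_AFTER = timedelta(days=182)  # roughly six months, per the guidance above

    def hygiene_review(entries, today=None):
        """Return register entries the monthly hygiene review should examine."""
        today = today or date.today()
        open_statuses = {"Captured", "Assessed", "Prioritised"}
        stale = [e for e in entries
                 if e.status in open_statuses and today - e.date_raised > STALE_AFTER]
        deferred_without_review = [e for e in entries
                                   if e.status == "Deferred" and e.review_date is None]
        return stale, deferred_without_review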

Prioritisation mechanisms

Prioritisation determines which improvement opportunities receive resources. Without explicit prioritisation, allocation defaults to recency bias (most recent requests win), hierarchy bias (senior requestors win), or volume bias (most-discussed items win). None of these correlate with actual value delivery.

The value-effort matrix provides the simplest prioritisation mechanism, plotting each improvement opportunity on two axes. Value ranges from low to high based on assessment. Effort ranges from low to high based on resource requirements. The four quadrants yield prioritisation guidance:

                            VALUE-EFFORT MATRIX

         +---------------------------+---------------------------+
  HIGH   | CONSIDER                  | PRIORITISE                |
  VALUE  | High value but high effort| High value with low effort|
         | - Major projects          | - Quick wins              |
         | - Strategic initiatives   | - Immediate action        |
         | - Careful planning req.   | - High ROI                |
         |                           |                           |
         | Example: ITSM platform    | Example: Self-service     |
         | replacement (180 hrs,     | password reset (28 hrs,   |
         | £50k annual value)        | £4k annual value)         |
         +---------------------------+---------------------------+
  LOW    | AVOID                     | FILL-IN                   |
  VALUE  | Low value and high effort | Low value but low effort  |
         | - Reject or defer         | - Do when convenient      |
         | - Challenge assumptions   | - Batch together          |
         |                           | - Use slack time          |
         |                           |                           |
         | Example: Custom reporting | Example: Rename ticket    |
         | tool (120 hrs, £800       | categories (4 hrs, £200   |
         | annual value)             | annual value)             |
         +---------------------------+---------------------------+
                  HIGH EFFORT                 LOW EFFORT

Figure 4: Value-effort matrix with example initiatives in each quadrant

The matrix operates through ratio comparison. An initiative delivering £4,000 annual value for 28 hours effort yields approximately £143 per hour invested. An initiative delivering £50,000 annual value for 180 hours effort yields approximately £278 per hour invested. Despite requiring 6.4 times more effort, the larger initiative delivers better return per hour and warrants prioritisation if resources permit.
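
The ratio comparison is easy to verify directly with the figures from the two examples:

    quick_win = {"annual_value": 4_000, "effort_hours": 28}
    major_project = {"annual_value": 50_000, "effort_hours": 180}

    for name, item in (("quick win", quick_win), ("major project", major_project)):
        rate = item["annual_value"] / item["effort_hours"]
        print(f"{name}: £{rate:.0f} of annual value per hour invested")
    # quick win: ~£143 per hour; major project: ~£278 per hour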

Risk weighting adds a third dimension. Some improvements address compliance requirements with fixed deadlines; failure to implement creates regulatory exposure. Others address security vulnerabilities where delayed action increases breach likelihood. Risk-weighted prioritisation elevates items where inaction carries consequence beyond foregone value.

A three-factor scoring model combines value, effort, and risk into a single priority score. Each factor scores on a 1-5 scale:

Score | Value                    | Effort         | Risk
------+--------------------------+----------------+------------------------------
  1   | Under £1,000 annually    | Over 200 hours | No compliance/security impact
  2   | £1,000-5,000 annually    | 100-200 hours  | Minor policy deviation
  3   | £5,000-20,000 annually   | 40-100 hours   | Moderate compliance gap
  4   | £20,000-50,000 annually  | 10-40 hours    | Significant regulatory risk
  5   | Over £50,000 annually    | Under 10 hours | Critical compliance failure

The priority score formula weights the factors: Priority = (Value × 2) + (6 - Effort) + (Risk × 1.5). The formula double-weights value as the primary driver, inverts effort so lower effort scores higher, and applies 1.5 weighting to risk. A quick win with value score 3, effort score 5, and risk score 1 yields priority score: (3 × 2) + (6 - 5) + (1 × 1.5) = 6 + 1 + 1.5 = 8.5. A major project with value score 5, effort score 1, and risk score 4 yields: (5 × 2) + (6 - 1) + (4 × 1.5) = 10 + 5 + 6 = 21. The major project ranks higher despite its effort, because combined value and risk outweigh the effort penalty.
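
Transcribed into code, the scoring model is a one-line formula plus input checking; the sketch assumes the 1-5 band scores have already been assigned from the table above.

    def priority_score(value: int, effort: int, risk: int) -> float:
        """Three-factor priority: (Value x 2) + (6 - Effort) + (Risk x 1.5).

        All inputs are 1-5 scores from the banding table; effort is inverted
        so that lower-effort items score higher.
        """
        for name, score in (("value", value), ("effort", effort), ("risk", risk)):
            if not 1 <= score <= 5:
                raise ValueError(f"{name} score must be between 1 and 5")
        return (value * 2) + (6 - effort) + (risk * 1.5)

    assert priority_score(value=3, effort=5, risk=1) == 8.5   # quick win from the text
    assert priority_score(value=5, effort=1, risk=4) == 21.0  # major project from the text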

Dependency sequencing constrains prioritisation. An improvement requiring a platform upgrade cannot proceed until the upgrade completes. An automation initiative depending on API access cannot start until that access is provisioned. The CSI register should capture dependencies explicitly, and prioritisation should respect them: initiating a dependent item before its prerequisite wastes resources.

Governance structures

Governance determines who makes improvement decisions and how. The appropriate structure scales with organisational complexity and improvement volume. Insufficient governance allows conflicting priorities and duplicated effort; excessive governance slows improvement delivery and discourages participation.

Individual IT professional contexts, where one person handles all IT responsibilities, require minimal formal governance. The IT professional maintains the CSI register, makes prioritisation decisions, and allocates their own time to improvements. Governance consists of documenting decisions so organisational leadership understands how IT time is allocated. A monthly summary to leadership covering active improvements, completions, and planned work provides accountability without bureaucratic overhead.

Small IT team contexts with 2-5 staff benefit from regular review meetings. A fortnightly improvement review of 30-45 minutes examines the CSI register, assesses new entries, reviews progress on active items, and adjusts priorities based on changing circumstances. The IT manager or lead typically chairs, with all team members contributing. Decisions are documented in the register itself.

Established IT function contexts with dedicated staff require more formal structures. A Continual Improvement Manager role (potentially part-time, combined with other responsibilities) provides coordination: maintaining the register, facilitating prioritisation, tracking progress, and reporting to IT leadership. A monthly improvement board reviews strategic items, approves resource allocation for major initiatives, and receives progress reports on the portfolio. The board includes IT leadership and may include business stakeholders for improvements with significant user impact.

Federated IT structures with multiple IT teams or country offices require coordination mechanisms. Each local team maintains its own CSI register and governance. A global coordination function identifies improvements applicable across locations, enabling consistent solutions and avoiding duplicated effort. Quarterly cross-team reviews share improvement outcomes and identify adoption opportunities.

Governance scope varies by improvement size. Quick wins under 20 hours effort may require only team lead approval. Medium improvements of 20-80 hours require IT management approval. Large improvements over 80 hours require improvement board or steering committee approval. This tiered approach prevents bottlenecks while maintaining control over significant resource commitments.
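
The tiers translate into a simple lookup; the hour boundaries are those stated above and the approver labels are illustrative.

    def approval_level(effort_hours: float) -> str:
        """Map estimated effort to the approval tier described above."""
        if effort_hours < 20:
            return "Team lead"
        if effort_hours <= 80:
            return "IT management"
        return "Improvement board / steering committee"

    assert approval_level(12) == "Team lead"
    assert approval_level(60) == "IT management"
    assert approval_level(120) == "Improvement board / steering committee"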

Improvement models

Beyond PDCA, several improvement models provide structured approaches for different contexts. Selecting the appropriate model depends on the improvement type, available data, and organisational familiarity with the approach.

Lean principles focus on eliminating waste from processes. Lean identifies eight waste types: defects, overproduction, waiting, non-utilised talent, transportation, inventory, motion, and extra processing. For IT service improvement, the most relevant waste types are waiting (queues, handoffs, approvals), defects (rework, errors, incidents), and extra processing (unnecessary steps, over-engineering). A Lean improvement initiative maps the current process, identifies waste at each step, and redesigns to eliminate or minimise waste.

Consider a change approval process taking 5 days on average. Lean analysis reveals: 2 days waiting in queue for CAB review, 1 day waiting for technical review, 0.5 days actual review work, 1.5 days waiting for approval signatures. Of 5 days elapsed time, 4.5 days (90%) is wait time. Lean improvement targets the wait waste: implement continuous CAB review rather than weekly meetings (eliminating 2-day queue), parallel technical review with CAB submission (eliminating 1-day sequential wait), electronic approval workflow (reducing signature wait from 1.5 days to same-day). Redesigned process achieves 1.5-day average duration, a 70% reduction.

Value stream mapping extends Lean analysis to end-to-end service delivery. A value stream map documents each step from service request to delivery, recording process time (when work is actively occurring) and wait time (when the request is idle). The ratio of process time to total time indicates efficiency. Service delivery value streams in IT commonly show 5-15% efficiency, meaning 85-95% of elapsed time involves no value-adding activity.
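
A back-of-envelope check of the change approval example above, computing the wait-time share and the flow efficiency (process time divided by total elapsed time):

    # Durations in days, from the change approval example above.
    steps = {
        "CAB queue (wait)": 2.0,
        "Technical review queue (wait)": 1.0,
        "Actual review (process)": 0.5,
        "Approval signatures (wait)": 1.5,
    }

    total = sum(steps.values())                                            # 5.0 days
    process = sum(d for name, d in steps.items() if "(process)" in name)   # 0.5 days
    wait = total - process                                                 # 4.5 days

    print(f"Wait time: {wait / total:.0%} of elapsed time")   # 90%
    print(f"Flow efficiency: {process / total:.0%}")          # 10%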

Six Sigma provides data-driven improvement methodology using statistical analysis. The DMAIC cycle (Define, Measure, Analyse, Improve, Control) structures improvement projects. Define establishes the problem and goal. Measure quantifies current performance. Analyse identifies root causes using statistical tools. Improve implements solutions. Control sustains gains through monitoring and adjustment.

Six Sigma suits improvements where variation causes problems. Incident resolution time varying from 30 minutes to 8 hours for similar incidents indicates process variation that statistical analysis can diagnose. Measurement might reveal that resolution time correlates with day of week (staffing variation), incident category (process gaps), or individual analyst (skill variation). Analysis isolates which factors contribute most to variation, enabling targeted improvement.

Kaizen emphasises small, continuous improvements made by the people doing the work. Rather than large improvement projects, Kaizen encourages daily identification and implementation of minor enhancements. A service desk analyst noticing that a common question lacks a knowledge article writes the article immediately. A system administrator recognising a manual step that could be scripted creates the script. The cumulative effect of many small improvements produces significant gains without the overhead of formal improvement initiatives.

Kaizen requires cultural support. Staff must have authority to make small changes without approval overhead. Time must be allocated for improvement work alongside operational duties. Recognition should celebrate improvement contributions. Organisations new to continual improvement often begin with Kaizen as the entry point, building improvement habits before introducing more structured methodologies.

Implementation and tracking

Implementation transforms approved improvements from plans into operational reality. The implementation approach depends on improvement complexity and organisational change management requirements.

Simple improvements involving process adjustments or documentation updates implement directly. The improvement owner executes the change, updates relevant documentation, and communicates to affected staff. Verification confirms the change is in place and functioning. The register status moves to implemented, and benefits tracking begins.

Complex improvements require structured implementation planning. The plan specifies tasks, dependencies, owners, timelines, and milestones. Implementation may proceed through the standard change management process for changes affecting live services. For improvements involving significant organisational change, communication plans, training requirements, and adoption support extend the implementation scope.

Progress tracking monitors implementation status against plan. The CSI register entry should capture: planned completion date, current status, percentage complete, blockers or issues, and next milestone. Regular review meetings examine all in-progress improvements, identify items deviating from plan, and determine corrective actions.

Blockers require escalation and resolution. A stalled improvement wastes the resources already invested and delays value realisation. Common blockers include: resource unavailability (staff allocated to operational priorities), dependency delays (prerequisite items not completing), scope creep (requirements expanding beyond original approval), and technical obstacles (implementation proving harder than estimated). The improvement owner should escalate blockers that cannot be resolved within their authority. Governance bodies should monitor blocker age: any blocker unresolved for more than 2 weeks warrants leadership attention.
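
A minimal sketch of how the two-week rule might be checked automatically, assuming blockers are recorded with the date they were raised; the field names are illustrative.

    from datetime import date, timedelta

    BLOCKER_ESCALATION_AGE = timedelta(weeks=2)  # threshold from the guidance above

    def blockers_needing_escalation(blockers, today=None):
        """Return blockers that have been open for more than two weeks.

        Each blocker is a dict like:
        {"initiative": "CSI-0042", "description": "Awaiting API access",
         "raised": date(2024, 1, 5), "resolved": None}
        """
        today = today or date.today()
        return [b for b in blockers
                if b["resolved"] is None
                and today - b["raised"] > BLOCKER_ESCALATION_AGE]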

Benefits realisation

Implementation completion does not equal improvement success. Benefits realisation verifies that the improvement delivers its expected value. Without realisation measurement, improvement decisions rely on estimates rather than evidence, and unsuccessful improvements continue consuming resources.

Benefits realisation begins with baseline measurement before implementation. The licence compliance automation example requires measuring current effort: track time spent on manual reporting for 2-3 months to establish the 6-hour monthly baseline. Without baseline measurement, post-implementation claims of savings lack supporting evidence.

Post-implementation measurement uses the same metrics after the improvement is operational. Measurement should occur after sufficient time for stabilisation; improvements involving behaviour change require adoption time. A 3-month stabilisation period before measuring benefits is appropriate for most improvements.

Realisation calculation compares actual outcomes to business case projections. The licence automation initiative projected £4,000 annual savings. Post-implementation measurement shows 1.5 hours monthly spent on the automated process (compared to projected 0.5 hours due to required manual validation steps), yielding 4.5 hours saved rather than 5.5 hours. Actual annual value: 4.5 hours × 12 months × £45 = £2,430, plus reduced errors at £720 = £3,150 total. This represents 79% of projected value; the improvement succeeded but underperformed the business case.
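
Restating the realisation calculation as arithmetic, with the hours and rates from the business case:

    HOURLY_RATE = 45

    baseline_hours = 6.0   # monthly manual effort before the improvement
    actual_hours = 1.5     # monthly effort after automation (validation steps remain)

    hours_saved = baseline_hours - actual_hours      # 4.5 hours per month
    effort_value = hours_saved * 12 * HOURLY_RATE    # £2,430 per year
    error_value = 720                                # avoided remediation, as before
    realised_value = effort_value + error_value      # £3,150

    projected_value = 4_000
    print(f"Realisation: {realised_value / projected_value:.0%}")  # ~79% of the business case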

Value leakage analysis investigates gaps between projected and realised value. Leakage sources include: over-optimistic projections (business case assumptions not validated), incomplete implementation (not all planned components delivered), adoption failure (users not using new capabilities), and changed circumstances (underlying situation shifted during implementation). Understanding leakage sources improves future estimation accuracy and identifies remediation opportunities.

Benefits realisation reporting provides feedback to governance structures. Quarterly realisation reports summarise: improvements completed in the period, projected versus actual value, realisation percentage, and leakage causes. This data calibrates prioritisation decisions: if improvement projections consistently overestimate value by 20-30%, prioritisation should discount projected values accordingly.

Embedding improvement in operations

Continual improvement achieves greatest impact when embedded in daily operations rather than operating as a separate activity. Integration points include incident management, problem management, service reviews, and individual work patterns.

Incident closure should prompt improvement consideration. The incident closure process includes a field asking whether the incident revealed an improvement opportunity. Analysts learn to recognise patterns: if resolving an incident required knowledge not easily found, that is a knowledge article opportunity; if resolution required steps that could be automated, that is an automation opportunity; if the incident recurred from a known cause, that is a permanent fix opportunity. The field populates from pick-list options enabling reporting: in a typical month, incident closure might identify 15-25 improvement opportunities, of which 5-10 warrant CSI register logging.

Problem closure explicitly creates improvement opportunities. Every closed problem should either have an associated improvement initiative for permanent resolution, or documented justification for why permanent resolution is not pursued. The problem management process links to the CSI register; problem records reference improvement initiatives, and improvement initiatives reference source problems.

Service review meetings dedicate agenda time to improvement. Rather than reviewing only performance metrics, reviews should examine: what prevented better performance, what improvements are underway to address gaps, and what new improvement opportunities emerged this period. Review minutes capture improvement actions with owners and due dates.

Time allocation for improvement work requires explicit protection. Without allocated time, operational demands consume all available capacity. Approaches include: dedicated improvement days (one day per fortnight reserved for improvement work), improvement sprints (focused improvement periods quarterly), and percentage allocation (10-15% of each person’s time for improvement activities). The chosen approach depends on operational predictability; organisations with highly variable operational load may find dedicated days impractical, while percentage allocation provides flexibility.

Scaling by organisational capacity

Organisational capacity determines appropriate continual improvement scope and governance. Attempting sophisticated improvement programmes without supporting capacity produces frustration and failure; under-investing in improvement when capacity exists foregoes value.

Minimal capacity contexts (IT responsibilities handled alongside other duties) should focus on capturing improvement ideas without elaborate process. A simple list (spreadsheet, shared document, or task list) suffices as a register. Prioritisation uses informal judgement rather than scoring models. Implementation integrates with regular work rather than operating as separate initiatives. Success looks like: 3-5 improvements implemented annually, visible value delivered, no improvement taking more than a few weeks of effort.

Single IT person contexts benefit from structured capture but lightweight governance. The CSI register should be more than a list, capturing value and effort estimates enabling comparison. Monthly self-review examines the register and plans the coming month’s improvement focus. Quarterly review with management provides accountability and visibility. Success looks like: 8-15 improvements implemented annually, documented value delivered, some improvements spanning multiple weeks.

Small IT team contexts support formal register management and regular governance meetings. Value-effort prioritisation enables objective comparison. Fortnightly or monthly improvement reviews maintain momentum. Some improvements may qualify as formal projects. Success looks like: 15-30 improvements implemented annually, measured benefits realised, mix of quick wins and larger initiatives.

Established IT function contexts warrant dedicated improvement coordination and structured governance. Improvement managers coordinate the portfolio. Improvement boards govern significant initiatives. Benefits realisation measurement tracks value delivery. Continuous improvement methodology (Lean, Six Sigma, or similar) may be adopted. Success looks like: 30+ improvements implemented annually, comprehensive benefits tracking, strategic improvements transforming service delivery.

Starting point for new programmes

Organisations establishing continual improvement for the first time should begin with capture and quick wins. Implement a simple register, identify 5-10 obvious improvement opportunities, and execute 2-3 quick wins delivering visible value. Early success builds credibility and momentum for expanding the programme.

Technology support

Technology tools support continual improvement through register management, workflow automation, and analytics. Tool selection should match organisational complexity; sophisticated platforms impose overhead that smaller organisations cannot justify.

Spreadsheet-based registers suit organisations with fewer than 30 active register entries. A spreadsheet captures required fields, enables sorting and filtering, and requires no additional licensing. Limitations emerge as volume grows: version control challenges when multiple people edit, no workflow automation, and manual status tracking.

ITSM platform integration leverages existing service management tools for improvement tracking. Platforms like ServiceNow, Freshservice, or Jira Service Management include improvement or project tracking modules. Integration provides automatic creation of improvement records from incidents or problems, workflow-driven status progression, and consolidated reporting. Configuration requires effort but avoids separate tool management.

Dedicated improvement platforms serve organisations with substantial improvement programmes. Tools designed for improvement portfolio management provide advanced prioritisation, resource planning, and benefits tracking. Examples include Planview, Changepoint, or specialised continuous improvement software. The investment suits organisations implementing 50+ improvements annually with formal governance requirements.

Collaboration tools support improvement work regardless of register platform. Improvement initiatives benefit from shared workspaces, document collaboration, and communication channels. Microsoft Teams channels, Slack channels, or similar provide improvement teams with coordination spaces.

Analytics capabilities matter as improvement programmes mature. Reporting needs include: register status summaries, aging analysis (how long items remain in each status), implementation timeline tracking, benefits realisation summaries, and trend analysis (improvement sources, categories, value delivered). Initial reporting uses spreadsheet pivot tables or ITSM platform reports. Mature programmes may warrant dashboard development for real-time visibility.
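
As one example of aging analysis, a few lines of pandas can summarise how long entries have sat in each status, assuming the register exports to CSV with a status column and a status-change date; the file and column names here are assumptions.

    import pandas as pd

    # Assumed CSV export of the register, one row per entry; column names illustrative.
    register = pd.read_csv("csi_register.csv", parse_dates=["status_since"])

    register["days_in_status"] = (pd.Timestamp.today() - register["status_since"]).dt.days

    # Aging summary: how long entries have been sitting in each status.
    aging = (register.groupby("status")["days_in_status"]
             .agg(["count", "median", "max"])
             .sort_values("median", ascending=False))
    print(aging)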
