Service Management Framework

A service management framework provides the structure through which an IT function delivers value to its organisation. The framework defines what constitutes a service, establishes how services are designed and delivered, determines who makes decisions about service priorities, and creates the feedback mechanisms that drive improvement. For mission-driven organisations, the framework must accommodate significant variation in resources, geographic distribution, and operational tempo while maintaining coherent service delivery.

The fundamental premise of service management is that IT exists to enable organisational outcomes rather than to operate technology for its own sake. This orientation shifts attention from infrastructure uptime and ticket counts toward the services that staff, partners, and beneficiaries actually consume. A grants management system, a field communications capability, a secure file sharing service: these represent services with defined purposes, users, and quality expectations. The framework provides the scaffolding for thinking about, organising, and improving these services systematically.

Service
A means of delivering value to consumers by facilitating outcomes they want to achieve without requiring them to manage specific costs and risks. The grants management system is a service; the database server running it is a component.
Service consumer
A role that uses services. Staff accessing email, field teams using mobile data collection, partners submitting reports through a portal: all are service consumers with distinct needs and expectations.
Service provider
The organisational function accountable for service delivery. In most mission-driven organisations, this is the IT team, though shared services and federated models create more complex provider relationships.
Practice
A set of organisational resources designed for performing work or accomplishing an objective. Incident management, change management, and asset management are practices that contribute to service delivery.

Service value system

The service value system describes how all components and activities of an organisation work together to enable value creation. Rather than presenting service management as a collection of independent processes, the value system emphasises their interconnection and shared purpose.

+-------------------------------------------------------------+
| SERVICE VALUE SYSTEM (SVS) |
+-------------------------------------------------------------+
| |
| [ OPPORTUNITY / DEMAND ] |
| | |
| v |
| +-------------------------------------------------------+ |
| | GUIDING PRINCIPLES | |
| | Value | Start | Progress | Collaborate | Holistic | |
| | Simple | Optimise & Automate | |
| +-------------------------------------------------------+ |
| | |
| v |
| +-------------------------------------------------------+ |
| | GOVERNANCE | |
| | Direction | Evaluation | Monitoring | |
| +-------------------------------------------------------+ |
| | |
| v |
| +-------------------------------------------------------+ |
| | SERVICE VALUE CHAIN | |
| | | |
| | +------+ +--------+ +--------+ +--------+ | |
| | | PLAN | -> | ENGAGE | -> | DESIGN | -> | OBTAIN | | |
| | +------+ +---+----+ +---+----+ +---+----+ | |
| | | | | | | |
| | | +-----+-------------+-------------+ | |
| | | | v | |
| | | | +--------------------+ | |
| | +-----+> | DELIVER & SUPPORT | | |
| | | +---------+----------+ | |
| | | | |
| | v | |
| | +-------------+ | |
| | | IMPROVE | | |
| | +-------------+ | |
| +-------------------------------------------------------+ |
| | |
| v |
| +-------------------------------------------------------+ |
| | PRACTICES | |
| | General | Service Management | Technical Management | |
| +-------------------------------------------------------+ |
| | |
| v |
| +-------------------------------------------------------+ |
| | CONTINUAL IMPROVEMENT | |
| +-------------------------------------------------------+ |
| | |
| v |
| [ VALUE ] |
| |
+-------------------------------------------------------------+

Figure 1: Service value system showing how opportunity transforms into value through governance, practices, and the service value chain

The system begins with opportunity and demand: requests for new services, incidents requiring resolution, changes needed to support programme expansion, or feedback indicating service gaps. These inputs enter a governed environment where guiding principles shape behaviour and decision-making.

The service value chain represents the operating model for creating value. Unlike linear lifecycles, the value chain is flexible: activities combine in different sequences depending on the work being done. Deploying a new service involves plan, engage, design and transition, obtain/build, and deliver and support. Resolving an incident primarily engages deliver and support with feedback into improve. The chain adapts to the work rather than forcing work into a rigid sequence.

Practices provide the detailed capabilities that value chain activities draw upon. When the deliver and support activity handles an incident, it uses the incident management practice. When the plan activity develops the annual IT roadmap, it draws on the portfolio management and financial management practices. Practices are resources applied where needed, not sequential steps in a workflow.

Continual improvement operates across the entire system. It applies to individual services, to practices, to governance mechanisms, and to the framework itself. Improvement is not a phase that occurs after operations stabilise; it is embedded in how the system functions.

Guiding principles

Seven principles guide behaviour and decision-making within the framework. These principles apply regardless of organisational size, resource level, or service management maturity. They provide consistency when adapting practices to specific contexts.

Focus on value requires understanding what consumers actually need rather than what providers assume they need. A field office does not want a VPN; it wants secure access to organisational systems that works reliably on limited bandwidth. The VPN is a means to that end. Service design, incident prioritisation, and change decisions flow from this value orientation. Measuring success means measuring outcomes achieved, not activities performed.

Start where you are rejects the premise that effective service management requires comprehensive tooling, extensive documentation, or mature processes as prerequisites. Every organisation has existing services, practices, and capabilities. Improvement builds on what exists rather than replacing it. An organisation managing incidents through email and spreadsheets can improve incident handling without first implementing an ITSM platform. The platform might come later; it is not the starting point.

Progress iteratively with feedback acknowledges that comprehensive plans rarely survive contact with reality. Small improvements delivered quickly generate learning that shapes subsequent improvements. Deploying a basic service catalogue to one department and refining it based on feedback produces better results than spending six months designing the perfect catalogue that no one uses. Iteration requires feedback mechanisms: user satisfaction surveys, incident trend analysis, post-implementation reviews, and regular service reviews.

Collaborate and promote visibility recognises that service management involves multiple teams, stakeholders, and perspectives. Silos between development and operations, between IT and programmes, and between headquarters and field offices impede value creation. Visibility into work, decisions, and outcomes builds trust and enables coordination. Shared dashboards, joint planning sessions, and cross-functional service reviews break down barriers.

Think and work holistically prevents local optimisation at the expense of overall value. Reducing change approval time means nothing if it increases failed changes and extends incident resolution. A new system that improves one department’s efficiency while creating integration problems for others destroys more value than it creates. Holistic thinking considers upstream and downstream effects, considers services as systems rather than components, and considers organisational outcomes rather than team metrics.

Keep it simple and practical guards against over-engineering. Practices should be as simple as possible while achieving their purpose. A three-tier change approval process makes sense for an organisation with hundreds of changes monthly; it creates unnecessary friction for an organisation averaging five changes monthly. Documentation should exist where it adds value; comprehensive documentation that no one reads adds no value. Simplicity is not the same as simplistic: the right level of complexity matches the context.

Optimise and automate applies effort where it matters most. Manual work that could be automated consumes resources without adding value. But automation of poorly designed processes locks in inefficiency. The sequence matters: first understand, then simplify, then automate. Password reset automation benefits an organisation handling fifty reset requests weekly; it provides minimal return where requests occur twice monthly.

Operating models

The operating model defines how service management capability is organised and distributed across the organisation. Three primary models exist, with most organisations implementing hybrids that combine elements based on service type, geographic distribution, and resource availability.

+------------------------------------------------------------------------+
| CENTRALISED MODEL |
+------------------------------------------------------------------------+
| |
| +------------------+ |
| | CENTRAL IT | |
| | | |
| | Service Desk | |
| | Change Mgmt | |
| | Asset Mgmt | |
| | All Practices | |
| +--------+---------+ |
| | |
| +-------------------+-------------------+ |
| | | | |
| v v v |
| +------+------+ +------+------+ +------+------+ |
| | Region A | | Region B | | HQ | |
| | Consumers | | Consumers | | Consumers | |
| +-------------+ +-------------+ +-------------+ |
| |
+------------------------------------------------------------------------+
+------------------------------------------------------------------------+
| FEDERATED MODEL |
+------------------------------------------------------------------------+
| |
| +----------------+ +----------------+ +----------------+ |
| | Region A | | Region B | | Region C | |
| | IT Unit | | IT Unit | | IT Unit | |
| | | | | | | |
| | Service Desk | | Service Desk | | Service Desk | |
| | Local Changes | | Local Changes | | Local Changes | |
| | Local Assets | | Local Assets | | Local Assets | |
| +-------+--------+ +-------+--------+ +-------+--------+ |
| | | | |
| +----------------------+----------------------+ |
| | |
| v |
| +------------------+ |
| | COORDINATION | |
| | (Standards, | |
| | Policies) | |
| +------------------+ |
| |
+------------------------------------------------------------------------+
+------------------------------------------------------------------------+
| HYBRID MODEL |
+------------------------------------------------------------------------+
| |
| +------------------+ |
| | CENTRAL IT | |
| | | |
| | Strategy | |
| | Architecture | |
| | Major Changes | |
| | Enterprise Apps | |
| +--------+---------+ |
| | |
| +--------------------+--------------------+ |
| | | | |
| v v v |
| +-------+--------+ +-------+--------+ +-------+--------+ |
| | Region A | | Region B | | Region C | |
| | | | | | | |
| | Local Support | | Local Support | | Local Support | |
| | Local Assets | | Local Assets | | Local Assets | |
| | Standard Chgs | | Standard Chgs | | Standard Chgs | |
| +----------------+ +----------------+ +----------------+ |
| |
+------------------------------------------------------------------------+

Figure 2: Three operating models for service management showing distribution of responsibilities

The centralised model concentrates all service management functions in a single team serving the entire organisation. A central service desk handles all contacts regardless of location. Change management, asset management, and other practices operate from headquarters. This model provides consistency, economies of scale, and clear accountability. It works well for organisations under 200 staff with limited geographic distribution and reliable connectivity between locations.

Centralisation creates challenges when time zones span more than eight hours, when connectivity is unreliable, or when local context significantly affects service delivery. A service desk operating from London cannot effectively support staff in Pacific time zones without extended hours or follow-the-sun arrangements. Field locations with intermittent connectivity cannot depend on real-time access to central systems for basic support.

The federated model distributes service management capability to regional or national units that operate semi-autonomously. Each unit maintains its own service desk, handles local changes, and manages local assets. Central coordination establishes policies, standards, and shared platforms, but operational decisions occur locally. This model suits organisations with strong regional autonomy, diverse operating contexts, or more than 500 staff distributed across time zones.

Federation risks inconsistency, duplication of effort, and fragmentation of knowledge. Different regions may solve similar problems independently, missing opportunities to share solutions. Consolidated reporting becomes difficult when regions use different tools or categorisation schemes. Staff transferring between regions encounter different practices and expectations.

The hybrid model allocates functions based on where they are best performed. Strategic planning, enterprise architecture, major change decisions, and core business systems operate centrally. Local support, routine changes, and regional assets operate in distributed units. This model attempts to capture benefits of both approaches: consistency where consistency matters, responsiveness where local context matters.

Hybrid models require clear decision rights and well-defined interfaces between central and distributed functions. Ambiguity about which changes require central approval, which incidents escalate to headquarters, or how local asset decisions align with organisational standards creates friction. Governance mechanisms must address boundary cases explicitly.

Selecting an operating model

The choice of operating model depends on organisational characteristics that can be assessed systematically.

Geographic distribution shapes the feasibility of centralisation. Organisations operating within a single time zone and country can centralise effectively. Organisations spanning twelve time zones with offices in twenty countries cannot provide responsive central support without significant investment in staffing and tools. Calculate the span by identifying earliest and latest local times across locations. Spans exceeding eight hours require either distributed support capability or acceptance of delayed response outside headquarters hours.
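The span calculation can be sketched in a few lines. This is a rough illustration assuming a handful of hypothetical offices with fixed UTC offsets; real offices shift with daylight saving, which a proper implementation would resolve through a time-zone database.

```python
# Hypothetical offices and their UTC offsets in hours (illustrative only;
# real deployments should look offsets up in a time-zone database).
office_utc_offsets = {
    "London HQ": 0,
    "Nairobi": 3,
    "Manila": 8,
    "Suva": 12,
}

def support_span_hours(offsets: dict[str, int]) -> int:
    """Hours between the earliest and latest local times across offices."""
    return max(offsets.values()) - min(offsets.values())

span = support_span_hours(office_utc_offsets)
print(f"Span: {span} hours")
if span > 8:
    print("Exceeds 8 hours: distributed support or extended hours needed")
```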

Connectivity reliability determines dependence on central systems. Cloud-based ITSM platforms require reliable internet access. Field offices with satellite connectivity, mobile-only access, or frequent outages cannot use central systems for time-sensitive activities. Where 95% of staff have reliable broadband, centralisation works. Where 40% of staff work from locations with intermittent connectivity, distributed capability is essential for those locations.

Local context variation affects whether standardised practices apply. Regulatory requirements differ by country. Technology infrastructure differs between urban headquarters and rural field sites. Language requirements differ by region. Where variation is modest, central practices work with minor adaptation. Where variation is significant, local practices shaped by local conditions deliver better outcomes.

Resource availability constraints drive pragmatic decisions. Centralisation is more resource-efficient when it is feasible. An organisation with three IT staff cannot distribute them across regions. Federation requires IT capability in each region, demanding either more total IT staff or acceptance that some regions operate without dedicated IT presence. The hybrid model attempts to extend limited central resources through local augmentation, using trained non-IT staff or managed services for local support while retaining central expertise for complex issues.

Organisational culture and governance norms influence what models can succeed. Organisations with strong headquarters authority can mandate central processes. Organisations with autonomous country offices that control their own budgets and make independent decisions cannot impose centralisation without significant political effort. The operating model must align with broader organisational governance or invest in changing that governance.

Practice architecture

Service management practices form an interconnected architecture where each practice both provides and consumes capabilities. Understanding these relationships prevents fragmented implementation and reveals dependencies that affect sequencing and investment decisions.

+------------------------------------------------------------------------+
| PRACTICE ARCHITECTURE |
+------------------------------------------------------------------------+
| |
| STRATEGIC LAYER |
| +------------------+ +------------------+ +------------------+ |
| | Service | | Portfolio | | Financial | |
| | Strategy | | Management | | Management | |
| +--------+---------+ +--------+---------+ +--------+---------+ |
| | | | |
| +---------------------+---------------------+ |
| | |
| v |
| DESIGN LAYER |
| +------------------+ +------------------+ +------------------+ |
| | Service | | Service | | Availability | |
| | Catalogue | | Level | | Management | |
| | Management | | Management | | | |
| +--------+---------+ +--------+---------+ +--------+---------+ |
| | | | |
| +---------------------+---------------------+ |
| | |
| v |
| TRANSITION LAYER |
| +------------------+ +------------------+ +------------------+ |
| | Change | | Release | | Knowledge | |
| | Management | | Management | | Management | |
| +--------+---------+ +--------+---------+ +--------+---------+ |
| | | | |
| +---------------------+---------------------+ |
| | |
| v |
| OPERATIONS LAYER |
| +------------------+ +------------------+ +------------------+ |
| | Incident | | Problem | | Request | |
| | Management | | Management | | Fulfilment | |
| +--------+---------+ +--------+---------+ +--------+---------+ |
| | | | |
| | +------------------+------------------+ | |
| | | | | |
| v v v v |
| +------------------+ +------------------+ |
| | Service Desk | | Monitoring and | |
| | Operations | | Event Management | |
| +------------------+ +------------------+ |
| |
| CROSS-CUTTING |
| +------------------+ +------------------+ +------------------+ |
| | Continual | | Configuration | | Asset | |
| | Improvement | | Management | | Management | |
| +------------------+ +------------------+ +------------------+ |
| |
+------------------------------------------------------------------------+

Figure 3: Practice architecture showing layers and dependencies

The layers represent different orientations rather than strict hierarchy. Strategic practices set direction and allocate resources. Design practices translate strategy into service specifications. Transition practices move new or changed services into operation. Operations practices deliver and support services in production. Cross-cutting practices support all layers: configuration management provides the data model, asset management tracks resources, continual improvement drives evolution.

Key dependencies shape implementation sequencing. Incident management requires a service desk to receive and route incidents. Problem management requires incident data to identify patterns. Change management requires configuration management data to assess impact. Service level management requires monitoring to measure achievement. Starting with practices that have fewer dependencies and building toward those with more dependencies creates a stable foundation.
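These dependencies form a small directed graph, and a topological sort orders practices so that foundations come before the practices that rely on them. A sketch using Python's standard-library graphlib, with the dependency edges taken from the paragraph above; the practice names are illustrative labels, not a fixed taxonomy.

```python
from graphlib import TopologicalSorter

# Each practice maps to the practices it depends on, per the text:
# incidents need a service desk, problems need incident data, changes
# need configuration data, service levels need monitoring.
practice_deps = {
    "incident management": {"service desk"},
    "problem management": {"incident management"},
    "change management": {"configuration management"},
    "service level management": {"monitoring"},
    "service desk": set(),
    "configuration management": set(),
    "monitoring": set(),
}

# static_order() yields each practice only after all its dependencies.
order = list(TopologicalSorter(practice_deps).static_order())
print(order)
```

The same graph also flags circular dependencies: TopologicalSorter raises a CycleError if two practices are declared to depend on each other.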

For minimal implementations, incident management and request fulfilment provide immediate value through structured handling of daily work. Change management adds control over modifications. These three practices form a viable starting point. Asset and configuration management provide the data foundation that increases effectiveness of other practices but require investment to establish and maintain.

Governance structure

Service management governance establishes who makes which decisions, under what constraints, and with what accountability. Effective governance prevents both paralysis from excessive approval requirements and chaos from insufficient oversight.

Decision domains fall into three categories: strategic decisions about service portfolio, direction, and major investments; tactical decisions about service design, priorities, and resource allocation; and operational decisions about daily service delivery and routine changes.

Strategic governance occurs through executive leadership engagement with IT. Annual planning cycles, major investment decisions, and service portfolio changes fall here. The governing body depends on organisational structure: a senior leadership team, an IT steering committee, or executive director oversight of the IT function. Strategic decisions require quarterly or annual attention rather than continuous engagement.

Tactical governance occurs through regular interaction between IT leadership and service owners. Service reviews, change advisory boards, and priority-setting forums operate at this level. Meeting cadence ranges from weekly to monthly depending on organisational tempo. Decisions about service levels, change schedules, and resource allocation belong here.

Operational governance occurs within the IT function through team leads, duty managers, and defined procedures. Incident prioritisation, standard change approval, and daily work allocation happen without escalation when they follow established guidelines. Escalation paths exist for exceptions that exceed operational authority.

+------------------------------------------------------------------------+
| GOVERNANCE STRUCTURE |
+------------------------------------------------------------------------+
| |
| [ STRATEGIC LAYER ] |
| +-----------------------------------------------------------------+ |
| | LEADERSHIP / BOARD | |
| | | |
| | Decisions: Portfolio direction, investments, global policy | |
| | Cadence: Quarterly / Annual | |
| | Members: Executives, Trustees, IT Director | |
| +-----------------------------------------------------------------+ |
| | |
| v |
| |
| [ TACTICAL LAYER ] |
| +-----------------------------------------------------------------+ |
| | IT STEERING / SERVICE REVIEW | |
| | | |
| | Decisions: Service priorities, SLA targets, resources | |
| | Cadence: Monthly | |
| | Members: IT Leadership, Dept Heads, Service Owners | |
| +-----------------------------------------------------------------+ |
| | |
| +---------------+---------------+ |
| | | |
| v v |
| +------------------+ +------------------+ |
| | CHANGE ADVISORY | | SERVICE OWNER | |
| | BOARD | | FORUMS | |
| +------------------+ +------------------+ |
| | Decisions: Apprv | | Decisions: Reqs, | |
| | Scheduling | | Acceptance | |
| | Cadence: Weekly | | Cadence: Varies | |
| +------------------+ +------------------+ |
| | | |
| +---------------+---------------+ |
| | |
| v |
| |
| [ OPERATIONAL LAYER ] |
| +-----------------------------------------------------------------+ |
| | IT OPERATIONS | |
| | | |
| | Decisions: Incident priority, standard changes, daily tasks | |
| | Cadence: Continuous | |
| | Authority: Team Leads, Duty Managers, Standard Procedures | |
| +-----------------------------------------------------------------+ |
| |
+------------------------------------------------------------------------+

Figure 4: Governance structure showing decision layers and cadences

Decision rights documentation clarifies authority boundaries. A decision rights matrix specifies which decisions each role can make unilaterally, which require consultation, which require approval, and which must be escalated. For example: the IT manager approves changes with an estimated duration under four hours that affect fewer than 50 users; changes exceeding either threshold require CAB approval; changes affecting donor-facing systems require executive sponsor sign-off regardless of size.
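A matrix like this can also be encoded directly, which makes the boundaries explicit and testable. A minimal sketch of the example thresholds above; the field names and structure are illustrative, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    estimated_hours: float   # estimated duration of the change
    users_affected: int      # number of users the change touches
    donor_facing: bool       # does it affect donor-facing systems?

def approval_route(change: ChangeRequest) -> str:
    """Apply the example decision-rights thresholds in order of precedence."""
    if change.donor_facing:
        return "executive sponsor"   # sign-off regardless of size
    if change.estimated_hours < 4 and change.users_affected < 50:
        return "IT manager"          # within unilateral authority
    return "CAB"                     # exceeds one or both thresholds

print(approval_route(ChangeRequest(2, 30, False)))  # IT manager
print(approval_route(ChangeRequest(6, 30, False)))  # CAB
print(approval_route(ChangeRequest(1, 10, True)))   # executive sponsor
```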

Governance overhead must match organisational capacity. An organisation with two IT staff cannot sustain weekly CAB meetings, monthly steering committees, and quarterly board reviews. Combining functions, reducing meeting frequency, and using asynchronous approval workflows for routine decisions keeps governance sustainable. The test is whether governance enables better decisions without creating bottlenecks that delay value delivery.

Maturity progression

Service management maturity describes the degree to which practices are defined, consistent, measured, and improved. Maturity models provide a framework for assessing current state and planning advancement.

+------------------------------------------------------------------------+
| MATURITY PROGRESSION |
+------------------------------------------------------------------------+
| |
| LEVEL 1: INITIAL |
| +------------------------------------------------------------------+ |
| | Ad-hoc response to issues | |
| | Heroic individuals keep things running | |
| | No documented procedures | |
| | Success depends on specific people | |
| +------------------------------------------------------------------+ |
| | |
| v |
| LEVEL 2: REPEATABLE |
| +------------------------------------------------------------------+ |
| | Basic procedures documented for key activities | |
| | Consistent handling of common scenarios | |
| | Some tracking of work (tickets, logs) | |
| | Knowledge exists but not systematically shared | |
| +------------------------------------------------------------------+ |
| | |
| v |
| LEVEL 3: DEFINED |
| +------------------------------------------------------------------+ |
| | Practices documented and standardised | |
| | Roles and responsibilities clear | |
| | Metrics collected and reported | |
| | Training provided for practice adoption | |
| +------------------------------------------------------------------+ |
| | |
| v |
| LEVEL 4: MANAGED |
| +------------------------------------------------------------------+ |
| | Practices measured against targets | |
| | Quantitative quality goals established | |
| | Root cause analysis drives improvement | |
| | Consistent performance across teams | |
| +------------------------------------------------------------------+ |
| | |
| v |
| LEVEL 5: OPTIMISING |
| +------------------------------------------------------------------+ |
| | Continuous improvement embedded | |
| | Innovation and adaptation routine | |
| | Quantitative process improvement | |
| | Industry-leading practices | |
| +------------------------------------------------------------------+ |
| |
+------------------------------------------------------------------------+

Figure 5: Service management maturity levels showing progression characteristics

Level 1 (Initial) characterises organisations without formal service management. Work happens, but how it happens depends on who is doing it. Knowledge resides in individual heads. Departure of key staff creates significant disruption. Most organisations begin here before deliberately adopting service management practices.

Level 2 (Repeatable) introduces consistency for common scenarios. Password reset follows a documented procedure. Incident handling has defined steps. New staff can handle routine work by following instructions. Unusual situations still depend on experienced staff judgment. This level suffices for organisations with stable, limited service scope.

Level 3 (Defined) standardises practices across the organisation. All teams handle incidents the same way. Roles and responsibilities are documented and understood. Metrics exist and are reported, though not necessarily used for decision-making. Most mission-driven organisations with dedicated IT functions should target Level 3 as a sustainable operating state.

Level 4 (Managed) adds quantitative management. Targets are set, performance is measured against targets, and variation triggers investigation and correction. Improvement becomes data-driven. This level requires sustained investment in measurement and analysis that exceeds what most small and mid-sized organisations can sustain.

Level 5 (Optimising) represents continuous, systematic improvement embedded in how work happens. Reaching this level requires organisational commitment beyond the IT function. Few mission-driven organisations operate here, and doing so is not necessarily appropriate given competing priorities.

Assessment approach

Assessing maturity provides a baseline for improvement planning. Assessment examines each practice against maturity characteristics.

A lightweight assessment suitable for organisations starting their service management journey examines core practices: incident management, change management, and request fulfilment. For each practice, determine: Does a documented procedure exist? Do staff follow it consistently? Are outcomes tracked? Are results reviewed and acted upon?

Scoring each practice on the five-level scale produces a baseline. An organisation might score Level 2 on incident management (basic procedure exists, followed inconsistently), Level 1 on change management (no formal process), and Level 2 on request fulfilment (common requests handled consistently). This profile identifies change management as the most significant gap.
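Turning such a profile into a prioritised gap list takes only a few lines. A sketch using the hypothetical scores from the example above, with Level 3 as the target:

```python
# Hypothetical baseline from a lightweight assessment (levels 1-5).
baseline = {
    "incident management": 2,  # procedure exists, followed inconsistently
    "change management": 1,    # no formal process
    "request fulfilment": 2,   # common requests handled consistently
}

TARGET = 3  # Level 3 (Defined), a sustainable target for most organisations

# Distance of each practice from the target level.
gaps = {practice: TARGET - level for practice, level in baseline.items()}
priority = max(gaps, key=gaps.get)
print(f"Largest gap: {priority}, {gaps[priority]} levels below target")
```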

Assessment involves reviewing documentation, interviewing staff, and observing practice. Documentation alone is insufficient: procedures that exist on paper but are not followed indicate Level 1, not Level 3. Staff interviews reveal whether practices are understood and followed. Observation confirms that described practices match actual behaviour.

External assessment provides objectivity but costs money. Self-assessment is free but risks bias. Combining self-assessment with peer review from another team or organisation balances cost and objectivity.

Implementation considerations

Minimal service management

Organisations with one or two IT staff, no dedicated service management tools, and competing priorities need an approach that delivers value without consuming all available capacity. Minimal service management focuses on consistency over comprehensiveness.

Start with a shared mailbox and spreadsheet. All IT requests come to one address, eliminating the informal channels that fragment work. A spreadsheet tracks open requests, who owns them, and when they were received. This simple mechanism provides visibility and prevents work from being forgotten. It costs nothing beyond the discipline to use it.
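The same discipline works with a plain CSV file in place of a spreadsheet. A rough sketch, assuming a hypothetical it_requests.csv and the columns named in the text:

```python
import csv
from datetime import date
from pathlib import Path

TRACKER = Path("it_requests.csv")  # hypothetical file name
FIELDS = ["received", "requester", "summary", "owner", "status"]

def log_request(requester: str, summary: str, owner: str) -> None:
    """Append a new open request, writing the header row on first use."""
    is_new = not TRACKER.exists()
    with TRACKER.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({"received": date.today().isoformat(),
                         "requester": requester, "summary": summary,
                         "owner": owner, "status": "open"})

def open_requests() -> list[dict]:
    """Rows still marked open: the agenda for the weekly review."""
    if not TRACKER.exists():
        return []
    with TRACKER.open(newline="") as f:
        return [r for r in csv.DictReader(f) if r["status"] == "open"]
```

Closing a request means editing its status cell to something other than "open", exactly as one would in the spreadsheet; the point is the single shared record, not the tool.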

Document the three most common request types and how to handle them. Password reset, new user setup, and equipment request represent common starting points. Each procedure fits on one page. Staff handling these requests follow the procedure, producing consistent outcomes regardless of who handles the request. Add procedures for other common scenarios over time.

Hold a weekly 30-minute review of open items. What is stuck? What keeps recurring? What should we do differently? This lightweight governance creates accountability and captures improvement opportunities without elaborate meeting structures.

This minimal approach achieves Level 2 maturity for core practices. It suits organisations with fewer than 100 staff, stable technology environments, and limited IT resources. Advancing beyond this level requires additional investment that must be justified by organisational need.

Scaling for growth

Organisations adding IT staff, expanding services, or increasing user populations must scale service management capability. Growth exposes gaps that were manageable at smaller scale.

Tooling investment becomes worthwhile when the spreadsheet and mailbox create friction. Threshold indicators: more than 50 open items at any time, more than two people handling requests, requests from more than 100 users. Free and low-cost ITSM tools provide ticketing, knowledge bases, and basic reporting without significant investment. Tool selection is covered in the ITSM and Help Desk benchmark.
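
The threshold indicators above can be expressed as a simple check. This is a sketch of the rule of thumb in the text, not a formal decision model; the function name is an assumption:

```python
def tooling_worthwhile(open_items, handlers, users):
    """Return True when any threshold from the guidance is exceeded:
    more than 50 open items, more than two request handlers,
    or more than 100 users."""
    return open_items > 50 or handlers > 2 or users > 100

# Below all thresholds: the mailbox and spreadsheet still suffice.
print(tooling_worthwhile(open_items=30, handlers=2, users=80))
# A third handler joins: coordination friction justifies a tool.
print(tooling_worthwhile(open_items=30, handlers=3, users=80))
```

Crossing any single threshold is a prompt to evaluate tooling, not a mandate; the indicators signal friction, and friction is what justifies the investment.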

Role specialisation emerges as workload increases. At two or three IT staff, informal coordination suffices. Beyond three staff, designated responsibilities become necessary: who triages incoming requests, who approves changes, who maintains documentation. These need not be full-time roles; they represent accountability for specific functions.

Process formalisation addresses inconsistency that causes problems. When incidents are handled differently by different staff and users complain about variable service, formalising incident handling becomes worthwhile. When changes cause outages because impacts were not assessed, formalising change management becomes worthwhile. Let problems drive formalisation rather than formalising speculatively.

Governance structures emerge to coordinate growing complexity. A weekly IT team meeting that reviews major incidents, upcoming changes, and resource allocation provides tactical governance. A monthly meeting with department heads to review service performance and gather requirements provides strategic input. Structures should feel lightweight; if meetings feel burdensome, reduce frequency or scope.

Federated environments

Organisations with autonomous country offices, merged entities with separate IT histories, or distributed teams require federation approaches that balance local responsiveness with organisational coherence.

Define what must be consistent versus what can vary locally. Identity and access management, data protection practices, and security controls warrant consistency because variation creates risk. Service desk procedures, hardware choices, and local application support can vary to suit local context. Document these distinctions explicitly.
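
One way to document these distinctions explicitly is a simple scope map that any office can consult. The sketch below uses the domains named above; representing the map as a Python dictionary (rather than a policy document or wiki table) is purely illustrative:

```python
# Illustrative consistency map, drawn from the distinctions in the text.
# "consistent" domains are set centrally; "local" domains vary by office.
scope = {
    "identity and access management": "consistent",
    "data protection practices": "consistent",
    "security controls": "consistent",
    "service desk procedures": "local",
    "hardware choices": "local",
    "local application support": "local",
}

must_be_consistent = [d for d, s in scope.items() if s == "consistent"]
```

However it is recorded, the map should make the default unambiguous: if a domain is not listed as locally variable, offices check before diverging.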

Establish shared service offerings for economies of scale. Email, collaboration platforms, identity services, and core business applications benefit from central provision even in federated models. Local units consume these services rather than operating their own. Shared service agreements define what the central function provides and what local units remain responsible for.

Create communities of practice that connect distributed IT staff. Monthly calls where regional IT coordinators share challenges and solutions build knowledge networks and prevent isolated reinvention. Shared documentation repositories capture solutions for common problems. These horizontal connections complement vertical reporting relationships.

Accept that consistency is aspirational rather than guaranteed. Federated environments involve negotiation, influence, and gradual alignment rather than mandated uniformity. Central functions provide value by making compliance easier than non-compliance: good standards, useful templates, shared contracts, and expert support make local adoption attractive.

High-risk and field contexts

Organisations operating in conflict zones, unstable regions, or areas with surveillance concerns must adapt service management to these realities.

Availability expectations adjust for context. An 8-hour incident resolution target makes sense for headquarters. It is unrealistic for a field site accessible only by chartered flight. Define service levels appropriate to context and communicate them to users so expectations align with capability.

Continuity planning assumes disruption. Field sites may lose connectivity for days. Evacuation may require abandoning equipment. Staff may change suddenly due to security situations. Service management must accommodate these scenarios: offline-capable systems, documented procedures that new staff can follow, and recovery plans that assume worst cases.

Security classification affects what can be discussed and documented. Incident details involving protection data or sensitive operational information cannot be captured in shared ticketing systems without appropriate controls. Separate handling procedures for classified incidents prevent exposure while maintaining service management discipline.

Local knowledge becomes more critical when central support is unavailable. Staff in remote locations must solve more problems independently. Comprehensive, accessible documentation, offline-available knowledge bases, and training for common scenarios reduce dependence on real-time central support.

See also