Service Desk Operations
A service desk functions as the single point of contact between IT services and the people who use them. The service desk receives contacts through defined channels, routes them to appropriate resolution resources, and tracks outcomes until closure. This function differs from a help desk, which handles break-fix technical issues only, and from a call centre, which processes high-volume transactional interactions. The service desk encompasses incident reporting, service requests, queries, and feedback, serving as both an operational hub and an information source about service health.
- Contact
- Any interaction with the service desk regardless of channel or type. A contact becomes an incident, service request, or information query based on categorisation.
- First Contact Resolution (FCR)
- Resolution achieved during the initial contact without escalation, callback, or follow-up interaction. Measured as a percentage of total contacts.
- Mean Time to Resolve (MTTR)
- Average elapsed time from contact receipt to confirmed resolution across all contacts or a specific category.
- Deflection
- Contacts avoided through self-service, automation, or proactive communication. Calculated as self-service resolutions divided by potential contact volume.
Organisational models
Service desk structure determines staffing efficiency, response capability, and user experience. Four primary models exist, each with distinct characteristics that suit different organisational configurations.
A local service desk operates at each significant site, staffed by personnel physically present at that location. This model provides face-to-face support, handles walk-up requests, and maintains direct relationships with local users. Local desks excel at resolving hardware issues requiring physical intervention and at understanding site-specific systems and processes. The cost structure scales linearly with location count: an organisation with 12 country offices requires 12 service desk functions, even if each handles only 5-10 contacts daily. This model suits organisations where physical presence drives resolution capability or where connectivity between locations is unreliable.
A centralised service desk consolidates all support into a single location serving all users regardless of geography. Staff members handle contacts from across the organisation through remote channels. This model achieves economies of scale: a team of 8 analysts can serve 800 users across 15 locations, whereas local desks might require 15-20 staff for equivalent coverage. Centralised desks develop deeper specialisation because analysts encounter diverse issues repeatedly. The model requires reliable connectivity from all locations and acceptance that physical intervention requires either local IT presence for hardware or shipping logistics.
+------------------------------------------------------+
|              CENTRALISED SERVICE DESK                |
|                  (Single Location)                   |
+------------------------------------------------------+
|                                                      |
|  +--------------------+      +--------------------+  |
|  |    Analyst pool    |      |  Specialist pool   |  |
|  |    (generalists)   |      |  (applications,    |  |
|  |                    |      |   infrastructure)  |  |
|  |  - First contact   |      |                    |  |
|  |  - Triage          |      |  - Complex issues  |  |
|  |  - Known issues    |      |  - Escalations     |  |
|  +----------+---------+      +-----------+--------+  |
|             |                            |           |
|             +-------------+--------------+           |
|                           |                          |
|                  +--------v--------+                 |
|                  |  Queue manager  |                 |
|                  +--------+--------+                 |
|                           |                          |
+---------------------------+--------------------------+
                            |
        +-------------+-----+-------+-------------+
        |             |             |             |
        v             v             v             v
   +---------+   +---------+   +---------+   +---------+
   |   HQ    |   | Region A|   | Region B|   |  Field  |
   |  users  |   | offices |   | offices |   |  sites  |
   +---------+   +---------+   +---------+   +---------+

Figure 1: Centralised service desk serving distributed locations
A virtual service desk distributes staff across multiple locations while presenting a unified service to users. Contacts route to available analysts regardless of their physical location, with workload distributed through queue management systems. This model combines local presence benefits with centralised efficiency. An organisation might position analysts in three time zones, providing 16-hour coverage without shift premiums. Virtual desks require robust telephony and ticketing systems that route contacts intelligently and provide consistent information to all analysts regardless of location.
A follow-the-sun model extends virtual desk principles to provide continuous coverage by positioning teams whose combined working hours span 24 hours. A three-team configuration provides seamless handover, with each team covering an eight-hour block: the Nairobi team (UTC+3) covers 06:00-14:00 UTC, the London team (UTC+0) covers 14:00-22:00 UTC, and the Manila team (UTC+8) covers 22:00-06:00 UTC. This arrangement requires standardised processes, shared documentation, and clear handover procedures because an issue opened by one team may be resolved by another.
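The rotation reduces to a simple hour-of-day lookup. A minimal Python sketch, with team names and shift boundaries taken from the schedule above (the function name is illustrative):

```python
def covering_team(utc_hour: int) -> str:
    """Map an arrival hour (UTC, 0-23) to the team on shift in the
    three-team follow-the-sun rotation described above."""
    if 6 <= utc_hour < 14:
        return "Nairobi"   # 06:00-14:00 UTC
    if 14 <= utc_hour < 22:
        return "London"    # 14:00-22:00 UTC
    return "Manila"        # 22:00-06:00 UTC, wrapping past midnight
```

A contact logged at 21:59 UTC belongs to London; one minute later ownership passes to Manila, which is why shared documentation and explicit handover procedures matter.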
Model selection depends on user distribution, connectivity reliability, physical support requirements, and budget constraints. Organisations operating in contexts with unreliable internet connectivity often maintain local service desk presence even when centralisation would otherwise be more efficient. Those with predictable connectivity and minimal hardware support requirements benefit from centralisation.
Channel strategy
Users reach the service desk through channels that vary in cost, user preference, resolution speed, and information richness. Effective channel strategy balances accessibility against operational efficiency.
Email remains prevalent due to user familiarity and asynchronous convenience. Users compose requests when convenient and receive responses without real-time availability. Email carries rich information including attachments, screenshots, and forwarded context. Response expectations vary: organisations set targets from 4 hours to 24 hours for initial acknowledgement. Email disadvantages include difficulty categorising and routing unstructured text, potential for incomplete information requiring follow-up, and lack of real-time interaction for complex issues.
Telephone provides immediate interaction and rapid clarification. Analysts gather information through questioning, guide users through diagnostic steps, and often resolve issues during the call. Phone support costs 3-5 times more per contact than email due to analyst time commitment but achieves higher first contact resolution rates (60-75% versus 30-45% for email). Telephone requires staffing for peak periods and creates queuing challenges during high-volume periods.
Self-service portals present structured forms that guide users through categorisation and information provision. A well-designed portal collects required details upfront: asset tag, error message, steps to reproduce. This structured capture improves routing accuracy and reduces analyst time spent gathering information. Portals integrate with the service catalogue, showing available services and their fulfilment times. Portal adoption requires investment in user communication and interface design; poorly designed portals drive users back to email.
Chat provides real-time text interaction with shorter session times than telephone. Analysts can handle 2-3 concurrent chat sessions versus one phone call, improving efficiency. Chat suits quick queries and straightforward issues but struggles with complex problems requiring extended investigation. Chat platforms vary from basic web chat to integration with collaboration tools like Microsoft Teams or Slack.
Walk-up support serves users at physical locations through face-to-face interaction. Walk-up suits device issues, procurement questions, and situations where showing is easier than describing. Tech bars or scheduled drop-in sessions provide structured walk-up without requiring dedicated desk staffing throughout operating hours.
                 +-----------------+
                 | CONTACT RECEIPT |
                 +--------+--------+
                          |
            +-------------+-------------+
            |                           |
            v                           v
   +-----------------+         +-----------------+
   |   Structured    |         |  Unstructured   |
   |  (portal, chat) |         |  (email, phone) |
   +--------+--------+         +--------+--------+
            |                           |
            v                           v
   +-----------------+         +-----------------+
   | Auto-categorise |         | Analyst triage  |
   | from form fields|         | and categorise  |
   +--------+--------+         +--------+--------+
            |                           |
            +-------------+-------------+
                          |
                          v
               +---------------------+
               | Category + priority |
               +----------+----------+
                          |
        +-----------------+---------------+
        |                 |               |
        v                 v               v
+--------------+ +--------------+ +--------------+
| Self-service | |   Analyst    | |  Specialist  |
|  resolution  | |  resolution  | |  escalation  |
+--------------+ +--------------+ +--------------+

Figure 2: Channel routing and categorisation flow
Channel costs vary substantially. A service request submitted through a self-service portal with automated fulfilment costs under £1; the same request costs £8-12 in analyst time via email and £15-25 via telephone. Walk-up support costs £20-30 per interaction including analyst travel or dedicated space costs. These figures drive channel shift strategies that encourage users toward lower-cost channels for appropriate interaction types while preserving high-touch channels for complex needs.
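A simple cost model makes the channel-shift argument concrete. The unit costs below are midpoints of the ranges quoted above; the before/after volumes are purely illustrative assumptions:

```python
# Midpoints of the per-contact cost ranges quoted above (illustrative assumptions)
COST_PER_CONTACT = {"portal": 1.0, "email": 10.0, "phone": 20.0, "walk-up": 25.0}

def monthly_channel_cost(volumes: dict) -> float:
    """Total handling cost for a month of contacts across channels."""
    return sum(COST_PER_CONTACT[channel] * count for channel, count in volumes.items())

# Hypothetical volumes before and after a channel-shift campaign:
# 600 requests move from email to portal, 100 from phone to portal
before = {"portal": 200, "email": 1200, "phone": 400, "walk-up": 50}
after = {"portal": 900, "email": 600, "phone": 300, "walk-up": 50}
saving = monthly_channel_cost(before) - monthly_channel_cost(after)
```

Shifting 700 of 1,850 monthly contacts to the portal cuts the monthly handling cost by roughly a third under these assumed figures, without changing total contact volume.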
Channel strategy for field contexts requires additional consideration. Locations with intermittent connectivity cannot rely on real-time channels. Email remains functional even with delayed synchronisation. Offline-capable self-service portals that sync when connected provide structured capture without connectivity dependency. Satellite phone or radio backup supports critical issues when primary channels fail.
Staffing models and skills
Service desk staffing balances breadth of coverage against depth of expertise. The total required capacity derives from contact volume, handle time, and service level targets.
The Erlang C formula calculates required staff for a target service level. For a desk receiving 50 contacts per hour with 10-minute average handle time and a target of answering 80% of contacts within 60 seconds, the calculation indicates 12 analysts required. This formula accounts for queuing dynamics where arrival patterns are random but service must meet consistent targets.
Worked example for a centralised desk:
- Daily contact volume: 180 contacts
- Operating hours: 9 hours (08:00-17:00)
- Contacts per hour: 20
- Average handle time: 12 minutes
- Target: 80% answered within 90 seconds
- Required analysts: 5-6 at steady state, 7-8 during peaks
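The Erlang C staffing calculation can be sketched in a few lines of Python. Treat this as a planning aid rather than a precise schedule: it assumes random arrivals, a steady average handle time, and no abandonment.

```python
import math

def erlang_c_wait_probability(agents: int, traffic_erlangs: float) -> float:
    """Erlang C probability that an arriving contact has to queue."""
    if agents <= traffic_erlangs:
        return 1.0  # queue is unstable; every contact waits
    term, series = 1.0, 1.0  # term = a^k / k!, starting at k = 0
    for k in range(1, agents):
        term *= traffic_erlangs / k
        series += term
    numerator = term * traffic_erlangs / (agents - traffic_erlangs)
    return numerator / (series + numerator)

def required_agents(contacts_per_hour: float, handle_time_min: float,
                    answer_within_sec: float, target_level: float) -> int:
    """Smallest team size meeting the service level target."""
    traffic = contacts_per_hour * handle_time_min / 60.0  # offered load, erlangs
    handle_sec = handle_time_min * 60.0
    agents = max(1, math.ceil(traffic))
    while True:
        p_wait = erlang_c_wait_probability(agents, traffic)
        level = 1.0 - p_wait * math.exp(-(agents - traffic) * answer_within_sec / handle_sec)
        if level >= target_level:
            return agents
        agents += 1
```

For 50 contacts per hour with a 10-minute handle time and an 80%-in-60-seconds target, this returns 12, matching the figure above. For the worked example (20 per hour, 12 minutes, 80% within 90 seconds) it returns 7, in line with the peak figure; the steady-state 5-6 reflects raw occupancy (4 erlangs) before queueing targets are applied.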
Staffing models address coverage requirements differently. A single-person IT function handling service desk duties alongside other responsibilities cannot provide continuous availability. Such configurations use asynchronous channels (email, portal) as primary contact methods, with telephone available during defined hours. Response targets adjust accordingly: 4-hour acknowledgement rather than 90-second answer.
Skill requirements span technical knowledge, communication ability, and service orientation. Frontline analysts require breadth across common technologies: productivity applications, authentication systems, basic networking, mobile devices. They need diagnostic questioning skills to gather relevant information and interpersonal skills to manage frustrated users. Technical depth comes from specialists who handle escalations.
Training pathways develop these skills progressively. New analysts shadow experienced colleagues for 2-4 weeks, handling contacts with supervision. Competency frameworks define expected capabilities at each level: a Level 1 analyst resolves password issues and application queries; Level 2 handles account provisioning and basic system configuration; Level 3 addresses infrastructure issues and complex integrations. Progression requires demonstrated competency plus passing assessments or certifications.
TIERED SUPPORT MODEL

USER CONTACTS
     |
     v
+---------+     +------------------------------------+
| TIER 0  |---->| Self-Service                       |
| 40-60%  |     | - Password reset                   |
+----+----+     | - Knowledge base                   |
     |          | - Automated provisioning           |
     |          +------------------------------------+
     | Unresolved
     v
+---------+     +------------------------------------+
| TIER 1  |---->| Service Desk Analysts              |
| 60-75%  |     | - Known issue resolution           |
|  FCR    |     | - Guided troubleshooting           |
+----+----+     | - Request logging and routing      |
     |          | - Basic account management         |
     |          +------------------------------------+
     | Escalate
     v
+---------+     +------------------------------------+
| TIER 2  |---->| Technical Specialists              |
| 85-95%  |     | - Application support              |
|  Cum.   |     | - Infrastructure issues            |
+----+----+     | - Complex troubleshooting          |
     |          | - Configuration changes            |
     |          +------------------------------------+
     | Escalate
     v
+---------+     +------------------------------------+
| TIER 3  |---->| Engineering / Vendors              |
| 100%    |     | - Code fixes                       |
+---------+     | - Architecture issues              |
                | - Vendor escalation                |
                +------------------------------------+

Figure 3: Tiered support structure with cumulative resolution rates
An alternative to tiered support is the swarming model, where contacts go directly to the analyst best positioned to resolve them based on skills and availability. Swarming eliminates handoffs between tiers, reducing resolution time for complex issues. However, swarming requires higher average skill levels across all analysts and sophisticated routing to match contacts with appropriate resources. Swarming suits smaller teams where all members have broad competency; tiered models suit larger teams where specialisation improves efficiency.
Queue management and workload distribution
Queue management determines which contacts analysts handle and in what order. Effective queue management balances service level achievement against analyst utilisation and fair workload distribution.
Priority-based queuing processes high-priority contacts before lower-priority ones regardless of arrival order. A P1 incident affecting multiple users takes precedence over a P3 service request submitted earlier. This approach achieves service level targets for critical issues but can starve lower-priority contacts during busy periods. Starvation prevention mechanisms bump contact priority after defined wait times: a P4 request waiting over 4 hours might elevate to P3 processing priority.
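Starvation prevention can be expressed as an effective-priority rule. The sketch below bumps a contact one level for every four hours waited, capped at P1; the four-hour threshold comes from the example above, while generalising it to one bump per interval is an assumption:

```python
def effective_priority(priority: int, waited_hours: float) -> int:
    """Processing priority after aging: one level bump per 4 hours
    waited (assumed threshold), never beyond P1."""
    return max(1, priority - int(waited_hours // 4))

def next_contact(queue: list) -> dict:
    """Work the best effective priority first; break ties by longest wait."""
    return min(queue, key=lambda c: (effective_priority(c["priority"], c["waited_h"]),
                                     -c["waited_h"]))
```

A P4 request that has waited over four hours is processed as a P3, and among equal effective priorities the longest-waiting contact is picked first.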
Skills-based routing directs contacts to analysts with relevant competencies. A contact categorised as “Finance System” routes to analysts with finance application training. This improves first contact resolution because contacts reach analysts equipped to resolve them. Skills-based routing requires accurate contact categorisation and maintained skills matrices for analysts.
Round-robin distribution assigns contacts to analysts in rotation, achieving even workload distribution. Pure round-robin ignores analyst skills and contact complexity, so hybrid approaches combine rotation with skills matching.
Pull-based assignment lets analysts select contacts from queues rather than having contacts pushed to them. Analysts choose based on their current capacity and expertise. This model works well with experienced teams who manage their own workload effectively but can result in difficult contacts being avoided.
Workload metrics track distribution effectiveness. Contacts per analyst per day should fall within expected ranges (15-25 for a mixed workload). Significant variation indicates routing problems or skill gaps. Handle time tracking identifies contacts consuming disproportionate analyst time, flagging candidates for knowledge article creation or process improvement.
Contact handling
Contact handling follows consistent patterns that ensure thorough information gathering, accurate categorisation, and complete documentation.
Initial contact receipt captures user identity, contact method, and timestamp automatically where possible. Portal and email contacts include this metadata inherently. Phone contacts require analyst capture during greeting.
Information gathering follows structured questioning to establish the issue or request. For incidents, analysts determine what happened, when, what the user was attempting, what error messages appeared, and what troubleshooting the user already attempted. For requests, analysts confirm what is needed, for whom, with what urgency, and under what authorisation.
Categorisation assigns the contact to a service, category, and subcategory that determine routing and reporting. Consistent categorisation enables trend analysis: if “Email - Cannot Send” contacts doubled this month, investigation targets email sending problems specifically. Categorisation schemes balance granularity against usability; 50-100 categories suit most organisations, with excessive granularity causing inconsistent classification.
Priority assignment combines impact and urgency. Impact measures breadth of effect: a single user, a team, a department, the entire organisation. Urgency measures time sensitivity: whether workarounds exist, whether deadlines approach, whether safety is affected. A matrix combines these factors into priority levels that determine response and resolution targets.
| Impact | High Urgency | Medium Urgency | Low Urgency |
|---|---|---|---|
| Organisation-wide | P1 | P1 | P2 |
| Department/function | P1 | P2 | P3 |
| Multiple users | P2 | P3 | P3 |
| Single user | P3 | P3 | P4 |
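The matrix translates directly into a lookup table. A minimal Python sketch; the key spellings are assumptions for illustration:

```python
# (impact, urgency) -> priority, transcribed from the matrix above
PRIORITY_MATRIX = {
    ("organisation-wide", "high"): "P1", ("organisation-wide", "medium"): "P1",
    ("organisation-wide", "low"): "P2",
    ("department", "high"): "P1", ("department", "medium"): "P2",
    ("department", "low"): "P3",
    ("multiple-users", "high"): "P2", ("multiple-users", "medium"): "P3",
    ("multiple-users", "low"): "P3",
    ("single-user", "high"): "P3", ("single-user", "medium"): "P3",
    ("single-user", "low"): "P4",
}

def assign_priority(impact: str, urgency: str) -> str:
    """Look up priority; raises KeyError for an unrecognised combination."""
    return PRIORITY_MATRIX[(impact.lower(), urgency.lower())]
```

Encoding the matrix in the ticketing tool rather than leaving it to analyst judgement keeps priority assignment consistent across channels and shifts.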
Resolution attempts follow knowledge base articles where available, progressing through documented troubleshooting steps. Analysts document actions taken and outcomes observed, building the contact record. When resolution exceeds analyst capability or authority, escalation transfers the contact with accumulated context to the next tier or specialist group.
Closure confirms resolution with the user and documents the solution applied. Closure categories indicate resolution type: fixed, workaround provided, request completed, no fault found, user training provided. These categories support analysis of resolution patterns and identification of recurring issues requiring permanent fixes.
CONTACT LIFECYCLE

+----------+     +----------+     +----------+     +----------+
|          |     |          |     |          |     |          |
| Receipt  +---->|  Triage  +---->|   Work   +---->| Resolve  |
|          |     |          |     |          |     |          |
+----+-----+     +----+-----+     +----+-----+     +----+-----+
     |                |                |                |
     v                v                v                v
+----------+     +----------+     +----------+     +----------+
| Log      |     | Assign   |     | Diagnose |     | Confirm  |
| contact  |     | category,|     | & act    |     | with     |
|          |     | priority,|     |          |     | user     |
| Capture  |     | owner    |     | Update   |     |          |
| details  |     |          |     | record   |     | Close    |
+----------+     +----------+     +----+-----+     +----------+
                                       |
                                       | If escalate
                                       v
                                 +------------+
                                 | Transfer   |
                                 | to Tier 2  |
                                 | or         |
                                 | specialist |
                                 +------------+

Figure 4: Contact lifecycle from receipt through resolution
Self-service and deflection
Self-service enables users to resolve issues or fulfil requests without analyst interaction. Effective self-service reduces contact volume while maintaining or improving user satisfaction for appropriate interaction types.
Password reset represents the highest-value deflection opportunity. Password-related contacts often constitute 20-30% of service desk volume. Self-service password reset with appropriate identity verification (security questions, mobile verification, manager approval) eliminates these contacts entirely. Implementation requires identity provider configuration and user enrolment in verification methods.
Knowledge bases provide searchable articles addressing common questions and known issues. Effective knowledge bases present solutions in user-appropriate language, avoiding technical jargon. Article structure follows consistent patterns: symptom description, cause explanation, step-by-step resolution. Search must surface relevant articles for the terms users actually employ, which often differ from technical terminology.
Service catalogues present requestable services with clear descriptions, fulfilment times, and approval requirements. Users browse or search for services, complete structured request forms, and track progress through defined workflows. Catalogue-based requests capture required information upfront, eliminating analyst follow-up and enabling automated fulfilment for standard items.
Chatbots and virtual agents provide conversational interfaces that guide users through troubleshooting or request submission. Rule-based chatbots follow decision trees to narrow problems and suggest solutions. AI-enhanced agents use natural language processing to understand queries and provide contextual responses. Chatbots work well for high-volume, predictable queries but frustrate users when they cannot recognise or resolve non-standard situations. Clear escalation to human analysts remains essential.
Deflection rates measure self-service effectiveness. The calculation divides self-service resolutions by total potential contacts (self-service plus analyst-handled). A deflection rate of 40% means four of every ten potential contacts resolved without analyst involvement. Deflection targets vary by service type: password reset might target 90% deflection; complex application support might target 10%.
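The deflection calculation, as code matching the definition above:

```python
def deflection_rate(self_service: int, analyst_handled: int) -> float:
    """Self-service resolutions divided by total potential contacts
    (self-service plus analyst-handled)."""
    total = self_service + analyst_handled
    return self_service / total if total else 0.0
```

With 400 self-service resolutions against 600 analyst-handled contacts, the rate is 0.4: four of every ten potential contacts resolved without analyst involvement.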
Deflection strategy risks creating barrier perception if self-service becomes mandatory rather than convenient. Users who encounter forced self-service for unsuitable issues develop negative service perception. Channel design should make self-service the easy path for appropriate contacts while ensuring analyst access remains available.
Metrics and quality assurance
Service desk metrics track operational performance, identify improvement opportunities, and demonstrate value to stakeholders. Metrics divide into efficiency measures, effectiveness measures, and experience measures.
Efficiency metrics track resource utilisation:
- Contacts per analyst per day (target varies by complexity; 18-25 typical)
- Average handle time by channel and category
- Cost per contact by channel
- Utilisation rate (contact-handling time divided by available time; 65-75% healthy)
Effectiveness metrics track resolution capability:
- First contact resolution rate (target 65-75% for general desk)
- Escalation rate (lower indicates analyst capability)
- Reopened contacts (indicates premature closure)
- Mean time to resolve by priority and category
- Service level achievement (percentage meeting response and resolution targets)
Experience metrics track user satisfaction:
- Customer satisfaction (CSAT) from post-contact surveys (target 85%+)
- Net Promoter Score (NPS) measuring likelihood to recommend
- Complaint rate
- Abandonment rate for telephone channel
Worked example of metric calculation for a monthly period:
| Metric | Calculation | Result |
|---|---|---|
| Total contacts | Count | 2,847 |
| Contacts resolved at first contact | Count | 1,852 |
| First contact resolution rate | 1,852 / 2,847 | 65.1% |
| Contacts meeting SLA | Count | 2,562 |
| SLA achievement | 2,562 / 2,847 | 90.0% |
| Total handle time (hours) | Sum | 712 |
| Average handle time (minutes) | (712 × 60) / 2,847 | 15.0 |
| Analyst days worked | Sum | 168 |
| Contacts per analyst day | 2,847 / 168 | 16.9 |
| Surveys returned | Count | 847 |
| Satisfied responses | Count | 738 |
| CSAT | 738 / 847 | 87.1% |
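The table's calculations reduce to a few ratios. A sketch that reproduces the monthly figures above (the function and field names are illustrative):

```python
def monthly_metrics(contacts, first_contact_resolved, sla_met,
                    handle_hours, analyst_days, surveys, satisfied):
    """Compute the report figures used in the worked example above."""
    return {
        "fcr_pct": round(100 * first_contact_resolved / contacts, 1),
        "sla_pct": round(100 * sla_met / contacts, 1),
        "aht_min": round(handle_hours * 60 / contacts, 1),
        "contacts_per_analyst_day": round(contacts / analyst_days, 1),
        "csat_pct": round(100 * satisfied / surveys, 1),
    }
```

Feeding in the month's counts (2,847 contacts, 1,852 resolved at first contact, and so on) returns the same percentages as the table, which makes the calculation easy to automate in a recurring report.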
Quality assurance validates that contacts receive appropriate handling. Contact review samples a percentage of closed contacts (typically 3-5%) for assessment against quality criteria: complete information capture, accurate categorisation, appropriate resolution, professional communication, policy compliance. Reviewers score contacts and provide feedback to analysts. Quality scores integrate into performance management and identify training needs.
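The review sample can be drawn reproducibly with a seeded random sample. The 4% rate below sits inside the 3-5% range quoted above; the rate and function name are assumptions:

```python
import random

def qa_sample(closed_contact_ids, rate=0.04, seed=None):
    """Draw a quality-review sample of closed contacts without replacement.
    Passing a seed makes the draw reproducible for audit purposes."""
    ids = list(closed_contact_ids)
    size = max(1, round(len(ids) * rate))
    return random.Random(seed).sample(ids, size)
```

Seeding the sample means a reviewer can regenerate the same selection later, which helps when quality scores feed into performance management.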
Call monitoring for telephone channels provides real-time quality assessment. Team leads listen to live or recorded calls, scoring against criteria including greeting, questioning technique, resolution approach, and closure. Monitoring frequency varies: daily for new analysts, weekly for experienced staff.
Remote and field support
Organisations with distributed operations face service desk challenges beyond typical remote support. Field locations may have limited connectivity, minimal local IT presence, and equipment operating in harsh conditions.
Connectivity constraints affect both contact channels and resolution methods. Remote desktop tools require bandwidth that may be unavailable. Alternative approaches include:
- Telephone-guided troubleshooting with clear verbal instructions
- WhatsApp or similar messaging for asynchronous communication with screenshots
- Pre-positioned documentation in offline-accessible formats
- Scheduled connectivity windows for remote sessions
Local support networks extend service desk reach through trained non-IT staff. These focal points handle basic troubleshooting, coordinate equipment logistics, and facilitate communication between field users and central IT. Focal points need clear scope definition: what they handle directly, what they escalate, and how escalation works when standard channels are unavailable.
Equipment logistics for field repairs require planning that accounts for shipping times, customs procedures, and secure transport. Maintaining buffer stock at regional hubs reduces resolution time for hardware failures. Swap pools of configured devices enable rapid replacement while failed units undergo repair.
Time zone coverage ensures support availability aligns with operational hours across locations. A desk operating London hours (09:00-17:00 UTC) provides limited coverage for East Africa (12:00-20:00 EAT) and poor coverage for Southeast Asia (17:00-01:00 ICT). Extended hours, follow-the-sun models, or regional support tiers address coverage gaps.
Implementation considerations
Service desk implementation scales with organisational complexity and resources. The guidance below addresses distinct contexts.
For organisations with a single IT person managing technology alongside other duties, formal service desk operation is unrealistic. Practical approaches include: establishing a single contact address (email or form) for all IT requests; setting clear expectations about response times (acknowledgement within one working day, resolution based on priority); maintaining a simple tracking system (spreadsheet or basic ticketing tool); documenting resolutions for reference when similar issues recur. The individual handles contacts between other responsibilities, prioritising based on impact. Self-service for password reset and software requests reduces contact volume.
For small IT teams of 2-5 people, dedicated service desk staffing becomes feasible during core hours. One team member handles incoming contacts while others work on projects or specialist tasks, rotating to distribute the interruption load. Simple ticketing systems (osTicket, Zammad, or basic ITSM tools) provide tracking and reporting without administrative overhead. Knowledge base development starts with the 20 most common issues, expanding based on contact patterns.
For established IT functions with defined roles, full service desk operation with tiered support, multiple channels, and comprehensive metrics becomes achievable. Service desk staff specialise in contact handling; technical specialists focus on complex issues. ITSM platforms provide workflow automation, SLA management, and detailed analytics. Quality assurance programmes maintain service standards. Self-service investment generates measurable deflection. Regular reporting demonstrates service delivery performance to stakeholders.
Technology selection reflects these contexts. A single IT person needs minimal tooling: shared mailbox plus spreadsheet tracking suffices. Small teams benefit from open-source ITSM tools that provide ticketing without significant cost or complexity. Larger operations require platforms with automation, integration, and reporting capabilities. The Benchmarks collection provides detailed comparison of ITSM and help desk platforms.
Runbook template
Service desk analysts use runbooks to guide resolution of recurring issues or execution of standard tasks. The following template provides structure for runbook documentation.
Runbook structure
Title: [Specific, searchable name for the issue or task]
Runbook ID: [Unique identifier, e.g., RB-SD-042]
Version: [Version number] | Last updated: [Date] | Author: [Name]
Purpose: [Single sentence describing what this runbook addresses]
Scope: [What situations this runbook applies to; what it does not cover]
Symptoms / Trigger conditions
[Observable indicators that this runbook applies. What the user reports, what error messages appear, what conditions exist.]
Prerequisites
[Access requirements, tools needed, information to gather before starting]
- [Prerequisite 1]
- [Prerequisite 2]
Resolution steps
[First step with specific detail]
- Expected outcome: [What should happen]
- If this fails: [Alternative or escalation path]
[Second step]
- Expected outcome:
- If this fails:
[Continue as needed]
Verification
[How to confirm the issue is resolved]
- [Verification step 1]
- [Verification step 2]
Escalation
[When to escalate rather than continue troubleshooting]
- Escalate to: [Team or role]
- Information to include: [Required details for escalation]
Related information
- Related runbooks: [Links]
- Knowledge articles: [Links]
- System documentation: [Links]
Runbook example
Title: Microsoft 365 authentication failure after password change
Runbook ID: RB-SD-017
Version: 2.1 | Last updated: 2024-11-15 | Author: Service Desk Team
Purpose: Resolve authentication failures in Microsoft 365 applications occurring immediately after user password changes.
Scope: Applies to desktop Outlook, Teams, and OneDrive on Windows devices. Does not cover browser access or mobile devices.
Symptoms / Trigger conditions
- User reports Outlook, Teams, or OneDrive prompting for credentials repeatedly after password change
- Applications show “Sign in” or “We couldn’t sign you in” errors
- User confirms password change completed successfully
- Issue started immediately after or within 24 hours of password change
Prerequisites
- Access to user’s device (remote desktop or telephone guidance)
- Confirmation that user knows their new password
- Confirmation that password change completed successfully (user can sign into web applications)
Resolution steps
1. Clear cached credentials from Windows Credential Manager
- Open Control Panel > User Accounts > Credential Manager
- Select “Windows Credentials”
- Locate entries containing “Microsoft”, “Office”, or “mso”
- Remove each relevant entry
- Expected outcome: Credentials removed without error
- If this fails: Proceed to step 2; may need local admin rights
2. Sign out of Microsoft 365 applications
- Open any Office application (Word, Excel)
- Select File > Account
- Click “Sign out” next to user name
- Confirm sign out
- Expected outcome: Account shows “Sign in” option
- If this fails: Close application, end task in Task Manager, retry
3. Clear Office identity cache
- Close all Office applications
- Navigate to: %localappdata%\Microsoft\Office\16.0\
- Delete the folder named “TokenCache” if present
- Expected outcome: Folder deleted or not present
- If this fails: Ensure all Office processes are closed
4. Restart the device
- Perform full restart (not shutdown and power on)
- Expected outcome: Device restarts normally
5. Sign in to Microsoft 365
- Open Outlook
- Enter email address when prompted
- Enter new password
- Complete MFA if prompted
- Expected outcome: Outlook connects and synchronises
- If this fails: Escalate per escalation section
Verification
- Outlook displays mailbox contents and sends/receives email
- Teams shows contacts and chat history
- OneDrive sync icon shows normal status (not paused or error)
- Issue does not recur over 24-hour period
Escalation
Escalate to Tier 2 Identity team if:
- Steps above do not resolve the issue
- User reports this is a recurring problem (third occurrence within 90 days)
- Multiple users report the same issue simultaneously
Information to include:
- Steps completed and outcomes
- Error messages observed (screenshots if available)
- Device name and operating system version
- Whether device is Entra ID joined, hybrid joined, or registered
Related information
- Related runbooks: RB-SD-003 (Password reset), RB-SD-024 (MFA issues)
- Knowledge articles: KB-1847 (Microsoft 365 credential management)
- System documentation: Microsoft 365 administration guide, section 4.7