
Knowledge Management

Knowledge management is the discipline of capturing what IT staff know, structuring it for retrieval, and maintaining it as services and technologies change. For mission-driven organisations where staff turnover, funding cycles, and distributed operations create persistent risks to institutional memory, knowledge management transforms individual expertise into organisational capability that survives personnel transitions.

The core mechanism is straightforward: when someone solves a problem, they document the solution in a structured format that others can find and apply. This single act, repeated systematically across an IT function, compounds into a knowledge base that accelerates incident resolution, reduces escalations, enables user self-service, and preserves hard-won understanding of legacy systems and local configurations.

Knowledge base
A searchable repository of structured articles containing solutions, procedures, reference information, and answers to common questions. Distinguished from document management by its focus on findable, actionable content units rather than document storage.
Knowledge article
A single unit of knowledge addressing one topic, problem, or procedure. Structured with consistent metadata, formatting, and lifecycle status to enable search and maintenance.
Knowledge-centred service (KCS)
A methodology that integrates knowledge capture into the problem-solving workflow rather than treating documentation as a separate activity. Service desk analysts create and update articles while resolving incidents.
Tacit knowledge
Understanding that exists in people’s heads but has not been documented. Often includes contextual information, workarounds, and institutional history that formal systems do not capture.
Explicit knowledge
Knowledge that has been articulated and recorded in a retrievable format. The goal of knowledge management is converting tacit knowledge to explicit knowledge without losing essential context.
Knowledge lifecycle
The stages a knowledge article passes through from creation to retirement: draft, review, published, flagged for review, archived, and retired.

Knowledge as Organisational Capital

IT knowledge exists in three forms, each requiring different management approaches. Procedural knowledge describes how to perform specific tasks: resetting passwords, configuring VPN clients, provisioning new users. This knowledge changes when systems change and requires tight coupling to change management processes to stay current. Diagnostic knowledge captures the relationship between symptoms and causes: when users report slow email, the cause might be network congestion, mailbox size limits, or client misconfiguration, and experienced analysts develop pattern recognition that new staff lack. Contextual knowledge encompasses the institutional memory that explains why systems are configured in particular ways, what has been tried before, and what constraints shaped current architecture.

The value of managed knowledge compounds over time. A single documented solution that prevents one escalation per week saves 52 escalations annually. At 30 minutes per escalation for second-level support, that single article recovers 26 hours of skilled staff time per year. Multiply this across hundreds of articles and the cumulative effect transforms service desk economics.
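The arithmetic above generalises into a quick estimate for any article. A minimal sketch, using the illustrative figures from the text:

```python
def annual_hours_saved(escalations_prevented_per_week: float,
                       minutes_per_escalation: float) -> float:
    """Estimate skilled-staff hours recovered per year by one article."""
    return escalations_prevented_per_week * 52 * minutes_per_escalation / 60

# One escalation prevented per week at 30 minutes of second-level time each:
print(annual_hours_saved(1, 30))  # 26.0 hours per year
```

Plugging in local escalation volumes and second-level hourly costs turns this into a simple business case per article.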

For organisations with field operations, knowledge management addresses a specific challenge: expertise concentrated at headquarters while problems occur in field offices across multiple time zones. A well-structured knowledge base enables field IT staff to resolve issues independently during their working hours without waiting for headquarters support. This operational autonomy directly improves service delivery in environments where communication delays and connectivity constraints make real-time escalation impractical.

Knowledge Lifecycle

Every knowledge article follows a lifecycle from initial creation through eventual retirement. Understanding this lifecycle is essential because the management activities at each stage differ, and neglecting any stage degrades the entire knowledge base.

+------------------+ +------------------+ +------------------+
| | | | | |
| DRAFT +---->+ REVIEW +---->+ PUBLISHED |
| | | | | |
| Author creates | | SME validates | | Available to |
| initial content | | technical | | all users |
| | | accuracy | | |
+------------------+ +------------------+ +--------+---------+
|
+------------------------------------------------+
| |
| Scheduled review User feedback |
| or system change or flagged |
| |
v v
+--------+---------+ +----------+-------+
| | | |
| FLAGGED FOR | | ARCHIVED |
| REVIEW | | |
| | | Retained for |
| Content may be |<--------------------------+ reference but |
| outdated | If still relevant | not in active |
| | after review | search results |
+--------+---------+ +----------+-------+
| |
| If no longer applicable |
v v
+--------+---------+ +----------+-------+
| | | |
| RETIRED | | DELETED |
| | | |
| Removed from | | Permanently |
| knowledge base | | removed (rare) |
+------------------+ +------------------+

Figure 1: Knowledge article lifecycle showing status transitions and triggers

The draft stage captures initial content, typically during or immediately after problem resolution. Draft articles are visible only to their author and designated reviewers. The critical principle here is that capturing something imperfect immediately is more valuable than planning to capture something perfect later. Most tacit knowledge never becomes explicit because the capture moment passes while waiting for time to document thoroughly.

Review validates technical accuracy and completeness. For straightforward articles, peer review by another service desk analyst suffices. For articles involving security implications, infrastructure changes, or complex integrations, subject matter expert review is required. Review should verify not just accuracy but findability: will someone with this problem find this article using the search terms they would naturally use?

Published articles are visible to their intended audience, which varies by article type. Internal IT articles are visible to IT staff. Self-service articles are visible to all users. The publishing decision includes setting the review date, typically 6 to 12 months for stable content and 3 months for content related to rapidly changing systems.

Flagged for review indicates that content may be outdated. Articles enter this status through three mechanisms: scheduled review dates trigger automatic flagging, users can flag articles as potentially incorrect, and change management integration can flag articles linked to changed configuration items. Flagged articles remain visible but display a warning to users.

Archived articles are retained in the system but excluded from standard search results. Archiving is appropriate when content is no longer actively needed but has historical or reference value. For example, an article about a decommissioned system might be archived rather than deleted in case questions arise during data migration or audit.

Retired articles are removed from the knowledge base entirely. Retirement requires an explicit decision because it destroys content that cannot be recovered. Most organisations find that archiving handles nearly all cases and retirement is rare.
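The lifecycle can be enforced as a guard on status changes. A minimal sketch following Figure 1; the transition set is an assumption to adapt to local policy (review failure returning to draft, for instance, is not shown in the figure):

```python
# Allowed status transitions, keyed by current status.
ALLOWED = {
    "draft":     {"review"},
    "review":    {"published", "draft"},          # back to draft if review fails (assumption)
    "published": {"flagged", "archived"},         # scheduled review, feedback, or CI change
    "flagged":   {"published", "archived", "retired"},
    "archived":  {"flagged", "deleted"},          # deletion is rare, per the text
    "retired":   set(),
}

def transition(current: str, new: str) -> str:
    """Apply a status change, rejecting transitions the lifecycle forbids."""
    if new not in ALLOWED[current]:
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

print(transition("draft", "review"))  # review
```

Encoding the lifecycle as data rather than scattered conditionals makes local policy changes a one-line edit.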

Knowledge Base Architecture

The architecture of a knowledge base determines how effectively users can find relevant content. Two structural decisions dominate: the organisation of content into categories and the metadata attached to individual articles.

+------------------------------------------------------------------------+
| KNOWLEDGE BASE |
+------------------------------------------------------------------------+
| |
| +---------------------------+ +---------------------------+ |
| | SERVICE DESK KB | | SELF-SERVICE KB | |
| | (IT staff only) | | (All users) | |
| +---------------------------+ +---------------------------+ |
| |
| +------------------------------------------------------------------+ |
| | CATEGORY TAXONOMY | |
| +------------------------------------------------------------------+ |
| | | |
| | +---------------+ +---------------+ +---------------+ | |
| | | Applications | | Infrastructure| | Security | | |
| | +-------+-------+ +-------+-------+ +-------+-------+ | |
| | | | | | |
| | +-----+-----+ +-----+-----+ +-----+-----+ | |
| | | | | | | | | | | | |
| | v v v v v v v v v | |
| |   Email   CRM   ERP    Network Servers Cloud   Access   Mail    ...    | |
| |                                                 Ctrl    Sec            | |
| +------------------------------------------------------------------+ |
| |
| +------------------------------------------------------------------+ |
| | ARTICLE TYPES | |
| +------------------------------------------------------------------+ |
| | | |
| | +------------+ +------------+ +------------+ +------------+ | |
| | | How-to | | Trouble- | | FAQ | | Reference | | |
| | | | | shooting | | | | | | |
| | | Step-by- | | Symptom | | Question | | Specs, | | |
| | | step | | to cause | | and | | configs, | | |
| | | procedures | | to fix | | answer | | settings | | |
| | +------------+ +------------+ +------------+ +------------+ | |
| | | |
| +------------------------------------------------------------------+ |
| |
+------------------------------------------------------------------------+

Figure 2: Knowledge base structure showing audience separation, category taxonomy, and article types

Category taxonomy provides the primary organisational structure. A two-level taxonomy balances navigation simplicity against categorisation precision. The first level aligns with service areas or technology domains. The second level identifies specific systems or components. Deeper taxonomies create categorisation disputes and navigation complexity without proportional findability benefits.

The taxonomy should reflect how users think about their problems, not how IT organises its teams. Users experiencing email issues think “email” not “messaging infrastructure” or “Exchange administration”. When taxonomy mirrors internal IT structure rather than user mental models, findability suffers.
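A two-level taxonomy is simple enough to hold as plain data and validate against. A sketch with illustrative category names drawn from Figure 2:

```python
# Two-level taxonomy as plain data; names are illustrative, not prescriptive.
TAXONOMY = {
    "Applications":   ["Email", "CRM", "ERP"],
    "Infrastructure": ["Network", "Servers", "Cloud"],
    "Security":       ["Access Control", "Mail Security"],
}

def valid_category(level1: str, level2: str) -> bool:
    """Reject categorisations that fall outside the agreed two levels."""
    return level2 in TAXONOMY.get(level1, [])

print(valid_category("Applications", "Email"))  # True
```

Validating categories at article creation prevents the taxonomy drift that deeper, free-form structures invite.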

Article types impose consistent structure within categories. Each type has a defined template that guides authors and sets user expectations.

How-to articles describe procedures users or IT staff perform. They follow task structure: prerequisites, numbered steps, verification that the task succeeded. How-to articles answer questions of the form “How do I…?” Examples: how to set up email on a mobile device, how to request a new software licence, how to configure VPN on a personal laptop.

Troubleshooting articles address specific problems. They follow a symptom-cause-resolution pattern that matches how users experience issues. The symptom section describes what the user observes in their terms. The cause section explains why this happens. The resolution section provides steps to fix the problem. Troubleshooting articles answer questions of the form “Why is…?” or “X is not working”. Examples: Outlook not syncing after password change, VPN connection drops after 10 minutes, shared drive not appearing.

FAQ articles capture frequent questions with concise answers. They suit situations where users need quick answers to common questions rather than detailed procedures. FAQ articles work well for policy clarification, service availability, and entitlement questions. Examples: what is the file size limit for email attachments, which software is approved for video conferencing, how long are deleted files retained.

Reference articles provide specifications, configuration values, and lookup information that users consult repeatedly. Unlike how-to articles, reference articles do not guide action but provide data needed during action. Examples: standard laptop specifications, network port assignments, software version compatibility matrix.

Article Metadata

Beyond content and type, each article carries metadata that enables search, maintenance, and governance. The minimum viable metadata set includes: title, article type, category, keywords, audience, author, creation date, last reviewed date, next review date, related configuration items, and lifecycle status.

Title is the single most important findability factor. Titles should describe the article’s purpose using terms users would search, not internal jargon. “Reset forgotten password” outperforms “AD password reset procedure” because users search for their problem, not the technical solution.

Keywords supplement title-based search. Include synonyms, common misspellings, and related terms. For an article about email not syncing, keywords might include: Outlook, mail, synchronisation, sync, not updating, inbox empty, missing messages.

Audience determines visibility. Articles for IT staff only contain technical detail and system access information inappropriate for general users. Self-service articles are written for non-technical users and avoid jargon, acronyms, and assumptions about technical knowledge.

Related configuration items link articles to the CMDB entries for affected systems. This linkage enables automatic flagging when changes occur to related systems, prompting review of potentially affected knowledge.
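The minimum viable metadata set above can be expressed as a typed record. A sketch; the field names and `Article` type are illustrative, not a vendor schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Article:
    title: str                 # user terminology, the prime findability factor
    article_type: str          # how-to / troubleshooting / faq / reference
    category: str              # "Level 1 > Level 2" taxonomy path
    keywords: list[str]        # synonyms, misspellings, error codes
    audience: str              # "it-staff" or "all-users"
    author: str
    created: date
    last_reviewed: date
    next_review: date
    related_cis: list[str] = field(default_factory=list)  # CMDB links
    status: str = "draft"      # lifecycle status, starts as draft
```

Making the record explicit lets automated governance checks (missing owner, overdue review) operate on every article uniformly.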

Knowledge-Centred Service

Knowledge-centred service is a methodology that integrates knowledge capture into problem-solving rather than treating documentation as a separate activity performed after the fact. The fundamental insight is that the moment of solving a problem is the optimal moment to capture the solution, because context is fresh and motivation is highest.

+------------------------------------------------------------------------+
| KCS DOUBLE-LOOP PROCESS |
+------------------------------------------------------------------------+
| |
| SOLVE LOOP (Individual Interaction) |
| +----------------------------------------------------------------+ |
| | | |
| | +----------+ +----------+ +----------+ +----------+ | |
| | | | | | | | | | | |
| | | Capture +--->+ Structure+--->+ Reuse +--->+ Improve | | |
| | | | | | | | | | | |
| | | Search | | If new, | | Apply | | Flag or | | |
| | | for | | create | | existing | | correct | | |
| | | existing | | in | | article | | article | | |
| | | article | | standard | | to | | if | | |
| | | | | template | | resolve | | wrong | | |
| | +----------+ +----------+ +----------+ +----------+ | |
| | | |
| +----------------------------------------------------------------+ |
| | |
| | Aggregate feedback |
| v |
| EVOLVE LOOP (Organisational Learning) |
| +----------------------------------------------------------------+ |
| | | |
| | +-------------+ +-------------+ +-------------+ | |
| | | | | | | | | |
| | | Analyse +--->+ Improve +--->+ Integrate | | |
| | | | | | | | | |
| | | Patterns | | Content | | With | | |
| | | in article | | and | | service | | |
| | | usage, | | structure | | management | | |
| | | gaps, | | based on | | and | | |
| | | quality | | patterns | | improvement | | |
| | +-------------+ +-------------+ +-------------+ | |
| | | |
| +----------------------------------------------------------------+ |
| |
+------------------------------------------------------------------------+

Figure 3: KCS double-loop showing individual solve loop and organisational evolve loop

The solve loop operates during every service interaction. When a user contacts the service desk, the analyst’s first action is searching the knowledge base for existing content. If an article exists, the analyst uses it to resolve the issue, verifying accuracy in the process. If the article is incomplete or incorrect, the analyst updates it. If no article exists and the issue is likely to recur, the analyst creates a draft article while solving the problem.
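The solve loop reduces to a small piece of control flow. A sketch over a toy in-memory knowledge base; `solve_loop` and the dict-based KB are illustrative stand-ins for real tooling:

```python
def solve_loop(description: str, kb: dict) -> tuple[str, str]:
    """KCS solve loop over a toy KB mapping search terms to article text.

    Search first; reuse a hit; otherwise create a draft while solving.
    Returns (action, article_text).
    """
    for term, text in kb.items():
        if term in description.lower():
            return ("reuse", text)                      # Reuse the existing article
    draft = f"DRAFT: resolution for '{description}'"    # Capture while context is fresh
    kb[description.lower()] = draft                     # Structure: stored for next time
    return ("create", draft)

kb = {"vpn drops": "Troubleshooting: VPN connection drops after 10 minutes"}
print(solve_loop("User reports VPN drops", kb)[0])  # reuse
```

The point of the sketch is the ordering: search precedes solving, and capture happens inside resolution rather than after it.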

The evolve loop operates at organisational level, using aggregate data from the solve loop to improve overall knowledge health. Patterns in search failures reveal gaps where content is needed. Patterns in article usage identify high-value content worth investing in. Patterns in corrections identify quality issues requiring attention.

KCS changes the economics of knowledge management. Traditional approaches treat documentation as overhead that competes with resolution work. KCS makes documentation integral to resolution, capturing knowledge as a byproduct of work rather than as additional work.

The implementation challenge is behavioural. Analysts must develop the habit of searching before solving and creating while solving. This requires management commitment, metric alignment, and sustained reinforcement until the behaviour becomes automatic.

Search and Findability

A knowledge base delivers value only when users find relevant content. Findability depends on search technology, content quality, and user behaviour.

Search technology varies from basic keyword matching to natural language processing with semantic understanding. At minimum, search must support multi-word queries, handle common misspellings, and rank results by relevance. Advanced capabilities include synonym expansion, faceted filtering, and machine learning based on usage patterns.

The search interface matters as much as the algorithm. Users must be able to search from wherever they encounter problems. This means embedded search in service desk tools, self-service portals, intranet sites, and potentially chat platforms. Requiring users to navigate to a separate knowledge system before searching creates friction that reduces adoption.

Content quality for findability means writing in the user’s language rather than technical jargon. The article about resetting passwords must use the word “password” prominently, not just “credentials” or “authentication”. Including the error messages users see makes articles findable via those exact messages.

User behaviour reveals findability problems. Search analytics show what users search for, which results they click, and whether they return to search again after viewing an article. A search term that yields no results indicates a gap. A search result that users view but then return to search indicates a mismatch between title and content. High “search refinement” rates indicate poor initial results.

Search effectiveness can be measured. Zero-result rate tracks searches that return no articles. Rates above 15% indicate significant gaps or terminology mismatches. First-click resolution tracks whether users find helpful content on the first result they click. Rates below 60% indicate relevance ranking problems. Search-to-ticket rate tracks how often searches are followed by ticket creation, indicating that available knowledge did not resolve the issue.
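Zero-result rate is the simplest of these to compute directly from a search log. A sketch with an illustrative log shape and the 15% threshold from the text:

```python
# Toy search log; real logs come from the portal or ITSM analytics export.
searches = [
    {"query": "reset password", "results": 12, "clicked": True},
    {"query": "vpn timeout",    "results": 0,  "clicked": False},
    {"query": "mailbox full",   "results": 3,  "clicked": True},
    {"query": "sso loop",       "results": 0,  "clicked": False},
]

zero_rate = sum(s["results"] == 0 for s in searches) / len(searches)
print(f"zero-result rate: {zero_rate:.0%}")  # zero-result rate: 50%
if zero_rate > 0.15:
    print("above 15% threshold: investigate content gaps or terminology mismatch")
```

The zero-result queries themselves ("vpn timeout", "sso loop") double as a ready-made backlog of articles or keywords to add.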

Quality and Governance

Knowledge quality degrades without active governance. Systems change, procedures evolve, and articles become outdated. Staff leave, and their articles may contain assumptions only they understood. Governance establishes the structures and activities that maintain quality over time.

Review cycles ensure periodic validation. Every article has a review date appropriate to its content volatility. Articles about stable systems might have 12-month review cycles. Articles about rapidly evolving cloud services might have 3-month cycles. Automated notification prompts owners to review when dates approach.
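Setting the next review date by content volatility is mechanical. A sketch; the "standard" 180-day tier is an assumption interpolated between the 12-month and 3-month cycles named in the text:

```python
from datetime import date, timedelta

# Review interval by content volatility; adjust tiers to local policy.
REVIEW_DAYS = {"stable": 365, "standard": 180, "volatile": 90}

def next_review(reviewed_on: date, volatility: str) -> date:
    """Compute the date at which automated notification should fire."""
    return reviewed_on + timedelta(days=REVIEW_DAYS[volatility])

print(next_review(date(2024, 1, 15), "volatile"))  # 2024-04-14
```

Storing the volatility tier rather than a raw date means a policy change (say, shortening the volatile cycle) reschedules every affected article at once.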

Quality standards define what good looks like. Minimum standards might require: complete metadata, adherence to article type template, no broken links, no references to deprecated systems, and passing readability tests. These standards enable automated quality scanning that identifies articles needing attention.
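Those standards lend themselves to automated scanning. A minimal sketch; the required field names and the deprecated-system check are assumptions about the local metadata schema:

```python
# Fields every published article must carry (assumed schema).
REQUIRED_FIELDS = {"title", "article_type", "category", "keywords",
                   "audience", "owner", "next_review", "status"}

def quality_issues(article: dict) -> list[str]:
    """Return a list of standard violations for one article."""
    issues = [f"missing metadata: {f}"
              for f in sorted(REQUIRED_FIELDS - article.keys())]
    if "deprecated" in article.get("body", "").lower():
        issues.append("references a deprecated system")
    return issues

draft = {"title": "Reset forgotten password", "status": "draft",
         "body": "Steps for the deprecated legacy portal..."}
print(len(quality_issues(draft)))  # 7: six missing fields plus a deprecated reference
```

Running such a scan over the whole collection produces the "articles needing attention" queue without manual inspection.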

Ownership assigns responsibility. Every article has an owner accountable for its accuracy and currency. Ownership typically follows the support tier responsible for that content area. When owners leave, ownership must transfer explicitly as part of offboarding.

Feedback mechanisms enable users to report problems. Every article should have a visible way to indicate “this didn’t help” or “this is incorrect”. Feedback creates a queue for review, with high-traffic articles receiving priority attention.

Governance structure for larger knowledge bases includes a knowledge manager role responsible for overall health, quality trends, and coordination. This role reviews metrics, identifies systemic issues, coordinates content initiatives, and ensures governance activities happen. For organisations with limited IT capacity, these responsibilities might be distributed among service desk leads rather than concentrated in a dedicated role.

Metrics and Measurement

Knowledge management metrics demonstrate value and identify improvement opportunities. Metrics divide into content metrics measuring the knowledge base itself and impact metrics measuring effect on service delivery.

Content metrics:

Metric                   Calculation                   Target              Interpretation
Total articles           Count of published articles   Context-dependent   Baseline size
Articles per category    Count by taxonomy category    Even distribution   Gaps in coverage
Article age              Days since last review        <365 days for 90%   Staleness risk
Flagged articles         Count with flagged status     <5% of total        Review backlog
Articles without owner   Count missing owner field     0                   Governance gap
Compliance rate          % meeting quality standards   >95%                Quality health

Impact metrics:

Metric                      Calculation                          Target             Interpretation
First-contact resolution    % resolved without escalation        >70%               Knowledge enabling resolution
Self-service resolution     % resolved via self-service          >30%               Deflection success
Mean time to resolve        Average resolution time              Decreasing trend   Knowledge accelerating resolution
Knowledge reuse rate        % incidents with article linked      >60%               Integration with workflow
Search success rate         % searches leading to article view   >80%               Findability
Article contribution rate   New articles per analyst per month   >2                 Culture health

The relationship between content and impact metrics reveals whether the knowledge base is delivering value. A large knowledge base with low reuse rate indicates findability or relevance problems. High reuse rate with stagnant resolution times indicates that available knowledge is not enabling faster resolution, perhaps because articles lack sufficient detail.

Integration with Service Processes

Knowledge management achieves greatest impact when integrated with other service management processes rather than operating in isolation.

+------------------------------------------------------------------------+
| KNOWLEDGE INTEGRATION POINTS |
+------------------------------------------------------------------------+
| |
| +------------------+ +------------------+ |
| | | | | |
| | INCIDENT | | SELF-SERVICE | |
| | MANAGEMENT | | PORTAL | |
| | | | | |
| | - Search KB | | - Search KB | |
| | before | | before ticket | |
| | escalate | | - View how-to | |
| | - Link article | | and FAQ | |
| | to incident | | - Submit | |
| | - Create/update | | feedback | |
| | during | | | |
| | resolution | | | |
| +--------+---------+ +--------+---------+ |
| | | |
| +------------+---------------+ |
| | |
| v |
| +------------+-------------+ |
| | | |
| | KNOWLEDGE BASE | |
| | | |
| +------------+-------------+ |
| | |
| +------------+---------------+ |
| | | |
| v v |
| +--------+---------+ +--------+---------+ |
| | | | | |
| | PROBLEM | | CHANGE | |
| | MANAGEMENT | | MANAGEMENT | |
| | | | | |
| | - Create known | | - Flag articles | |
| | error | | for affected | |
| | articles | | CIs | |
| | - Document | | - Include KB | |
| | workarounds | | updates in | |
| | - Link problem | | change tasks | |
| | to articles | | | |
| +------------------+ +------------------+ |
| |
+------------------------------------------------------------------------+

Figure 4: Knowledge integration with incident, problem, change, and self-service processes

Incident management integration makes knowledge the first resource consulted during diagnosis. Service desk tools should present relevant articles based on incident categorisation and description. Analysts link the article used to resolve each incident, building usage data that improves relevance and demonstrates value. When resolution required knowledge not in the system, the incident closure workflow prompts article creation.

Problem management integration creates known error articles documenting root causes and workarounds for problems not yet permanently fixed. These articles prevent repeated diagnosis of the same issue and communicate interim solutions to affected users. When problems are resolved, associated articles are updated to reflect the permanent fix.

Change management integration flags articles potentially affected by changes. When a change request identifies configuration items, the system identifies articles related to those CIs and includes article review in change tasks. This mechanism keeps knowledge synchronised with system changes.

Self-service integration presents knowledge directly to users before they submit tickets. Search in the self-service portal returns relevant articles, deflecting contacts that users can resolve themselves. Effective self-service integration requires content written for non-technical users, separate from technical articles for IT staff.

Implementation Considerations

Organisations with Limited IT Capacity

Knowledge management at minimum viable scale requires: a shared location for articles (even a structured folder in existing document storage), a consistent template for article structure, and a commitment to capture knowledge during problem resolution.

Start with troubleshooting articles for the issues consuming most service desk time. Analyse recent incidents to identify the top 10 recurring issues and document each. These articles deliver immediate value by reducing repeated diagnosis.

Avoid elaborate taxonomies initially. Two levels of categorisation suffice: a handful of top-level categories matching major service areas and a flat list of articles within each. Taxonomy can evolve as the collection grows.

Review cycles for limited capacity might be reactive rather than scheduled. Flag articles when you discover they are outdated rather than reviewing proactively. This is imperfect but sustainable when dedicated time for knowledge governance is not available.

The single-person IT function faces a particular challenge: knowledge lives primarily in one head, and capturing it competes with every other demand. One approach is committing to document any issue that takes more than 30 minutes to resolve or any issue you have seen more than twice. This threshold captures high-value knowledge without creating an unsustainable documentation burden.
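That capture rule fits in a one-line predicate, which makes it easy to apply consistently at ticket closure. A sketch of the threshold as stated:

```python
def should_document(minutes_to_resolve: int, times_seen: int) -> bool:
    """Capture rule: over 30 minutes to resolve, or seen more than twice."""
    return minutes_to_resolve > 30 or times_seen > 2

print(should_document(45, 1))  # True: slow to resolve
print(should_document(10, 3))  # True: recurring
print(should_document(10, 2))  # False: quick and still rare
```

The precise thresholds matter less than having an explicit, mechanical rule that removes the "is this worth documenting?" judgment from each busy moment.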

Organisations with Established IT Functions

Larger teams can implement fuller KCS methodology with dedicated knowledge management attention. Key investments include: ITSM tooling with integrated knowledge management, search tuned to organisational vocabulary, analytics revealing gaps and quality issues, and governance processes with clear ownership.

Assign knowledge domain owners by service area who are accountable for content coverage and quality within their domain. These need not be dedicated roles; service desk team leads or application support specialists can incorporate domain ownership into existing responsibilities.

Establish knowledge contribution as an expected part of analyst performance. Metrics like articles created per month and article quality scores become part of regular assessment. Recognition for high-quality contributions reinforces the desired behaviour.

Invest in search tuning. Review search analytics monthly to identify terms producing poor results. Add synonyms, adjust relevance weights, and create content to fill gaps. The difference between adequate and excellent findability often lies in this ongoing tuning work.

Field and Distributed Operations

Distributed organisations face specific knowledge challenges. Field offices may have intermittent connectivity, limiting real-time access to centralised knowledge bases. Staff in different locations encounter context-specific variations of problems, and solutions that work at headquarters may not apply in field conditions.

For connectivity-constrained environments, implement offline-capable knowledge access. This might mean downloadable PDF versions of high-use articles, mobile applications with local caching, or periodic synchronisation of the knowledge base subset relevant to each location.

Capture field-specific knowledge explicitly. When a field office develops a workaround for local conditions, that knowledge should enter the knowledge base tagged for the relevant context. Other field offices in similar conditions can then benefit.

Time zone distribution affects knowledge governance. Scheduled reviews should account for owner locations. Real-time collaboration on knowledge may not be practical, so asynchronous review workflows become important.

Technology Options

Knowledge management tools range from simple wiki platforms to integrated ITSM modules with advanced search and analytics.

Wiki platforms like MediaWiki, BookStack, or Wiki.js provide structured content management with search, versioning, and access control. These suit organisations seeking dedicated knowledge bases without ITSM integration. Deployment is self-hosted, maintaining data control. The trade-off is that knowledge is not embedded in service desk workflows, requiring manual effort to maintain integration.

ITSM platforms like Freshservice, Zendesk, or ServiceNow include integrated knowledge management modules. Knowledge becomes part of the incident workflow, with search embedded in agent interfaces and automatic article suggestions based on incident fields. Integration reduces friction but creates vendor dependency and typically involves subscription costs.

Document platforms with knowledge capability like Notion, Confluence, or SharePoint can function as knowledge bases with appropriate structure and governance. These suit organisations already using these platforms for other purposes. The risk is that knowledge content intermingles with other documents, degrading findability.

Platform Type        Strengths                                       Considerations                              Examples
Wiki (open source)   Full control, no licensing cost, customisable   Requires self-hosting, separate from ITSM   MediaWiki, BookStack, Wiki.js
Wiki (commercial)    Managed hosting, support                        Subscription cost, less customisable        Notion, Slite
ITSM integrated      Workflow embedding, unified platform            Vendor lock-in, cost                        Freshservice, ServiceNow
Document platform    Familiar tools, may already exist               Not purpose-built, findability challenges   SharePoint, Confluence

The selection depends on existing platforms, integration requirements, and operational capacity. Organisations already invested in an ITSM platform should use its knowledge module to gain integration benefits. Organisations seeking data control and low cost should consider self-hosted wikis. Organisations prioritising simplicity over features should use whatever platform their staff already knows.

Knowledge Base Article Template

The following template provides structure for the most common article type, troubleshooting articles addressing specific problems. Adapt the structure for how-to, FAQ, and reference articles as appropriate.


Article metadata

  • Title: [Descriptive title using user terminology]
  • Article ID: [System-generated unique identifier]
  • Category: [Primary taxonomy category] > [Secondary category]
  • Article type: [Troubleshooting / How-to / FAQ / Reference]
  • Audience: [IT staff only / All users / Specific group]
  • Author: [Name]
  • Created: [Date]
  • Last reviewed: [Date]
  • Next review: [Date]
  • Owner: [Name/role]
  • Related CIs: [Links to CMDB entries]
  • Status: [Draft / Published / Flagged / Archived]
  • Keywords: [Search terms, synonyms, error codes]

Symptom

[Describe what the user experiences in their terms. Include exact error messages if applicable. Be specific enough that someone experiencing this problem recognises it.]


Cause

[Explain why this happens. Include enough technical detail for diagnosis but write for your audience. For IT staff articles, include technical specifics. For self-service articles, keep explanation accessible.]


Resolution

[Provide steps to resolve the issue. Number steps sequentially. Include expected results at key points so users can verify they are on track. For multiple resolution paths, present the most common first.]

  1. [First step with specific action and expected result]
  2. [Second step…]
  3. [Continue until resolved…]

Verification

[How to confirm the issue is resolved. What should the user observe or test?]


Related articles

  • [Link to related content]
  • [Link to related content]

Notes

[Internal notes not visible to users: workarounds in progress, known issues, escalation contacts if resolution fails.]


Revision history

Date     Author   Changes
[Date]   [Name]   [Summary of changes]
