Knowledge Management
Knowledge management encompasses the systems, practices, and governance structures that enable organisations to capture, organise, preserve, and share what they collectively know. Unlike document management, which concerns the storage and retrieval of files, knowledge management addresses the substance contained within those files and the expertise that exists only in people’s heads. An organisation with excellent document management can locate any file within seconds yet still lose critical institutional knowledge when experienced staff depart. Knowledge management closes this gap by making implicit understanding explicit and ensuring that hard-won lessons inform future decisions.
- Explicit knowledge
- Knowledge that can be articulated, codified, and stored in documents, databases, or systems. Procedures, reports, and technical specifications represent explicit knowledge. Once captured, explicit knowledge transfers without requiring direct interaction with its originator.
- Tacit knowledge
- Knowledge embedded in experience, intuition, and context that resists easy articulation. Understanding which donors respond to which messaging approaches, knowing how to navigate a particular government ministry, or recognising early warning signs of community tension represent tacit knowledge. Tacit knowledge transfers through observation, mentorship, and sustained interaction.
- Knowledge base
- A structured repository of explicit knowledge organised for retrieval. Knowledge bases contain articles, guides, and reference material with consistent formatting and navigation. Unlike file storage, knowledge bases present information rather than merely storing it.
- Wiki
- A collaborative knowledge system where users create and edit interlinked pages. Wikis enable distributed authorship and organic growth of knowledge structures. The linking structure creates navigable webs of related information.
- Institutional memory
- The accumulated knowledge, experience, and context that enables an organisation to function effectively. Institutional memory includes formal documentation and informal understanding of why things work the way they do, what has been tried before, and what lessons previous efforts yielded.
Knowledge Management and Document Management
The distinction between knowledge management and document management reflects the difference between information containers and information substance. Document management systems store, version, and control access to files. Knowledge management systems organise, connect, and present what those files contain. A document management system answers “where is the file?” while a knowledge management system answers “what do we know about this topic?”
Consider a humanitarian organisation’s response to a flood. The document management system contains the assessment report (a 47-page PDF), the response plan (a Word document), the lessons learned workshop notes (a PowerPoint), the budget (an Excel file), and 340 photographs. Locating any of these files requires knowing they exist and searching by filename, date, or folder location. A knowledge management system transforms this collection into structured knowledge: an article on flood response in that region that synthesises the assessment findings, links to the response plan with annotations on what worked, incorporates the key lessons learned as actionable guidance, and connects to related knowledge on similar responses elsewhere. Staff preparing for future floods find relevant knowledge without knowing which specific documents exist.
```
RAW DATA         DOCUMENTS        INFORMATION       KNOWLEDGE

+----------+     +----------+     +----------+      +---------+
|Sensor    |     |Reports   |     |Structured|      |Decision |
|readings  +---->|Plans     +---->|articles  +----->|guidance |
|Form data |     |Emails    |     |Guides    |      |Expertise|
|Logs      |     |Spreadshts|     |FAQs      |      |Insight  |
+----------+     +----------+     +----------+      +---------+

FILE SYSTEMS     DOCUMENT MGMT    KNOWLEDGE BASES   EXPERTISE
DATABASES        SYSTEMS          WIKIS             NETWORKS

<--------- Increasing structure, curation, and value --------->
```

Figure 1: The information continuum from raw data to actionable knowledge
The practical implication is that organisations need both systems serving different purposes. Document management provides the foundation of controlled storage. Knowledge management builds on this foundation to create navigable, curated, synthesised content that staff can use without extensive searching or prior knowledge of what documents exist.
Knowledge Architecture
Knowledge architecture determines how information is organised, categorised, and connected within knowledge systems. A well-designed architecture enables users to find relevant knowledge through multiple paths: browsing a logical hierarchy, searching by keyword, following links between related topics, or filtering by attributes like region, programme type, or date. Poor architecture creates knowledge graveyards where valuable content exists but remains unfindable.
The foundation of knowledge architecture is taxonomy, a hierarchical classification scheme that organises knowledge into categories and subcategories. A taxonomy for a development organisation might include top-level categories for programme sectors (health, education, livelihoods, protection), organisational functions (finance, HR, IT, communications), geographic regions, and cross-cutting themes (gender, disability inclusion, climate adaptation). Each category subdivides into more specific topics. The health category might contain subcategories for maternal health, infectious disease, nutrition, and health systems strengthening. Each subcategory contains the knowledge articles relevant to that topic.
```
ALL KNOWLEDGE
|
+-- PROGRAMME SECTORS
|   +-- Health: Maternal, Nutrition, MNCH
|   +-- WASH
|
+-- ORGANISATIONAL FUNCTIONS
|   +-- Finance: Budget, Grants, Audit
|   +-- IT
|
+-- GEOGRAPHY
|   +-- East Africa: Kenya, Uganda, Ethiopia
|   +-- West Africa
|
+-- CROSS-CUTTING THEMES
    +-- Gender: GBV, Inclusion, Analysis
    +-- Climate
```

Figure 2: Hierarchical taxonomy with multiple classification dimensions
Taxonomy alone proves insufficient because knowledge frequently spans multiple categories. An article on nutrition-sensitive agriculture for women in Kenya belongs to nutrition, livelihoods, gender, and East Africa. Tagging supplements taxonomy by allowing multiple labels on each knowledge item. The article receives tags for all relevant categories, appearing in search results and browse views for any of them. Tags also capture attributes that taxonomy does not address: the article might be tagged as “case study,” “field-tested,” and “2024” to indicate its type, validation status, and currency.
Linking creates the third organisational dimension. Knowledge items connect to related items through explicit hyperlinks. The nutrition article links to the Kenya country strategy, the gender mainstreaming guide, the agricultural extension training materials, and previous nutrition assessments. These links create navigable paths through the knowledge base. A user who finds one relevant article discovers related content through links rather than additional searches.
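The three organisational dimensions (taxonomy home, tags, links) can be modelled with simple data structures. A minimal Python sketch, with illustrative article titles and tag names rather than any real system's content:

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    """A knowledge item classified three ways: taxonomy path, tags, links."""
    title: str
    taxonomy_path: tuple            # single primary home in the hierarchy
    tags: set = field(default_factory=set)   # cross-cutting labels
    links: set = field(default_factory=set)  # titles of related articles

articles = {
    "Nutrition-sensitive agriculture for women (Kenya)": Article(
        title="Nutrition-sensitive agriculture for women (Kenya)",
        taxonomy_path=("Programme sectors", "Health", "Nutrition"),
        tags={"nutrition", "livelihoods", "gender", "east-africa",
              "case study", "field-tested", "2024"},
        links={"Kenya country strategy", "Gender mainstreaming guide"},
    ),
    "Kenya country strategy": Article(
        title="Kenya country strategy",
        taxonomy_path=("Geography", "East Africa", "Kenya"),
        tags={"east-africa", "strategy"},
    ),
}

def find_by_tag(tag):
    """An article surfaces under every label it carries,
    not only under its single taxonomy home."""
    return sorted(a.title for a in articles.values() if tag in a.tags)

def related(title):
    """Follow explicit links outward from one article."""
    return sorted(articles[title].links) if title in articles else []

print(find_by_tag("gender"))
print(related("Nutrition-sensitive agriculture for women (Kenya)"))
```

The key design point is that the taxonomy path is singular while tags and links are sets: one primary home for browsing, many labels and connections for discovery.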
```
+------------+     +------------+     +------------+
| Nutrition  |---->| Livelihood |---->| Economic   |
| Guide      |     | Programme  |     | Analysis   |
+-----+------+     +-----+------+     +------------+
      |                  |
      |      +-----------+
      v      v
+----------------+     +------------+
| Agriculture    |---->| Training   |
| for Women      |     | Materials  |
| (Kenya)        |     +------------+
+-----+-----+----+
      |     |
      |     +------+
      v            v
+------------+ +------------+
| Gender     | | Kenya      |
| Guide      | | Strategy   |
+------------+ +------------+

Nodes: knowledge items
Edges: explicit links (related to, builds on, references)
```

Figure 3: Knowledge graph showing relationships between knowledge items
Effective knowledge architecture balances structure with flexibility. Too little structure creates chaos where nothing is findable. Too much structure creates rigidity where users cannot classify items that do not fit neatly or must duplicate content across categories. The optimal architecture provides clear primary categories while allowing extensive cross-referencing through tags and links.
Platform Types
Knowledge management platforms range from simple wiki software to comprehensive knowledge management suites. The appropriate choice depends on organisational scale, technical capacity, integration requirements, and the nature of knowledge being managed.
Wikis represent the most accessible entry point. Wiki software enables any authorised user to create and edit pages, organise content into hierarchical structures, and link between pages. The collaborative editing model distributes content creation across the organisation rather than concentrating it in a dedicated team. MediaWiki, the software powering Wikipedia, is mature open-source wiki software with extensive features proven at large scale. BookStack provides a more modern interface organised around books, chapters, and pages rather than flat wiki pages. Wiki.js offers a contemporary editing experience with Markdown support and Git integration for version control.
| Platform type | Characteristics | Best for | Examples |
|---|---|---|---|
| Wiki | Flat or book structure, collaborative editing, organic growth, interlinked pages, minimal workflow | Internal documentation, technical reference, collaborative authoring | MediaWiki, BookStack, Wiki.js, DokuWiki |
| Knowledge base | Article-centric, editorial workflow, structured templates, search optimised, access analytics | Curated guidance, external audiences, controlled publishing | Confluence, Notion, Document360, Guru |
| Integrated suite | Part of larger platform, tight integration, unified permissions, vendor lock-in | Existing platform users, minimal admin overhead | SharePoint, Notion, Google Sites |
| Hybrid approach | Multiple tools, federated search, specialised purposes, integration overhead | Complex requirements, multiple audiences | Wiki + knowledge base + search engine combination |

Figure 4: Knowledge management platform types and their characteristics
Knowledge base platforms provide more structure than wikis. Rather than freeform pages, knowledge bases organise content into articles with consistent templates, metadata, and publishing workflows. This structure suits curated guidance content where quality control matters more than collaborative authoring speed. Confluence from Atlassian combines wiki and knowledge base characteristics with strong integration into Atlassian’s project management tools. Notion provides flexible databases that can model various knowledge structures. Document360 focuses specifically on knowledge base use cases with strong search and analytics.
Integrated suites embed knowledge management within broader collaboration platforms. Microsoft 365 organisations can use SharePoint as a knowledge repository with integration into Teams, search, and the broader Microsoft ecosystem. Google Workspace organisations can use Google Sites with integration into Drive and Google Search. These integrated approaches reduce administrative overhead but create vendor dependency and limit flexibility.
The hybrid approach combines multiple specialised tools with federated search. An organisation might maintain a wiki for technical documentation, a knowledge base for programme guidance, and a lessons learned database for project retrospectives. A search layer indexes all three systems, enabling users to find knowledge regardless of where it resides. This approach maximises flexibility but requires investment in integration and consistent metadata across systems.
Open Source Platform Options
| Platform | Architecture | Strengths | Considerations |
|---|---|---|---|
| MediaWiki | PHP, MySQL/MariaDB | Extremely mature, extensive extensions, proven at Wikipedia scale | Complex administration, dated interface without extensions |
| BookStack | PHP, MySQL/MariaDB | Modern interface, book/chapter/page structure, WYSIWYG editing | Smaller community than MediaWiki |
| Wiki.js | Node.js, PostgreSQL | Modern stack, Git integration, multiple editors, GraphQL API | Requires Node.js expertise |
| DokuWiki | PHP, flat files | No database required, simple deployment, good for small teams | Limited scalability, basic features |
| XWiki | Java | Enterprise features, structured data, extensive API | Java stack complexity, resource intensive |
Commercial Platform Options
| Platform | Nonprofit Programme | Jurisdictional Notes |
|---|---|---|
| Confluence | Atlassian Community licence (free for eligible nonprofits) | Australian company, data in various regions |
| Notion | 50% discount for nonprofits | US company, data primarily US |
| SharePoint | Microsoft 365 nonprofit licensing | US company, CLOUD Act applies, regional data centres available |
| Guru | Nonprofit pricing available | US company |
| Document360 | Contact for nonprofit pricing | Indian company, data in multiple regions |
Knowledge Lifecycle
Knowledge moves through distinct phases from creation to eventual retirement. Understanding this lifecycle enables organisations to design systems and practices that support knowledge at each stage rather than focusing only on initial capture.
```
CREATE          CURATE          USE             MAINTAIN
   |               |               |               |
   v               v               v               v
+--------+      +--------+      +--------+      +--------+
|Capture |      |Review  |      |Search  |      |Update  |
|from    +----->|Organise+----->|Browse  +----->|Validate|
|sources |      |Enhance |      |Apply   |      |Archive |
+--------+      +--------+      +--------+      +--------+
   |               |               |               |
   v               v               v               v
Documents       Taxonomy        Decisions       Currency
Expertise       Links           Actions         Accuracy
Lessons         Metadata        Learning        Relevance
                                                    |
                                                    v
                                                 RETIRE
                                                +--------+
                                                |Archive |
                                                |Delete  |
                                                |Migrate |
                                                +--------+
```

Figure 5: Knowledge lifecycle phases from creation through retirement
The creation phase captures knowledge from its sources. These sources include documents produced through normal work (reports, plans, evaluations), structured capture activities (lessons learned workshops, after-action reviews, exit interviews), and extraction from tacit knowledge (expert interviews, process documentation, decision recording). Creation quality determines downstream value. Poorly written or organised content requires extensive curation effort or remains unfindable despite existing in the system.
The curation phase transforms raw captured knowledge into usable, findable content. Curation activities include editorial review for clarity and completeness, classification within the taxonomy, tagging with relevant metadata, linking to related knowledge items, and enhancement with context that aids understanding. In organisations with dedicated knowledge management roles, curation represents a primary responsibility. In organisations without dedicated roles, curation occurs through peer review and editorial guidelines that authors follow.
The use phase is where knowledge creates value. Users find knowledge through search, browse the taxonomy, follow links from related content, or receive recommendations. They apply knowledge to inform decisions, guide actions, avoid repeating mistakes, and build on previous work. Use patterns provide feedback on knowledge quality and organisation. Frequently accessed content demonstrates value. Content that users find but immediately leave suggests poor quality or misleading metadata. Content never found despite relevance indicates discoverability problems.
The maintenance phase ensures knowledge remains accurate and relevant over time. Knowledge degrades as circumstances change, policies update, staff depart, and better approaches emerge. Maintenance activities include scheduled reviews (annual reviews for stable content, quarterly for rapidly changing domains), triggered updates when referenced source material changes, validation that guidance remains current, and consolidation of fragmented content into comprehensive articles. Without active maintenance, knowledge bases accumulate outdated content that undermines trust in the entire system.
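The scheduled-review policy described above reduces to a simple check of each article's last review date against its cadence. A Python sketch, assuming a two-tier cadence (annual for stable content, quarterly for fast-moving domains) and hypothetical article data:

```python
from datetime import date, timedelta

# Assumed review cadence, per the text: annual for stable content,
# roughly quarterly for rapidly changing domains.
REVIEW_INTERVALS = {
    "stable": timedelta(days=365),
    "fast-moving": timedelta(days=91),
}

def articles_due_for_review(catalogue, today):
    """Return titles whose last review is older than their cadence allows."""
    due = []
    for title, (last_reviewed, volatility) in catalogue.items():
        if today - last_reviewed > REVIEW_INTERVALS[volatility]:
            due.append(title)
    return sorted(due)

# Hypothetical catalogue: title -> (last review date, volatility tier)
catalogue = {
    "Flood response guidance":  (date(2024, 1, 10), "fast-moving"),
    "Travel policy":            (date(2024, 3, 1),  "stable"),
    "Cash transfer procedures": (date(2025, 1, 5),  "fast-moving"),
}

print(articles_due_for_review(catalogue, today=date(2025, 2, 1)))
```

In practice this check runs on a schedule and feeds a maintenance queue, so overdue articles surface to curators rather than silently going stale.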
The retirement phase removes knowledge that no longer serves users. Retirement decisions consider whether content is superseded by newer material, no longer relevant to current operations, inaccurate with no prospect of correction, or legally required to be retained. Retirement options include archiving (removing from active navigation but preserving for reference), deletion (permanent removal), or migration (moving to a different system such as a historical archive).
Knowledge Capture Practices
Capturing knowledge requires deliberate practices integrated into normal work. Knowledge capture fails when treated as an additional burden separate from operational activities. Effective capture embeds knowledge documentation into existing workflows where the marginal effort is low and the connection to source events remains fresh.
After-action reviews capture lessons immediately following significant activities. A 30-minute structured discussion after a distribution, assessment, or workshop surfaces what worked, what did not work, and what should change next time. The discussion follows a consistent format: intended outcomes, actual outcomes, reasons for differences, and recommended actions. A designated note-taker documents key points. The notes transform into knowledge base content through light curation rather than extensive rewriting.
Exit interviews capture departing staff knowledge. When staff with significant tenure or specialised expertise leave, structured interviews extract critical knowledge before departure. The interview covers current responsibilities, key relationships, ongoing issues, and advice for successors. For highly specialised roles, extended handover periods enable observation and shadowing that transfers tacit knowledge. Exit interview findings feed into role documentation and relevant knowledge base articles.
Project retrospectives capture lessons at project milestones and completion. Unlike after-action reviews focused on specific events, retrospectives examine entire project phases or complete projects. They address what the project learned about the context, what approaches proved effective, what challenges arose and how the team addressed them, and what recommendations apply to future similar projects. Retrospective findings link to project documentation and contribute to organisational guidance on similar project types.
Decision records capture the reasoning behind significant decisions. When the organisation makes important choices (selecting a vendor, changing a policy, entering a new geographic area), a decision record documents what options were considered, what criteria applied, what evidence informed the decision, and why the chosen option prevailed. Decision records enable future staff to understand not just what was decided but why, preventing repetition of analysis already performed and enabling better decisions when circumstances change.
Expert documentation sessions capture tacit knowledge from subject matter experts. When an individual holds critical knowledge not documented elsewhere, dedicated sessions extract and codify that knowledge. The expert explains their domain while an interviewer asks clarifying questions and a documenter captures content. Multiple sessions over weeks prove more effective than marathon single sessions. The resulting documentation receives expert review before publication.
A worked example illustrates capture integration. An organisation runs a cash transfer programme in three countries. After each monthly distribution cycle, field teams conduct 20-minute after-action reviews documenting any issues with beneficiary verification, payment delivery, or post-distribution complaints. These notes feed into a monthly programme learning summary that the programme manager compiles. Quarterly, the programme team conducts a retrospective examining trends across sites. The retrospective produces updated guidance documents and contributes to an annual lessons learned report. When the programme manager departs after three years, exit interviews capture relationship knowledge (which donors respond to what, how to navigate partner organisations) and operational wisdom (common failure modes, seasonal patterns, budget timing). This capture system produces ongoing documentation with minimal additional effort because capture activities integrate into existing programme management routines.
Search and Discovery
Users must find knowledge for it to create value. Discovery depends on search capability, navigation structures, and recommendation systems that surface relevant content. Many knowledge management initiatives fail not because knowledge is absent but because users cannot find what exists.
Search provides direct access when users know what they seek. Effective search requires full-text indexing of knowledge content, recognition of synonyms and related terms, ranking that surfaces the most relevant results, filtering by attributes (date, category, content type), and highlighting of search terms in results. Search quality depends on content quality: well-written articles with clear titles, descriptive summaries, and consistent terminology rank higher and surface for the right queries more reliably than poorly written content with vague titles and inconsistent language.
Search configuration affects results significantly. A humanitarian organisation might configure search to recognise that “CTP” and “cash transfer programming” and “cash assistance” refer to the same concept. It might boost recent content in rankings since newer guidance supersedes older material. It might enable filtering by region so that Kenya staff can focus results on East Africa. These configuration choices require ongoing tuning based on search analytics showing what users seek and whether they find it.
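Synonym recognition of this kind is typically configured as equivalence groups that the search layer expands at query time. A minimal sketch, using the CTP example from the text plus one invented group:

```python
# Synonym groups the search layer should treat as one concept.
# The CTP group is from the text; the M&E group is an assumed example.
SYNONYMS = [
    {"ctp", "cash transfer programming", "cash assistance"},
    {"m&e", "monitoring and evaluation", "mel"},
]

def expand_query(query):
    """Expand a query into all equivalent phrasings, so an index
    match on any variant counts as a match on the concept."""
    q = query.lower().strip()
    for group in SYNONYMS:
        if q in group:
            return sorted(group)
    return [q]

print(expand_query("CTP"))
print(expand_query("budget"))
```

Real search engines implement the same idea with synonym dictionaries applied during analysis or query rewriting; the point is that the groups are configuration that must be maintained from search analytics, not a one-off setup.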
Navigation provides access when users want to explore rather than search for specific items. The taxonomy creates a browsable hierarchy. Users can navigate from top-level categories down to specific topics, discovering content along the way. Navigation serves users who do not know exactly what they need, users learning about new domains, and users seeking comprehensive coverage of a topic rather than a single article.
Navigation design balances depth against breadth. A taxonomy with two levels but 50 categories at each level overwhelms users with choices. A taxonomy with six levels but three categories at each level requires excessive clicking to reach content. Most knowledge bases perform well with three to four levels of hierarchy and five to fifteen categories at each level, producing manageable navigation with reasonable depth.
Recommendations surface relevant content that users did not explicitly request. Recommendation approaches include displaying related articles based on shared tags or links, showing popular content in relevant categories, highlighting recently updated material, and suggesting content based on user role or previous reading. Recommendations help users discover valuable knowledge they would not have searched for and keep knowledge bases feeling active and current.
Federated search enables discovery across multiple systems. When knowledge resides in a wiki, a document management system, and an intranet, users should not need to search each separately. Federated search indexes content from all sources and presents unified results. Implementation requires consistent metadata across systems and careful relevance ranking to prevent one system from dominating results.
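At its core, federated search is a merge of ranked result lists from independent indexes, and the classic pitfall is that raw relevance scores are not comparable across systems. A sketch that normalises scores per source before merging; source names and scores are hypothetical, and a real deployment would query each backend live rather than receive precomputed lists:

```python
def federated_search(results_by_source):
    """Merge ranked results from several systems into one list.
    Scores are min-max normalised per source so one system's
    inflated raw scores cannot dominate the combined ranking."""
    merged = []
    for source, results in results_by_source.items():
        if not results:
            continue
        scores = [score for _, score in results]
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0
        for title, score in results:
            merged.append((source, title, (score - lo) / span))
    return sorted(merged, key=lambda row: row[2], reverse=True)

# Hypothetical raw results: the wiki scores 0-100, the KB scores 0-1.
sources = {
    "wiki": [("Flood SOP", 92.0), ("Flood photos index", 40.0)],
    "knowledge-base": [("Flood response guidance", 0.95),
                       ("Kenya strategy", 0.20)],
}
for source, title, score in federated_search(sources):
    print(f"{score:.2f}  {title}  ({source})")
```

Min-max normalisation is the simplest option; production systems often use more robust score calibration, but the requirement is the same: some transformation before ranking across sources.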
Tacit Knowledge and Expertise Location
Explicit knowledge captured in documents and knowledge bases represents only part of organisational knowledge. Tacit knowledge embedded in experienced staff provides equal or greater value but resists documentation. Effective knowledge management addresses tacit knowledge through expertise location systems that connect knowledge seekers with knowledge holders.
Expertise directories document who knows what. Rather than attempting to extract all tacit knowledge into documents, expertise directories enable users to find people with relevant knowledge. Directory entries include areas of expertise, project experience, languages, geographic experience, and willingness to be contacted. Users searching for knowledge about nutrition programming in francophone Africa find experts they can contact rather than only finding documented content.
Maintaining expertise directories requires ongoing effort. Self-reported expertise becomes outdated as staff take new assignments and develop new skills. Automated approaches supplement self-reporting by inferring expertise from project assignments, authored documents, and organisational data. A staff member who spent two years on protection programming, authored three protection guidance documents, and completed protection training appears in directory results for protection queries regardless of whether they explicitly claimed protection expertise.
Communities of practice connect staff with shared professional interests. A community of practice for monitoring and evaluation specialists creates a venue for members to share experiences, ask questions, and develop collective expertise. Communities operate through regular meetings (virtual or in-person), discussion forums, shared resource libraries, and collaborative projects. The community develops shared understanding that supplements formal documentation and creates relationships that facilitate tacit knowledge transfer.
Mentorship and shadowing transfer tacit knowledge through sustained relationship. A new programme manager learns not just from written guidance but from observing and discussing with an experienced programme manager. Mentorship makes explicit the contextual judgment that documentation cannot capture: how to read a room during community meetings, when to push back on donor requirements, how to recognise early warning signs of staff burnout. Structured mentorship programmes pair staff deliberately rather than relying on informal relationships to develop.
Knowledge networks visualise connections between people and topics. Network analysis of project assignments, co-authorship, and collaboration patterns reveals who works with whom and who spans different domains. These networks identify key connectors who bridge different parts of the organisation, potential single points of failure where knowledge concentrates in one person, and clusters that might benefit from more cross-connection.
Knowledge Transfer During Transitions
Staff transitions create acute knowledge management challenges. When experienced staff depart, retire, or move to new roles, their accumulated knowledge departs with them unless deliberately transferred. Transition planning should begin well before a departure is announced, but in practice transitions frequently occur rapidly with limited handover time.
The transfer process scales with role criticality and departure timeline. When a programme director with 15 years of experience announces retirement in six months, extensive transfer activities become feasible: documented handover materials, successor shadowing, recorded video explanations, multiple exit interviews, and facilitated introductions to key relationships. When a programme officer with two years of experience resigns with two weeks’ notice, transfer activities compress to essential documentation updates, a single exit interview, and written notes on active matters.
```
                  SHORT TIMELINE            LONG TIMELINE
                  (< 4 weeks)               (> 8 weeks)

HIGH              +--------------------+    +--------------------+
KNOWLEDGE         | Priority capture   |    | Comprehensive      |
RISK              | - Critical docs    |    | transfer           |
                  | - Key contacts     |    | - Shadowing        |
                  | - Active issues    |    | - Documentation    |
                  | - Exit interview   |    | - Introductions    |
                  |                    |    | - Multiple exits   |
                  | 10-15 hours        |    | 40-60 hours        |
                  +--------------------+    +--------------------+

LOW               +--------------------+    +--------------------+
KNOWLEDGE         | Standard           |    | Normal handover    |
RISK              | offboarding        |    | - Role docs        |
                  | - Role docs        |    | - Overlap period   |
                  | - Active items     |    | - Exit interview   |
                  |                    |    |                    |
                  | 3-5 hours          |    | 8-12 hours         |
                  +--------------------+    +--------------------+
```

Figure 6: Knowledge transfer approach based on timeline and knowledge risk
Knowledge risk assessment determines transfer intensity. Factors increasing knowledge risk include: length of tenure (more years means more accumulated knowledge), role specialisation (unique roles without peers), relationship centrality (staff who connect otherwise separate groups), documentation gaps (roles with sparse existing documentation), and strategic importance (roles affecting major programmes or decisions). A 10-year veteran in a unique technical advisor role with extensive external relationships and minimal documentation presents extreme knowledge risk. A one-year programme assistant in a role with three peers and standardised procedures presents minimal knowledge risk.
Transfer activities address different knowledge types. Explicit knowledge transfers through documentation updates, procedure reviews, and file organisation. Tacit knowledge transfers through conversation, demonstration, and relationship introduction. Relationship knowledge transfers through facilitated introductions where the departing staff member explicitly endorses their successor to key contacts.
The handover document structures transfer activities and captures results. Core sections include: current responsibilities and their status, key relationships with context on each, active issues requiring attention, advice for the successor, and location of relevant files and systems. The handover document becomes a permanent knowledge asset that future role holders can reference.
Measuring Knowledge Management Effectiveness
Knowledge management requires investment in systems, content creation, and curation. Demonstrating value to justify this investment requires measurement of both activity and outcomes. Pure activity metrics (articles created, searches performed) indicate system usage but not value creation. Outcome metrics (decisions improved, errors avoided, time saved) indicate value but prove difficult to attribute directly to knowledge management. Effective measurement combines both types.
Activity metrics track system engagement:
- Content volume: total articles, articles created per month, articles updated per month
- Search activity: searches performed, search-to-click ratio, failed searches (no results or no clicks)
- Navigation activity: page views, browse paths, time on page
- Contribution activity: contributors per month, contributions per contributor
- Maintenance activity: articles reviewed, articles archived, update frequency
Quality metrics assess content and system health:
- Currency: percentage of articles reviewed within target period (e.g., 80% reviewed within 12 months)
- Coverage: knowledge gaps identified through user requests or search failures
- Accuracy: errors reported, corrections made
- Findability: average searches to find relevant content, percentage of searches yielding useful results
- User satisfaction: ratings on articles, survey results
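Several of the metrics above reduce to simple calculations over system records. The sketch below computes two of them, currency and a search-to-click ratio, from assumed record shapes; real knowledge base and search-log exports will differ.

```python
from datetime import date, timedelta

def currency(articles, today, target_days=365):
    """Percentage of articles whose last review falls within target_days."""
    reviewed = sum(1 for a in articles
                   if (today - a["last_reviewed"]).days <= target_days)
    return 100 * reviewed / len(articles)

def search_to_click_ratio(searches):
    """Share of searches where the user clicked at least one result.

    The remainder approximates the failed-search rate (no results or
    no clicks).
    """
    clicked = sum(1 for s in searches if s["clicks"] > 0)
    return clicked / len(searches)

# Illustrative data: one of four articles is overdue for review.
today = date(2024, 6, 1)
articles = [
    {"last_reviewed": today - timedelta(days=100)},
    {"last_reviewed": today - timedelta(days=400)},  # overdue
    {"last_reviewed": today - timedelta(days=30)},
    {"last_reviewed": today - timedelta(days=200)},
]
print(currency(articles, today))  # 75.0 — below an 80% target, so flag it
```

Wiring such calculations into a monthly report keeps the quality metrics visible without manual counting.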
Outcome metrics connect knowledge management to organisational value:
- Time saved: self-reported time savings from finding existing knowledge rather than recreating it
- Errors avoided: incidents where documented lessons prevented repeating mistakes
- Decision quality: qualitative assessment of how knowledge informed significant decisions
- Onboarding efficiency: time for new staff to reach productivity, attributed to knowledge availability
- Knowledge retention: institutional knowledge preserved through transitions
A worked example demonstrates measurement application. An organisation with 500 staff operates a knowledge base with 1,200 articles. Monthly activity metrics show 3,400 searches, 8,200 page views, 45 new articles, and 120 article updates from 35 contributors. Quality metrics show 72% of articles reviewed within 12 months (below 80% target, flagging maintenance backlog), 15 failed searches logged (indicating content gaps for follow-up), and average article rating of 4.1 out of 5 stars. An annual survey asks staff whether they found useful knowledge in the past month (78% yes), whether they saved time by finding existing rather than creating new content (65% yes), and their estimated hours saved (median 2 hours per month). Extrapolating the time savings: 500 staff × 65% × 2 hours × 12 months = 7,800 hours per year. At an average cost of £30 per hour, this represents £234,000 in staff time redirected from recreating knowledge to other activities. This return justifies the knowledge management investment of approximately £80,000 annually (system costs plus portion of staff time for content creation and curation).
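The extrapolation in the worked example is straightforward arithmetic; laying it out as code makes the assumptions explicit and easy to re-run with an organisation's own survey figures. All numbers below come directly from the example.

```python
# Survey and cost inputs from the worked example.
staff = 500
share_saving = 0.65      # 65% report saving time by reusing existing knowledge
hours_per_month = 2      # median self-reported hours saved per month
hourly_cost = 30         # average staff cost, GBP per hour
annual_km_cost = 80_000  # system costs plus staff time for creation/curation

hours_saved = staff * share_saving * hours_per_month * 12
value = hours_saved * hourly_cost

print(hours_saved)              # 7800.0 hours per year
print(value)                    # 234000.0 GBP of redirected staff time
print(value - annual_km_cost)   # 154000.0 GBP net of the KM investment
```

Note that the value figure represents redirected staff time rather than cash savings, so it supports the investment case but should not be reported as budget reduction.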
Implementation Considerations
For Organisations with Limited IT Capacity
Knowledge management need not require dedicated systems or specialised staff. The minimum viable approach uses existing tools configured for knowledge purposes. If the organisation uses SharePoint, a dedicated knowledge site within SharePoint provides basic wiki and knowledge base functionality. If the organisation uses Google Workspace, Google Sites creates simple knowledge pages with integration to Drive documents. If the organisation uses neither, free, open-source wiki platforms (BookStack, Wiki.js) or hosted tools with free plans such as Notion provide starting points.
Start with high-value knowledge that staff repeatedly seek. Common starting points include onboarding materials for new staff, procedures for frequent tasks, contact directories, and lessons learned from recent projects. Creating 20-30 foundational articles provides immediate value while establishing patterns for future content. Assign knowledge management responsibilities to existing roles rather than creating new positions. A programme manager might own programme-related knowledge, an IT coordinator might own technical knowledge, and administrative staff might own operational knowledge.
Avoid elaborate taxonomy design initially. A simple three-level structure (category > subcategory > article) suffices until content volume demands more organisation. Start with five to eight top-level categories aligned with how staff think about the organisation. Add subcategories as article counts in any category exceed 15-20 items. Refine taxonomy based on actual usage patterns rather than theoretical completeness.
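The "split when a category exceeds 15-20 items" rule lends itself to a periodic automated check. This is a minimal sketch under the assumption that each article carries a single top-level category label; the category names are illustrative.

```python
from collections import Counter

def categories_needing_split(article_categories, threshold=15):
    """Return categories whose article count exceeds the threshold,
    flagging them as candidates for subcategories."""
    counts = Counter(article_categories)
    return sorted(cat for cat, n in counts.items() if n > threshold)

# Illustrative category labels, one per article.
articles = ["Programmes"] * 22 + ["Operations"] * 9 + ["IT"] * 14
print(categories_needing_split(articles))  # ['Programmes']
```

Running such a check quarterly lets the taxonomy grow from actual usage, as the text recommends, rather than from up-front design.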
For Organisations with Established IT Functions
Organisations with dedicated IT capacity can implement comprehensive knowledge management systems with integration, governance, and measurement. Platform selection should consider integration requirements with existing systems. If the organisation already uses Atlassian products, Confluence integrates with Jira, Trello, and Bitbucket. If the organisation uses Microsoft 365 extensively, SharePoint knowledge bases integrate with Teams, search, and security models. If integration matters less than feature flexibility, standalone platforms like BookStack or Notion provide independence from existing ecosystems.
Establish governance structures including ownership model (who owns which knowledge domains), editorial standards (templates, style guide, quality criteria), review cycles (how often different content types require review), and contribution expectations (who creates and updates content). Governance prevents the accumulation of outdated or inconsistent content that undermines trust in the knowledge base.
Implement federated search if knowledge resides in multiple systems. Users should find relevant content regardless of whether it lives in the wiki, the document management system, SharePoint sites, or other repositories. Search federation requires consistent metadata, relevance tuning, and ongoing attention to search quality.
Field and Offline Considerations
Field staff frequently work with intermittent or no connectivity. If knowledge is to serve field operations, knowledge systems must accommodate offline access. Wiki and knowledge base platforms vary in offline capability. Some platforms (Notion, certain wiki configurations) support offline access through mobile applications that synchronise when connectivity returns. Others require constant connectivity and provide no offline access.
For environments with reliable power but intermittent connectivity, local caching strategies help. Mobile applications that download content for offline reading enable access during connectivity gaps. For environments with severe connectivity constraints, exported documentation (PDF compilations, offline HTML archives) distributed periodically provides static but accessible knowledge.
Field-specific knowledge requires field input to remain relevant. Knowledge created at headquarters about field operations degrades quickly without field validation and updates. Governance structures should include field representation in content review and clear channels for field staff to report inaccuracies or gaps.