
Analytics Strategy

Analytics strategy defines how an organisation transforms data into actionable insight to support its mission. The strategy encompasses what questions analytics answers, who performs analysis, what capabilities exist, and how analytics outputs inform decisions. For mission-driven organisations, analytics serves three primary purposes: demonstrating programme outcomes to stakeholders, improving operational efficiency, and satisfying donor reporting requirements.

Analytics
The systematic examination of data to discover patterns, draw conclusions, and support decision-making. Analytics ranges from simple aggregation to complex statistical modelling.
Analytics maturity
The progression of analytical capability from basic reporting through predictive and prescriptive analysis. Higher maturity requires greater investment in skills, technology, and data infrastructure.
Self-service analytics
A model where business users create their own analyses without requiring technical assistance for each request. Requires governed data access and appropriate tooling.
Data literacy
The ability to read, understand, create, and communicate data as information. Encompasses both technical skills and critical interpretation of analytical outputs.
Analytical question
A specific inquiry that data can answer. Well-formed analytical questions specify the metric, dimensions, time period, and decision the answer informs.

Analytics maturity model

Organisations progress through distinct stages of analytical capability, each building on the previous and requiring additional investment in data infrastructure, skills, and governance.

+------------------------------------------------------------------+
| ANALYTICS MATURITY MODEL |
+------------------------------------------------------------------+
| |
| Stage 4: PRESCRIPTIVE |
| +------------------------------------------------------------+ |
| | "What should we do?" | |
| | Automated recommendations, optimisation, decision support | |
| | Requires: ML infrastructure, action integration, feedback | |
| +------------------------------------------------------------+ |
| ^ |
| | |
| Stage 3: PREDICTIVE |
| +------------------------------------------------------------+ |
| | "What will happen?" | |
| | Forecasting, risk scoring, propensity modelling | |
| | Requires: Statistical skills, clean historical data, MLOps | |
| +------------------------------------------------------------+ |
| ^ |
| | |
| Stage 2: DIAGNOSTIC |
| +------------------------------------------------------------+ |
| | "Why did it happen?" | |
| | Root cause analysis, drill-down, segmentation | |
| | Requires: Dimensional data, skilled analysts, governed data| |
| +------------------------------------------------------------+ |
| ^ |
| | |
| Stage 1: DESCRIPTIVE |
| +------------------------------------------------------------+ |
| | "What happened?" | |
| | Standard reports, dashboards, KPI tracking | |
| | Requires: Basic data warehouse, reporting tools, defined | |
| | metrics | |
| +------------------------------------------------------------+ |
| |
+------------------------------------------------------------------+

Figure 1: Analytics maturity stages showing progressive capability and requirements

Descriptive analytics answers historical questions: how many beneficiaries received services last quarter, what was the expenditure rate by project, which field offices submitted timely reports. Descriptive analytics requires defined metrics, consistent data collection, and basic aggregation capability. Most organisations operate primarily at this stage, and competence here forms the foundation for all higher stages. An organisation cannot diagnose why something happened without first reliably knowing what happened.
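
As a worked illustration of this stage, the sketch below answers one descriptive question: how many unique beneficiaries each field office served last quarter. It assumes a hypothetical service_records.csv export with office, service_date, and beneficiary_id columns; the file and column names are illustrative, not a prescribed schema.

```python
import pandas as pd

# Hypothetical export of service delivery records; column names are illustrative.
records = pd.read_csv("service_records.csv", parse_dates=["service_date"])

# Restrict to the most recent complete quarter.
last_quarter = pd.Timestamp.today().to_period("Q") - 1
in_quarter = records[records["service_date"].dt.to_period("Q") == last_quarter]

# Descriptive analytics: count unique beneficiaries per field office.
summary = (
    in_quarter.groupby("office")["beneficiary_id"]
    .nunique()
    .rename("beneficiaries_reached")
    .sort_values(ascending=False)
)
print(summary)
```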

Diagnostic analytics investigates causation and variation: why did beneficiary satisfaction decrease in Region A, what factors correlate with project delays, which characteristics distinguish successful interventions. Diagnostic analytics requires dimensional data models that enable drill-down and segmentation, analysts skilled in exploratory analysis, and governed data access that permits ad-hoc investigation. The transition from descriptive to diagnostic represents the largest capability gap for most organisations because it requires both technical infrastructure (dimensional data) and human capability (analytical skills).
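
A minimal drill-down sketch for a diagnostic question such as the Region A satisfaction decline, assuming a hypothetical survey extract with survey_date, region, intervention_type, and satisfaction_score columns (all names illustrative). The same aggregation is repeated at progressively finer grain, and sample sizes are checked so a small-sample artefact is not mistaken for a real decline.

```python
import pandas as pd

# Hypothetical beneficiary satisfaction survey extract; columns are illustrative.
surveys = pd.read_csv("satisfaction_surveys.csv", parse_dates=["survey_date"])
surveys["quarter"] = surveys["survey_date"].dt.to_period("Q")

# Step 1: confirm the descriptive finding - average satisfaction by region and quarter.
by_region = surveys.pivot_table(
    index="quarter", columns="region", values="satisfaction_score", aggfunc="mean"
)
print(by_region.tail(4))

# Step 2: drill into the declining region, segmented by intervention type.
region_a = surveys[surveys["region"] == "Region A"]
by_intervention = region_a.pivot_table(
    index="quarter", columns="intervention_type",
    values="satisfaction_score", aggfunc="mean"
)
print(by_intervention.tail(4))

# Step 3: check whether the drop is broad-based or driven by a small sample.
counts = region_a.groupby(["quarter", "intervention_type"]).size().unstack()
print(counts.tail(4))
```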

Predictive analytics forecasts future states: which projects are likely to exceed budget, what beneficiary volumes should be expected next quarter, which staff members show attrition risk. Predictive analytics requires clean historical data spanning sufficient time periods, statistical or machine learning skills, and infrastructure to develop and maintain models. Mission-driven organisations with stable, recurring programmes accumulate the historical data predictive models require; organisations responding to emergent crises lack this foundation.
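
A deliberately simple forecasting sketch, assuming a stable programme with a few years of quarterly history and no structural breaks; the volumes below are made up. Production forecasting would need proper model selection, validation, and uncertainty quantification, but the shape of the task is the same.

```python
import numpy as np

# Hypothetical quarterly beneficiary volumes, oldest first (illustrative data).
volumes = np.array([1180, 1225, 1310, 1295, 1360, 1420, 1455, 1510], dtype=float)
quarters = np.arange(len(volumes))

# Fit a simple linear trend; adequate only for stable, recurring programmes.
slope, intercept = np.polyfit(quarters, volumes, deg=1)

# Forecast the next quarter and attach a crude error band from the residuals.
next_q = len(volumes)
forecast = slope * next_q + intercept
residual_sd = np.std(volumes - (slope * quarters + intercept), ddof=2)

print(f"Next-quarter forecast: {forecast:.0f} beneficiaries "
      f"(+/- {2 * residual_sd:.0f} under the model's assumptions)")
```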

Prescriptive analytics recommends actions: which intervention should be offered to this beneficiary, how should resources be allocated across field offices, what staffing level optimises service delivery. Prescriptive analytics requires integration with operational systems to deliver recommendations at decision points, feedback mechanisms to measure recommendation effectiveness, and governance frameworks to manage automated decision-making. Few mission-driven organisations operate at this stage, and the ethical considerations surrounding automated recommendations in humanitarian contexts require careful examination.

Most mission-driven organisations benefit from achieving strong descriptive capability with selective diagnostic capability for priority questions. Pursuing predictive or prescriptive analytics without solid descriptive foundations produces unreliable results and wastes resources.

Analytics for mission-driven organisations

Mission-driven organisations face distinctive analytical requirements that differ from commercial contexts. Three primary analytical domains demand attention: programme outcomes measurement, operational efficiency analysis, and donor reporting.

Programme outcomes analytics demonstrates whether interventions achieve intended results. This domain includes output tracking (services delivered, beneficiaries reached), outcome measurement (behaviour change, condition improvement), and impact assessment (long-term change attributable to the intervention). Programme analytics requires careful metric design because measuring the wrong things produces perverse incentives. An organisation measuring only beneficiaries reached may optimise for volume over quality; an organisation measuring only cost per beneficiary may underinvest in complex cases requiring intensive support.

Operational efficiency analytics examines resource utilisation and process performance. This domain includes financial analysis (burn rate, cost allocation, budget variance), human resources analysis (staff productivity, turnover patterns, capacity utilisation), and process analysis (cycle times, error rates, bottlenecks). Operational analytics often receives less attention than programme analytics but directly affects an organisation’s ability to deliver programmes sustainably.

Donor reporting analytics produces the specific metrics and narratives required by funding agreements. This domain includes indicator tracking against targets, compliance reporting, and impact demonstration for renewal applications. Donor requirements vary significantly: USAID requires specific indicator definitions and reporting formats, ECHO mandates particular frameworks, UN agencies expect alignment with cluster indicators. Organisations managing multiple funding sources must reconcile different indicator definitions and reporting periods.

+--------------------------------------------------------------------+
| ANALYTICS DOMAINS FOR NGOs |
+--------------------------------------------------------------------+
| |
| +------------------+ +------------------+ +------------------+ |
| | PROGRAMME | | OPERATIONAL | | DONOR | |
| | OUTCOMES | | EFFICIENCY | | REPORTING | |
| +------------------+ +------------------+ +------------------+ |
| | | | | | | |
| | Outputs | | Financial | | Indicator | |
| | - Services | | - Burn rate | | tracking | |
| | - Beneficiaries | | - Cost variance | | - Targets | |
| | - Activities | | - Allocation | | - Actuals | |
| | | | | | - Variance | |
| | Outcomes | | Human Resources | | | |
| | - Behaviour | | - Productivity | | Compliance | |
| | change | | - Turnover | | - Requirements | |
| | - Condition | | - Capacity | | - Evidence | |
| | improvement | | | | | |
| | | | Process | | Impact | |
| | Impact | | - Cycle times | | demonstration | |
| | - Attribution | | - Error rates | | - Narratives | |
| | - Long-term | | - Throughput | | - Case studies | |
| | change | | | | | |
| +------------------+ +------------------+ +------------------+ |
| | | | |
| +--------------------+--------------------+ |
| | |
| +-----------v-----------+ |
| | INTEGRATED VIEW | |
| | Cost per outcome | |
| | Value for money | |
| | Strategic dashboard | |
| +-----------------------+ |
| |
+--------------------------------------------------------------------+

Figure 2: Three primary analytics domains and their integration

The most valuable analytical capability connects these domains: understanding cost per outcome, demonstrating value for money, and linking operational decisions to programme results. An organisation that tracks programme outputs separately from operational costs cannot answer whether a more expensive approach produces proportionally better outcomes.
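
A minimal sketch of that integrated view, using hypothetical per-project figures for outcomes achieved and fully loaded costs; joining the two yields cost per outcome, which neither programme nor operational reporting provides on its own.

```python
import pandas as pd

# Hypothetical per-project extracts; figures and column names are illustrative.
outcomes = pd.DataFrame({
    "project": ["WASH-01", "EDU-02", "LIVELIHOODS-03"],
    "outcomes_achieved": [420, 310, 95],            # e.g. households with safe water
})
costs = pd.DataFrame({
    "project": ["WASH-01", "EDU-02", "LIVELIHOODS-03"],
    "total_cost_usd": [168_000, 142_600, 85_500],   # fully loaded programme + support costs
})

# Integrated view: cost per outcome links operational spend to programme results.
integrated = outcomes.merge(costs, on="project")
integrated["cost_per_outcome_usd"] = (
    integrated["total_cost_usd"] / integrated["outcomes_achieved"]
).round(0)

print(integrated.sort_values("cost_per_outcome_usd"))
```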

Self-service versus centralised analytics

Analytics capability distributes across a spectrum from fully centralised (all analysis performed by a dedicated team) to fully self-service (all users perform their own analysis). Neither extreme serves most organisations well.

Centralised analytics concentrates analytical skills and ensures consistency but creates bottlenecks and delays. When every analytical question requires a request to the analytics team, the team becomes overwhelmed, requesters wait days or weeks for answers, and the organisation’s analytical capacity is limited by team size. Centralised models suit organisations with few analytical questions, where consistency matters more than speed, or where data sensitivity requires restricted access.

Self-service analytics distributes capability to business users, enabling immediate answers to ad-hoc questions but risking inconsistency and misinterpretation. When anyone can query data and create reports, different users may calculate the same metric differently, misunderstand data limitations, or draw incorrect conclusions. Self-service models suit organisations with many diverse analytical needs, strong data literacy among staff, and robust data governance.

+--------------------------------------------------------------------+
| ANALYTICS DELIVERY SPECTRUM |
+--------------------------------------------------------------------+
| |
| CENTRALISED SELF-SERVICE |
| <-------------------------------------------------------------> |
| |
| +-------------+ +-------------+ +-------------+ +----------+ |
| | | | | | | | | |
| | Full | | Centre of | | Embedded | | Full | |
| | Central | | Excellence | | Analysts | | Self- | |
| | | | + Support | | + Platform | | Service | |
| +------+------+ +------+------+ +------+------+ +-----+----+ |
| | | | | |
| v v v v |
| All requests Standards + Analysts in Users query |
| go to central training for business units data directly |
| team supported with central |
| self-service governance |
| |
| CHARACTERISTICS: |
| |
| Consistency High High Medium Low |
| Speed Low Medium High High |
| Scalability Low Medium High High |
| Data literacy Not required Some required Required High |
| Governance Implicit Explicit Explicit Heavy |
| |
+--------------------------------------------------------------------+

Figure 3: Analytics delivery models from centralised to self-service

Most organisations benefit from a hybrid model combining centralised governance with distributed execution. The centre of excellence model maintains a central team that establishes standards, creates canonical metrics, manages the data platform, and provides training, while enabling trained users to perform routine analysis independently. This model requires investment in both central capability (to build and maintain infrastructure) and distributed capability (to train and support business users).

The appropriate position on this spectrum depends on organisational factors. Smaller organisations with limited analytical demand function adequately with centralised models; the bottleneck remains manageable. Larger organisations with diverse analytical needs require self-service capability to scale; they cannot hire enough central analysts to meet demand. Organisations handling sensitive data (protection, safeguarding) require tighter central control regardless of size.

Data literacy and organisational readiness

Analytics capability depends not only on technology and data infrastructure but also on organisational readiness to consume and act on analytical outputs. Data literacy encompasses the skills staff need to interpret data correctly and the organisational culture that values evidence-based decision-making.

Data literacy operates at multiple levels. Basic data literacy includes reading charts and tables, understanding percentage change versus percentage point change, recognising that correlation does not imply causation, and identifying when sample sizes are too small for reliable conclusions. Intermediate data literacy includes understanding statistical significance, recognising sampling bias, interpreting confidence intervals, and identifying confounding variables. Advanced data literacy includes designing analytical questions, selecting appropriate methods, validating model assumptions, and communicating uncertainty appropriately.
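
One recurring confusion covered at the basic level is percentage change versus percentage point change. A small illustration with made-up coverage figures:

```python
# Illustration with made-up figures: coverage rising from 40% to 50%.
before, after = 0.40, 0.50

percentage_point_change = (after - before) * 100               # 10 percentage points
relative_percentage_change = (after - before) / before * 100   # 25% relative increase

print(f"{percentage_point_change:.0f} percentage points, "
      f"{relative_percentage_change:.0f}% relative increase")
```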

Assessing organisational data literacy reveals where investment is needed. Common indicators of low data literacy include: staff requesting “the number” without specifying dimensions or time periods, decisions made on single data points without trend context, correlation presented as causation, and rejection of data that contradicts intuition without investigation. Common indicators of adequate data literacy include: staff formulating specific analytical questions, appropriate scepticism about surprising results, recognition of data limitations, and requests for confidence intervals or margins of error.

Building data literacy requires sustained investment across multiple channels. Formal training addresses foundational concepts but rarely changes behaviour without reinforcement. Embedding data in decision processes (requiring analytical support for budget requests, including data review in programme meetings) normalises data use. Creating feedback loops where decisions are evaluated against predictions builds appreciation for analytical rigour. Celebrating data-informed decisions (and examining data-contradicted decisions without blame) shapes culture.

Data literacy investment produces compounding returns. Staff who understand data ask better questions, creating demand for better analytics. Better analytics produces more useful outputs, increasing staff engagement. Increased engagement builds further literacy. Conversely, organisations that invest in analytics technology without addressing literacy find that sophisticated tools go unused and expensive infrastructure delivers minimal value.

Analytics governance

Analytics governance establishes who can access what data, what analyses require approval, how metrics are defined, and how analytical outputs are distributed. Governance prevents both chaos (inconsistent metrics, conflicting analyses) and paralysis (excessive approval requirements blocking timely analysis).

Data access governance determines which roles can query which data. Access governance intersects with data classification: highly confidential data (protection cases, HR records) requires strict access controls regardless of analytical purpose, while operational data (activity counts, expenditure summaries) may be broadly accessible. Access governance must balance protection with utility; overly restrictive access prevents legitimate analysis while overly permissive access creates breach risk.
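
A minimal sketch of how data classification might map to query access, using hypothetical classification levels and role names. In practice this policy would live in the warehouse or BI platform's permission model rather than application code; the sketch only illustrates the mapping.

```python
# Hypothetical mapping from data classification to roles permitted to query it.
ACCESS_POLICY = {
    "public":              {"all_staff"},
    "internal":            {"all_staff"},
    "confidential":        {"analyst", "programme_manager", "data_governance"},
    "highly_confidential": {"protection_officer", "data_governance"},  # e.g. case data
}

def can_query(role: str, classification: str) -> bool:
    """Return True if the role may query data at this classification level."""
    allowed = ACCESS_POLICY.get(classification, set())
    return "all_staff" in allowed or role in allowed

# Illustrative checks.
print(can_query("analyst", "confidential"))          # True
print(can_query("analyst", "highly_confidential"))   # False
```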

Metric governance ensures consistent definitions across the organisation. When different teams calculate “beneficiaries reached” differently (unique individuals versus service contacts, direct versus indirect), aggregated numbers become meaningless. Metric governance establishes canonical definitions, documents calculation methods, and identifies the authoritative source for each metric. The data governance function typically owns metric governance, with analytical staff contributing definitions and business owners approving them.
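
A minimal sketch of what one entry in a canonical metric registry might record, with illustrative fields and values rather than a prescribed standard; the point is that the definition, calculation, authoritative source, owner, and caveats are written down in one governed place.

```python
from dataclasses import dataclass, field

@dataclass
class MetricDefinition:
    """One entry in a hypothetical canonical metric registry."""
    name: str
    definition: str
    calculation: str
    authoritative_source: str
    owner: str
    caveats: list[str] = field(default_factory=list)

beneficiaries_reached = MetricDefinition(
    name="Beneficiaries reached",
    definition="Unique individuals receiving at least one direct service in the period.",
    calculation="Count of distinct beneficiary IDs in service records for the period.",
    authoritative_source="Central service-delivery database (not country spreadsheets).",
    owner="Programme quality lead",
    caveats=[
        "Excludes indirect beneficiaries (counted separately).",
        "De-duplication depends on beneficiary ID quality at registration.",
    ],
)

print(beneficiaries_reached)
```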

Output governance determines which analyses require review before distribution. Routine dashboards with established metrics may require no approval. Ad-hoc analyses answering novel questions may require review by someone who understands the data’s limitations. Analyses informing external communications or donor reports typically require multiple approvals. Output governance prevents embarrassing errors but must not create bottlenecks that delay time-sensitive decisions.

+------------------------------------------------------------------+
| ANALYTICS GOVERNANCE MODEL |
+------------------------------------------------------------------+
| |
| +---------------------------+ |
| | DATA GOVERNANCE | Owns: Data access, metric |
| | COUNCIL | definitions, data quality |
| +-------------+-------------+ |
| | |
| v |
| +-------------+-------------+ |
| | ANALYTICS LEAD / | Owns: Tool standards, training, |
| | CENTRE OF EXCELLENCE | output review, platform |
| +-------------+-------------+ |
| | |
| +--------+--------+ |
| | | |
| v v |
| +----+----+ +----+----+ |
| | Domain | | Domain | Own: Domain-specific metrics, |
| | Analyst | | Analyst | business interpretation |
| | (Progs) | | (Ops) | |
| +----+----+ +----+----+ |
| | | |
| v v |
| +----+----+ +----+----+ |
| | Self- | | Self- | Consume: Governed data, |
| | Service | | Service | standard reports, |
| | Users | | Users | ad-hoc analysis |
| +---------+ +---------+ |
| |
| GOVERNANCE BOUNDARIES: |
| +---------------------------------------------------------+ |
| | Data access | Controlled by data classification and | |
| | | role-based permissions | |
| +-----------------+---------------------------------------+ |
| | Metric changes | Require data governance approval | |
| +-----------------+---------------------------------------+ |
| | New dashboards | Require analytics lead review | |
| +-----------------+---------------------------------------+ |
| | External outputs| Require domain owner + comms approval | |
| +-----------------+---------------------------------------+ |
| |
+------------------------------------------------------------------+

Figure 4: Analytics governance structure showing accountability levels

Governance overhead should match analytical risk. Low-risk analyses (routine operational reports using established metrics) require minimal governance. High-risk analyses (novel metrics informing strategy, externally published figures, analyses affecting resource allocation) require proportionally more review. Applying heavy governance uniformly either blocks routine work or fails to catch significant errors.

Building analytics capability

Organisations build analytics capability through coordinated investment across four dimensions: people (skills and roles), process (workflows and governance), technology (tools and infrastructure), and data (quality and accessibility). Investing in one dimension without the others produces limited returns.

People capability includes both dedicated analytical roles and distributed skills among business staff. Dedicated roles range from data engineers (building pipelines and infrastructure) through analysts (performing analysis and creating reports) to data scientists (building predictive models). Distributed skills include basic data literacy for all staff plus intermediate skills for power users who perform self-service analysis. Organisations with limited resources often cannot afford dedicated data engineering and data science roles; they must build these capabilities within generalist IT and analyst positions.

Process capability includes both analytical workflows (how questions become analyses become decisions) and governance processes (how access is controlled, metrics are defined, outputs are reviewed). Immature analytical processes exhibit long cycle times from question to answer, inconsistent metric definitions, and limited connection between analytical outputs and decisions. Mature analytical processes exhibit clear intake mechanisms, defined service levels, consistent methodology, and documented decision impact.

Technology capability includes tools for data storage, transformation, analysis, and visualisation. Technology investment ranges from spreadsheets and basic databases (minimal investment, limited scale) through business intelligence platforms (moderate investment, good scale) to data platforms with ML capability (significant investment, maximum capability). Technology investment must match organisational readiness; sophisticated tools in organisations lacking data literacy produce little value.

Data capability includes both the availability of data (is the necessary data collected and accessible?) and the quality of data (is it accurate, complete, timely, consistent?). Organisations frequently underestimate data capability requirements. Analytical projects fail not because of inadequate tools or skills but because the required data does not exist, cannot be accessed, or is of too poor quality to support reliable analysis.
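
A minimal sketch of the kind of quick data-capability check worth running before committing to an analytical project, assuming a hypothetical service-records extract (file and column names illustrative). It measures completeness of key fields, duplication, and timeliness.

```python
import pandas as pd

# Hypothetical extract to assess before committing to an analytical project.
df = pd.read_csv("service_records.csv", parse_dates=["service_date"])

checks = {
    # Completeness: share of rows with the fields the analysis would depend on.
    "complete_beneficiary_id": df["beneficiary_id"].notna().mean(),
    "complete_outcome_score": df["outcome_score"].notna().mean(),
    # Consistency: duplicate records that would inflate counts.
    "duplicate_rows": df.duplicated(subset=["beneficiary_id", "service_date"]).mean(),
    # Timeliness: how stale the most recent record is.
    "days_since_last_record": (pd.Timestamp.today() - df["service_date"].max()).days,
}

for name, value in checks.items():
    print(f"{name}: {value:.2%}" if isinstance(value, float) else f"{name}: {value}")
```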

+--------------------------------------------------------------------+
| ANALYTICS CAPABILITY BUILDING |
+--------------------------------------------------------------------+
| |
| STAGE 1: FOUNDATION |
| +---------------------------------------------------------------+ |
| | People: Basic data literacy training for key staff | |
| | Process: Define core metrics, establish reporting cadence | |
| | Technology: Spreadsheets, basic database, simple dashboards | |
| | Data: Identify critical data sources, assess quality | |
| | Investment: 0.5 FTE analyst, minimal technology budget | |
| | Timeline: 6-12 months | |
| +---------------------------------------------------------------+ |
| | |
| v |
| STAGE 2: ESTABLISHED |
| +---------------------------------------------------------------+ |
| | People: Dedicated analyst role, power user training | |
| | Process: Intake process, metric governance, self-service | |
| | Technology: BI platform, basic data warehouse | |
| | Data: Core data warehouse, documented quality, catalogue | |
| | Investment: 1-2 FTE analyst, BI platform costs | |
| | Timeline: 12-24 months from Stage 1 | |
| +---------------------------------------------------------------+ |
| | |
| v |
| STAGE 3: ADVANCED |
| +---------------------------------------------------------------+ |
| | People: Analytics team with specialisation, broad literacy | |
| | Process: Centre of excellence, embedded analysts, feedback | |
| | Technology: Modern data platform, advanced analytics tools | |
| | Data: Integrated data platform, lineage, quality monitoring | |
| | Investment: 3-5 FTE analytics team, significant tech budget | |
| | Timeline: 24-36 months from Stage 2 | |
| +---------------------------------------------------------------+ |
| |
+--------------------------------------------------------------------+

Figure 5: Analytics capability building stages with investment requirements

Capability building proceeds iteratively, not linearly. An organisation does not complete “people” before addressing “process.” Each iteration strengthens all four dimensions proportionally, with each stage building on the previous. Attempting to skip stages (investing in advanced technology without foundational data capability, hiring data scientists without basic analytical processes) produces expensive failures.

Analytics roadmap development

An analytics roadmap translates strategic intent into sequenced initiatives that build capability over time. Roadmap development begins with understanding current state, defining target state, identifying gaps, and prioritising initiatives to close gaps.

Current state assessment examines existing capability across all four dimensions. Assessment methods include capability surveys (what tools exist, what skills are present), usage analysis (what reports are produced, what questions are asked), stakeholder interviews (what decisions lack data support, what frustrations exist), and output review (what is the quality of existing analytical outputs). Honest assessment requires acknowledging limitations; organisations frequently overestimate their analytical capability.

Target state definition articulates what analytical capability the organisation needs. Target state should align with strategic priorities: if the strategy emphasises programme quality, target state should emphasise outcome analytics; if the strategy emphasises operational efficiency, target state should emphasise process analytics. Target state should be specific enough to guide investment decisions: “better analytics” provides no guidance; “ability to report beneficiary outcomes by intervention type within 30 days of quarter close” guides investment.

Gap analysis identifies specific deficiencies between current and target state. Gaps may be in any capability dimension: missing skills (no one understands statistical analysis), inadequate processes (no metric governance), insufficient technology (no data warehouse), or unavailable data (outcome data not collected). Gap analysis should prioritise gaps by impact on target state; not all gaps require immediate attention.

Initiative prioritisation sequences investments to close gaps efficiently. Prioritisation considers dependencies (data quality must improve before advanced analytics becomes useful), quick wins (initiatives delivering visible value rapidly build momentum), and resource constraints (what can the organisation realistically execute). A common pattern: begin with high-visibility dashboards that demonstrate value and build support, then invest in underlying data infrastructure, then expand analytical capability.

A worked example illustrates roadmap development:

An organisation with 200 staff across 5 countries seeks to improve programme outcome reporting. Current state assessment reveals: programme data in multiple disconnected spreadsheets, no dedicated analyst (M&E officer produces reports manually), inconsistent indicator definitions across countries, 3-month lag from data collection to consolidated reporting, and leadership frustrated by inability to answer basic outcome questions.

Target state definition: consolidated outcome dashboard updated monthly, consistent indicator definitions across all programmes, ability to segment outcomes by intervention type, geography, and beneficiary characteristics.

Gap analysis identifies: data integration gap (data in disconnected sources), skills gap (no one can build dashboards), process gap (no metric governance), technology gap (no BI platform), and data quality gap (inconsistent definitions).

Initiative prioritisation produces this sequence:

Year 1, Q1-Q2: Establish metric governance, define canonical indicator definitions, document calculation methods. Low cost, addresses root cause of inconsistency.

Year 1, Q3-Q4: Deploy BI platform (Metabase or similar open source), train M&E officer on dashboard creation, build initial dashboard from existing data. Moderate cost, delivers visible output.

Year 2, Q1-Q2: Build data integration pipeline consolidating country spreadsheets into a central database (a sketch follows this example). Moderate cost, addresses data integration gap.

Year 2, Q3-Q4: Expand dashboard to include segmentation, train country M&E staff on self-service, establish monthly review cadence. Low cost, builds capability.

Total investment over 24 months: approximately 0.25 FTE dedicated time plus BI platform costs (zero if open source self-hosted, modest if cloud-hosted).
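
A minimal sketch of the Year 2 data integration step, assuming hypothetical country workbooks that already use the canonical columns agreed in Year 1; it consolidates them into a single SQLite table that the BI platform can query. File names, the column list, and the database choice are illustrative.

```python
import sqlite3
from pathlib import Path

import pandas as pd

# Hypothetical country spreadsheets exported with the agreed canonical columns.
COUNTRY_FILES = Path("country_data").glob("*_outcomes.xlsx")
CANONICAL_COLUMNS = ["country", "project", "indicator", "period", "value"]

frames = []
for path in COUNTRY_FILES:
    df = pd.read_excel(path)
    missing = set(CANONICAL_COLUMNS) - set(df.columns)
    if missing:
        # Surface schema drift rather than silently consolidating bad data.
        raise ValueError(f"{path.name} is missing columns: {sorted(missing)}")
    frames.append(df[CANONICAL_COLUMNS])

consolidated = pd.concat(frames, ignore_index=True)

# Load into a central database the dashboard can read from.
with sqlite3.connect("analytics.db") as conn:
    consolidated.to_sql("programme_outcomes", conn, if_exists="replace", index=False)

print(f"Loaded {len(consolidated)} rows from consolidated country workbooks.")
```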

Measuring analytics value

Analytics investment requires justification, yet measuring analytics value presents methodological challenges. Attribution is difficult: if a decision improved outcomes, was it because of analytical insight or other factors? Counterfactuals are unavailable: what would have happened without the analysis? Despite these challenges, organisations can assess analytics value through multiple lenses.

Usage metrics indicate whether analytical outputs are consumed. Dashboard views, report downloads, and query volumes demonstrate engagement. Low usage suggests outputs do not address user needs; high usage suggests value. Usage metrics do not measure impact but provide leading indicators of potential value.

Decision influence tracks whether analytical outputs affected decisions. This requires documenting decisions and their analytical inputs. If budget allocation changed because analysis revealed cost per outcome variation, that decision demonstrates analytical influence. Decision influence is harder to measure than usage but more meaningful.

Outcome improvement measures whether decisions informed by analytics produced better results than prior decisions. This requires longitudinal tracking and ideally comparison groups. If programmes redesigned based on outcome analysis show improved results compared to unchanged programmes, analytics contributed to improvement. Outcome improvement provides the strongest value evidence but takes longest to accumulate.

Efficiency gains measure whether analytics reduces time or cost for existing activities. If consolidated dashboards replace manual report compilation, time savings are quantifiable. If self-service analytics reduces bottlenecks, throughput increases are measurable. Efficiency gains provide concrete justification but capture only part of analytics value.

A balanced value measurement approach tracks indicators across all four lenses:

+---------------------+---------------------------------------------+------------------------+
| Value lens          | Metrics                                     | Collection method      |
+---------------------+---------------------------------------------+------------------------+
| Usage               | Dashboard views, active users, query volume | Platform analytics     |
| Decision influence  | Decisions citing analytical inputs          | Decision documentation |
| Outcome improvement | Before/after outcome comparison             | Programme monitoring   |
| Efficiency          | Time saved, reports automated               | Time tracking          |
+---------------------+---------------------------------------------+------------------------+

Value measurement should acknowledge uncertainty. Analytics value often manifests in decision quality improvement that is difficult to quantify precisely. Demanding rigorous ROI calculation for every analytical investment creates perverse incentives toward easily measurable but lower-value work. Organisations should measure what they can while accepting that some value remains unquantified.

Ethical considerations in analytics

Analytics in mission-driven contexts raises ethical considerations beyond those in commercial settings. Power imbalances between organisations and affected populations, vulnerability of beneficiaries, and potential for harm through misuse or misinterpretation require explicit attention.

Algorithmic bias affects any analysis using historical data to inform decisions about people. If historical programme selection exhibited bias (intentional or unintentional), analysis of that data perpetuates bias. Predictive models trained on biased data produce biased predictions. Organisations using analytics to inform beneficiary targeting, resource allocation, or programme design must examine data for historical bias and consider whether analytical outputs might disadvantage particular groups.

Interpretation risk arises when analytical outputs are misunderstood or misapplied. A correlation between beneficiary characteristics and outcomes does not imply causation; excluding beneficiaries with those characteristics based on correlation would be both ethically problematic and analytically incorrect. Organisations must ensure that analytical outputs include appropriate caveats and that consumers understand limitations.

Surveillance concerns emerge when extensive data collection enables detailed monitoring of individuals. Beneficiary tracking systems that enable outcome measurement also enable surveillance. The same data that demonstrates programme impact could, in hostile hands, identify vulnerable individuals. Analytics strategy must consider what data is collected, how long it is retained, and who can access it, with explicit consideration of misuse scenarios.

Consent and participation questions whether affected communities have meaningful input into how data about them is collected, analysed, and used. Best practice involves community engagement in analytical priorities, transparency about how data informs decisions, and feedback mechanisms allowing communities to challenge analytical conclusions. Full implementation of participatory analytics is resource-intensive; organisations should at minimum avoid analytical practices that communities would object to if they understood them.

These considerations do not prohibit analytics but require thoughtful implementation. Analytics strategy should include explicit ethical review processes for high-stakes analyses, documentation of bias assessment methods, guidelines for interpretation and communication, and data governance aligned with protection principles.

Implementation considerations

For organisations with limited IT capacity

Small organisations or those with minimal analytical capability can build useful analytics with modest investment. The foundation is not sophisticated technology but clear thinking about what questions matter and disciplined data collection to answer them.

Start with the three to five most important questions leadership asks repeatedly. These might include: how many people are we reaching, what are we spending, are outcomes improving? Design data collection to answer these questions specifically. Implement basic tracking in spreadsheets with consistent structure (same columns, same definitions, same update schedule across all programmes).

Create simple dashboards using free tools. Google Sheets with pivot tables and charts provides basic capability. Metabase or Apache Superset offer more sophisticated visualisation with minimal infrastructure if someone has Linux administration skills. Power BI free tier provides individual-use capability. The goal is consistent, timely answers to priority questions, not sophisticated analysis.

Invest in data literacy before technology. Time spent ensuring staff understand percentages, trends, and basic statistical concepts returns more value than time spent implementing complex tools that users cannot interpret. One staff member who can translate data into actionable insight provides more value than expensive infrastructure producing unused reports.

For organisations with established functions

Organisations with dedicated analytical roles and basic infrastructure can extend capability through standardisation, self-service, and selective advanced analytics.

Standardisation addresses the common problem of inconsistent metrics and redundant reports. Invest in metric governance: document canonical definitions, establish the single authoritative source for each metric, and retire competing definitions. Consolidate redundant reports into unified dashboards. Standardisation is politically difficult (users resist changing “their” reports) but essential for scaling analytical capability.

Self-service analytics extends capacity beyond the central team. Identify high-frequency, low-complexity analytical tasks that business users could perform independently. Provide training, governed data access, and appropriate tools. Measure self-service adoption and continuously improve training and tools based on usage patterns. Successful self-service requires sustained investment in training and support, not merely tool deployment.

Selective advanced analytics applies predictive or prescriptive techniques to high-value problems where sufficient data exists. Good candidates include: forecasting programme demand (where historical patterns predict future needs), predicting project risk (where past project data reveals risk factors), and optimising resource allocation (where sufficient outcome data enables comparison). Poor candidates include: any problem lacking historical data, any context with rapid change invalidating historical patterns, and any situation where model errors carry high humanitarian cost.

For organisations operating in field contexts

Analytics strategy in field operations must accommodate intermittent connectivity, limited infrastructure, and distributed data collection.

Design for offline-first data collection. Field staff should collect and review data without requiring connectivity. Synchronisation should handle conflicts gracefully when multiple offline collectors converge. Analysis should tolerate data latency; dashboards showing “data as of last sync” with clear timestamps prevent misinterpretation.
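
A minimal sketch of the conflict handling and freshness labelling described above, assuming hypothetical records that carry a record identifier, an updated-at timestamp, and the collecting device. It applies last-write-wins per record, keeps superseded versions for review rather than discarding them, and derives the "data as of" timestamp a dashboard can display; a production sync layer would need considerably more.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Record:
    record_id: str
    device: str
    updated_at: datetime
    payload: dict

def merge_offline_batches(batches: list[list[Record]]) -> tuple[dict, list[Record]]:
    """Last-write-wins merge of records collected offline on several devices.

    Returns merged records keyed by record_id plus the superseded versions,
    which are kept for manual review rather than silently discarded.
    """
    merged: dict[str, Record] = {}
    superseded: list[Record] = []
    for batch in batches:
        for record in batch:
            current = merged.get(record.record_id)
            if current is None or record.updated_at > current.updated_at:
                if current is not None:
                    superseded.append(current)
                merged[record.record_id] = record
            else:
                superseded.append(record)
    return merged, superseded

# Illustrative use: two field devices edited the same record while offline.
batches = [
    [Record("HH-0042", "tablet-01", datetime(2024, 5, 2, 9, 30, tzinfo=timezone.utc),
            {"household_size": 5})],
    [Record("HH-0042", "tablet-07", datetime(2024, 5, 3, 14, 10, tzinfo=timezone.utc),
            {"household_size": 6})],
]
merged, superseded = merge_offline_batches(batches)

data_as_of = max(r.updated_at for r in merged.values())
print(f"Dashboard label: data as of {data_as_of:%Y-%m-%d %H:%M} UTC; "
      f"{len(superseded)} superseded version(s) flagged for review")
```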

Prioritise lightweight, mobile-accessible analytical tools. Field staff accessing dashboards over mobile networks require fast-loading, responsive interfaces. Heavy dashboards with complex visualisations may be inaccessible. Consider what insights field staff need and optimise delivery for their access context.

Build analytical capability at intermediate levels (regional hubs, country offices), not only at headquarters. Centralised analytics creates bottlenecks and misses local context. Regional analysts who understand local conditions, speak local languages, and can engage directly with field staff produce better insights than remote analysts working only with submitted data.

Balance analytical ambition against data quality constraints. Field data collection faces challenges headquarters rarely encounter: inconsistent connectivity, limited training opportunities, high staff turnover, competing priorities during active response. Sophisticated analytical models require data quality that field collection may not achieve. Simple, robust analyses often provide more reliable insight than complex analyses based on uncertain data.
