
Humanitarian Data Interoperability

Humanitarian data interoperability encompasses the standards, protocols, and architectural patterns that enable data exchange between organisations responding to crises. The humanitarian sector operates through coordination mechanisms where multiple agencies deliver services to the same populations, making shared understanding of locations, populations, activities, and needs essential for effective response. Interoperability transforms isolated datasets into connected information that supports coordination, reduces duplication, and enables collective analysis.

Interoperability
The capability of systems and organisations to exchange data and use the information exchanged without manual transformation or interpretation.
HXL (Humanitarian Exchange Language)
A hashtag-based standard for tagging spreadsheet columns to enable automated processing and combination of humanitarian data.
IATI (International Aid Transparency Initiative)
A publishing standard for aid activity information that enables tracking of funding flows and programme activities across the development and humanitarian sectors.
HDX (Humanitarian Data Exchange)
A platform operated by OCHA’s Centre for Humanitarian Data for sharing humanitarian datasets, providing APIs for programmatic access.
Common Operational Datasets (CODs)
Authoritative reference datasets for humanitarian response, including administrative boundaries, population statistics, and facility locations.
P-code
A standardised geographic identifier used in humanitarian operations, structured hierarchically to represent administrative divisions.

The humanitarian data ecosystem

The humanitarian data ecosystem comprises multiple interconnected standards, platforms, and data flows that collectively enable information sharing across the sector. Understanding this ecosystem requires recognising both the technical standards that define data structures and the institutional arrangements that govern data sharing.

+---------------------------------------------------------------------------+
|                        HUMANITARIAN DATA ECOSYSTEM                        |
+---------------------------------------------------------------------------+
|                                                                           |
|  +------------------+      +------------------+      +------------------+ |
|  |  DATA PRODUCERS  |      |  DATA PLATFORMS  |      |  DATA CONSUMERS  | |
|  +------------------+      +------------------+      +------------------+ |
|  | - UN Agencies    |      |       HDX        |      | - Cluster Leads  | |
|  | - INGOs          |----->|    (datasets)    |----->| - OCHA           | |
|  | - Local NGOs     |      |  IATI Registry   |      | - Donors         | |
|  | - Governments    |----->|   (activities)   |----->| - Researchers    | |
|  | - Red Cross/     |      |       CODs       |      |                  | |
|  |   Crescent       |      |   (reference)    |      |                  | |
|  +------------------+      +--------+---------+      +------------------+ |
|                                     |                                     |
|                                     | (P-codes, boundaries)               |
|                                     v                                     |
|                            +------------------+                           |
|                            |   Organisation   |                           |
|                            |     Systems      |                           |
|                            | - Case Mgmt      |                           |
|                            | - M&E            |                           |
|                            | - Beneficiary    |                           |
|                            |   Registration   |                           |
|                            +------------------+                           |
|                                                                           |
+---------------------------------------------------------------------------+
|                              STANDARDS LAYER                              |
|   +------------+     +------------+     +------------+     +------------+ |
|   |    HXL     |     |    IATI    |     |    ISO     |     |  Cluster   | |
|   |  (tagging) |     | (activity) |     |  (codes)   |     |  (sector)  | |
|   +------------+     +------------+     +------------+     +------------+ |
+---------------------------------------------------------------------------+

Figure 1: Humanitarian data ecosystem showing producers, platforms, consumers, and underlying standards

Data flows through this ecosystem in multiple directions. Organisations publish operational data to coordination platforms while simultaneously consuming reference data that provides common frameworks for locations and populations. The standards layer provides the shared vocabulary that makes these exchanges meaningful.

Humanitarian Exchange Language

HXL provides a lightweight mechanism for adding machine-readable tags to spreadsheet data without altering the human-readable structure. Unlike traditional data standards that require reformatting data into specific schemas, HXL works by inserting a row of hashtags between the header row and data rows. This approach preserves existing workflows while enabling automated processing.

A standard spreadsheet containing programme data might have columns labelled “Region”, “District”, “Number of Beneficiaries”, and “Date of Distribution”. Different organisations use different column names for equivalent concepts. HXL addresses this variance by adding standardised tags that identify what each column contains regardless of its human-readable label.

+-----------------------------------------------------------------------------+
|                            HXL TAGGING STRUCTURE                            |
+-----------------------------------------------------------------------------+
|                                                                             |
|  ORIGINAL SPREADSHEET:                                                      |
|  +------------+------------+------------------+------------------+          |
|  | Region     | District   | Beneficiaries    | Distribution Date|          |
|  +------------+------------+------------------+------------------+          |
|  | North      | Gulu       | 1,250            | 2024-03-15       |          |
|  | North      | Kitgum     | 890              | 2024-03-16       |          |
|  +------------+------------+------------------+------------------+          |
|                                                                             |
|  WITH HXL TAGS:                                                             |
|  +------------+------------+------------------+------------------+          |
|  | Region     | District   | Beneficiaries    | Distribution Date| <- Header|
|  +------------+------------+------------------+------------------+          |
|  | #adm1+name | #adm2+name | #affected+reached| #date+reported   | <- HXL   |
|  +------------+------------+------------------+------------------+          |
|  | North      | Gulu       | 1,250            | 2024-03-15       | <- Data  |
|  | North      | Kitgum     | 890              | 2024-03-16       |          |
|  +------------+------------+------------------+------------------+          |
|                                                                             |
+-----------------------------------------------------------------------------+

Figure 2: HXL tagging structure showing hashtag row insertion between headers and data

HXL hashtags follow a consistent structure. The core hashtag identifies the data type, while attributes (prefixed with +) provide additional specificity. The tag #adm1+name indicates an administrative level 1 name, while #adm1+code would indicate a code for the same administrative level. This attribute system allows granular description without proliferating base hashtags.

The core HXL vocabulary covers humanitarian data domains systematically. Geographic tags include #country, #adm1 through #adm5 for administrative levels, #geo for coordinates, and #loc for general locations. Population and beneficiary tags include #affected, #inneed, #targeted, #reached, and demographic attributes like +f for female, +m for male, and +children for age groups. Activity tags include #org for organisations, #sector for clusters, #activity for interventions, and #output for deliverables. Temporal tags use #date with attributes like +reported, +start, and +end.
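Because the tag row is just another spreadsheet row, HXL-tagged files can be consumed with ordinary tooling. The sketch below, using only the Python standard library (libraries such as libhxl offer fuller implementations), re-keys columns by their HXL tags so that files with different human-readable labels become directly comparable:

```python
import csv
import io

def parse_hxl(csv_text):
    """Read an HXL-tagged CSV (header row, hashtag row, data rows)
    and return records keyed by HXL tag instead of column label."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, tags, data = rows[0], rows[1], rows[2:]
    # Keep only the columns that carry an HXL tag
    tagged = [(i, tag) for i, tag in enumerate(tags) if tag.startswith('#')]
    return [{tag: row[i] for i, tag in tagged} for row in data]

sample = """Region,District,Beneficiaries,Distribution Date
#adm1+name,#adm2+name,#affected+reached,#date+reported
North,Gulu,1250,2024-03-15
North,Kitgum,890,2024-03-16
"""
records = parse_hxl(sample)
```

Because records are keyed by tag rather than label, two files tagged the same way can be concatenated regardless of their column names.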

HXL implementation in organisational systems

Implementing HXL requires decisions about where tagging occurs in the data pipeline. Three patterns exist: source tagging, transformation tagging, and export tagging.

Source tagging embeds HXL in data collection instruments. Mobile data collection tools like KoboToolbox support HXL tags in form definitions, producing tagged data at the point of collection. This approach ensures consistent tagging but requires investment in form design and limits flexibility when data requirements change.

Transformation tagging applies HXL during data processing. When data moves from operational systems to analytical systems, transformation scripts add HXL tags based on field mappings. This approach keeps operational systems unchanged while enabling tagged exports for coordination purposes. The transformation mapping becomes a maintained artefact that requires updates when source schemas change.

Export tagging adds HXL at the point of data sharing. Export functions include logic to map internal field names to HXL tags when generating files for external consumption. This approach concentrates tagging logic in export modules and adapts easily to different recipient requirements, but duplicates mapping logic if multiple export paths exist.
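A minimal sketch of export tagging, with hypothetical internal field names, shows how the mapping logic concentrates in a single maintained table:

```python
import csv
import io

# Hypothetical internal field names mapped to (export label, HXL tag)
HXL_MAP = {
    'region_name': ('Region', '#adm1+name'),
    'district_name': ('District', '#adm2+name'),
    'reached_total': ('Beneficiaries', '#affected+reached'),
    'dist_date': ('Date', '#date+reported'),
}

def export_hxl_csv(records, field_map=HXL_MAP):
    """Render records (dicts keyed by internal field names) as an
    HXL-tagged CSV string: header row, tag row, then data rows."""
    fields = list(field_map)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow([field_map[f][0] for f in fields])  # human-readable header
    writer.writerow([field_map[f][1] for f in fields])  # HXL tag row
    for rec in records:
        writer.writerow([rec.get(f, '') for f in fields])
    return out.getvalue()

csv_text = export_hxl_csv([
    {'region_name': 'North', 'district_name': 'Gulu',
     'reached_total': 1250, 'dist_date': '2024-03-15'},
])
```

Keeping the map as a single dictionary makes schema changes a one-place edit, though each additional export path would need to share it to avoid the duplication noted above.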

For organisations collecting field data, source tagging through KoboToolbox or ODK produces immediate benefits with minimal additional effort. The form definition includes HXL tags that flow through to exported data:

# KoboToolbox form excerpt with HXL tags
survey:
  - type: select_one region
    name: region
    label: Region
    hxl: "#adm1+name"
  - type: select_one district
    name: district
    label: District
    hxl: "#adm2+name"
  - type: integer
    name: beneficiaries_reached
    label: Number of beneficiaries reached
    hxl: "#affected+reached"
  - type: date
    name: distribution_date
    label: Date of distribution
    hxl: "#date+activity"

IATI standard

The International Aid Transparency Initiative standard structures information about aid activities, enabling tracking of funding flows from donor commitments through implementation to results. IATI data uses XML format with a defined schema that captures organisational, financial, and programmatic information.

An IATI activity record contains several required elements. The iati-identifier provides a globally unique identifier combining the reporting organisation’s identifier with the organisation’s internal activity identifier. The reporting-org element identifies who published the data. The title and description elements provide human-readable information. The activity-status element uses a codelist to indicate whether the activity is in pipeline, implementation, completed, or other states.

Financial information in IATI captures transactions at granular level. Each transaction includes a transaction type (commitment, disbursement, expenditure, incoming funds), a value with currency and value date, a transaction date, and optionally provider and receiver organisation details. This structure enables reconstruction of funding flows across the aid chain.

<iati-activity>
  <iati-identifier>GB-CHC-123456-WASH-UG-2024</iati-identifier>
  <reporting-org ref="GB-CHC-123456" type="21">
    <narrative>Example Humanitarian Organisation</narrative>
  </reporting-org>
  <title>
    <narrative>WASH Response Northern Uganda</narrative>
  </title>
  <activity-status code="2"/><!-- Implementation -->
  <activity-date type="1" iso-date="2024-01-01"/><!-- Planned start -->
  <activity-date type="2" iso-date="2024-01-15"/><!-- Actual start -->
  <recipient-country code="UG" percentage="100"/>
  <sector code="14030" vocabulary="1"/><!-- Basic drinking water -->
  <transaction>
    <transaction-type code="2"/><!-- Commitment -->
    <transaction-date iso-date="2023-11-15"/>
    <value currency="USD" value-date="2023-11-15">500000</value>
    <provider-org ref="XM-DAC-12-1" type="10">
      <narrative>Donor Agency</narrative>
    </provider-org>
  </transaction>
  <transaction>
    <transaction-type code="3"/><!-- Disbursement -->
    <transaction-date iso-date="2024-01-20"/>
    <value currency="USD" value-date="2024-01-20">150000</value>
  </transaction>
  <location>
    <location-reach code="1"/><!-- Activity -->
    <name><narrative>Gulu District</narrative></name>
    <administrative vocabulary="G1" level="2" code="UG-G1-01"/>
    <point srsName="http://www.opengis.net/def/crs/EPSG/0/4326">
      <pos>2.7747 32.2990</pos>
    </point>
  </location>
  <result type="1"><!-- Output -->
    <title><narrative>Water points rehabilitated</narrative></title>
    <indicator measure="1"><!-- Unit -->
      <title><narrative>Number of water points</narrative></title>
      <baseline year="2024" value="0"/>
      <period>
        <period-start iso-date="2024-01-01"/>
        <period-end iso-date="2024-06-30"/>
        <target value="50"/>
        <actual value="23"/>
      </period>
    </indicator>
  </result>
</iati-activity>

Publishing IATI data requires registration with the IATI Registry and establishing a publishing workflow. Organisations must choose between manual file creation and automated publishing from internal systems. Manual approaches suit organisations with small activity portfolios or limited technical capacity. Automated approaches extract data from grants management or programme management systems and transform it to IATI XML.

The IATI Datastore provides query access to published data. API endpoints support filtering by reporting organisation, recipient country, sector, and date ranges. A query retrieving all WASH sector activities in Uganda from the past year returns structured data that analysis tools can process:

GET https://api.iatistandard.org/datastore/activity/select
    ?q=recipient_country_code:UG AND sector_code:140*
    &fq=activity_date_start_actual_f:[2023-01-01T00:00:00Z TO *]
    &fl=iati_identifier,title_narrative,budget_value,activity_status_code
    &rows=100

Humanitarian Data Exchange

The Humanitarian Data Exchange serves as the primary platform for sharing humanitarian datasets. HDX hosts datasets contributed by UN agencies, NGOs, governments, and research institutions, organised by geographic location and topic. The platform provides both web interface for discovery and APIs for programmatic access.

HDX organises content through datasets, resources, and organisations. A dataset represents a logical grouping of related data files, such as “Uganda Administrative Boundaries” or “3W Operational Presence Northern Uganda”. Each dataset contains one or more resources, which are the actual downloadable files in formats like CSV, Excel, GeoJSON, or shapefiles. Organisations are the entities responsible for maintaining datasets.

The HDX API follows CKAN conventions, the open-source data portal software underlying the platform. Key endpoints enable searching datasets, retrieving metadata, and downloading resources:

import requests

# Search for datasets
response = requests.get(
    'https://data.humdata.org/api/3/action/package_search',
    params={
        'q': 'title:3W',
        'fq': 'groups:uga',  # Uganda
        'rows': 10
    }
)
datasets = response.json()['result']['results']

# Get dataset metadata
dataset_id = 'uganda-operational-presence-3w'
response = requests.get(
    'https://data.humdata.org/api/3/action/package_show',
    params={'id': dataset_id}
)
dataset = response.json()['result']

# Download resource
resource_url = dataset['resources'][0]['url']
data = requests.get(resource_url).content

HDX implements a freshness classification that indicates how recently datasets were updated. Fresh datasets have been updated within the expected frequency. Due datasets are approaching their expected update. Overdue datasets have not been updated within their expected frequency. Archived datasets are no longer being maintained. This classification helps consumers assess data currency.
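The idea behind the classification can be sketched as a simple function; the thresholds below are illustrative assumptions, since HDX applies its own rules based on the expected update frequency recorded in dataset metadata:

```python
from datetime import datetime, timedelta

def freshness(last_modified, expected_days, today=None):
    """Classify a dataset as fresh, due, or overdue given its last
    update and expected update frequency in days (illustrative
    thresholds; HDX's actual rules differ)."""
    today = today or datetime.utcnow()
    age = today - last_modified
    if age <= timedelta(days=expected_days):
        return 'fresh'
    if age <= timedelta(days=expected_days * 2):
        return 'due'
    return 'overdue'
```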

Organisations publishing to HDX must consider update automation. Manual uploads work for infrequently changing datasets but become burdensome for operational data requiring regular updates. The HDX API supports programmatic updates that enable automated publishing workflows:

import requests
from datetime import datetime

HDX_API_KEY = 'your-api-key'
DATASET_ID = 'org-operational-presence-3w'

# Update dataset metadata
requests.post(
    'https://data.humdata.org/api/3/action/package_patch',
    headers={'Authorization': HDX_API_KEY},
    json={
        'id': DATASET_ID,
        'dataset_date': datetime.now().strftime('[%Y-%m-%d TO %Y-%m-%d]')
    }
)

# Upload new resource version
with open('3w_data_current.csv', 'rb') as f:
    requests.post(
        'https://data.humdata.org/api/3/action/resource_create',
        headers={'Authorization': HDX_API_KEY},
        data={
            'package_id': DATASET_ID,
            'name': f'3W Data {datetime.now().strftime("%Y-%m-%d")}',
            'format': 'CSV',
            'description': 'Weekly operational presence update'
        },
        files={'upload': f}
    )

Common Operational Datasets

Common Operational Datasets provide authoritative reference data for humanitarian operations. CODs establish shared foundations that enable data from different organisations to be combined and compared. The core COD types address administrative boundaries, population statistics, and humanitarian infrastructure.

Administrative boundary CODs define the geographic units used for coordination, reporting, and analysis. These datasets include polygon geometries for each administrative level and the P-codes that uniquely identify each unit. P-codes follow a hierarchical structure where each level incorporates its parent’s code. Uganda’s administrative structure illustrates this hierarchy: “UG” identifies the country, “UG301” identifies Gulu District, and “UG30101” identifies a sub-county within Gulu.

+--------------------------------------------------------------------------+
|                         P-CODE HIERARCHY EXAMPLE                         |
+--------------------------------------------------------------------------+
|                                                                          |
|  Country:     UG                                                         |
|               |                                                          |
|  Region:      UG3 (Northern)                                             |
|               |                                                          |
|  District:    UG301 (Gulu)      UG302 (Kitgum)     UG303 (Pader)         |
|               |                                                          |
|  Sub-county:  UG30101          UG30102            UG30103                |
|               |                                                          |
|  Parish:      UG3010101        UG3010102          UG3010103              |
|                                                                          |
+--------------------------------------------------------------------------+
|  NAMING CONVENTION:                                                      |
|  [Country ISO2][Region#][District##][Sub-county##][Parish##]             |
|                                                                          |
|  Maximum depth varies by country based on administrative structure       |
+--------------------------------------------------------------------------+

Figure 3: P-code hierarchical structure showing administrative level encoding
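Because each P-code embeds its ancestors as prefixes, hierarchy navigation reduces to string slicing. The level widths below follow the convention shown for Uganda and are assumptions for illustration; real widths vary by country:

```python
# Cumulative P-code lengths per level, following the Uganda convention
# above (assumed for illustration; real widths vary by country)
LEVEL_LENGTHS = [2, 3, 5, 7, 9]  # country, region, district, sub-county, parish

def pcode_ancestry(pcode):
    """Return the chain of P-codes from country level down to this unit."""
    return [pcode[:n] for n in LEVEL_LENGTHS if n <= len(pcode)]

def pcode_parent(pcode):
    """Return the immediate parent P-code, or None at country level."""
    chain = pcode_ancestry(pcode)
    return chain[-2] if len(chain) > 1 else None
```

This prefix property is what lets data reported at parish level be aggregated to district or region level without a separate lookup table.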

Population CODs provide demographic baseline data disaggregated by administrative unit. These datasets derive from census data, population projections, or estimation methodologies, and include metadata about the source and methodology. Population data enables calculation of coverage ratios, planning figures, and needs estimates.

Humanitarian infrastructure CODs identify facilities relevant to humanitarian operations, including health facilities, schools, water points, and markets. Each facility record includes location coordinates, facility type, operational status, and the administrative unit containing the facility.

COD data flows into organisational systems through integration points that vary by system architecture:

+--------------------------------------------------------------------------+
|                              COD DATA FLOW                               |
+--------------------------------------------------------------------------+
|                                                                          |
|                 +------------------+                                     |
|                 |   HDX Platform   |                                     |
|                 |                  |                                     |
|                 | - Admin COD      |                                     |
|                 | - Population COD |                                     |
|                 | - Facilities COD |                                     |
|                 +--------+---------+                                     |
|                          |                                               |
|                          | Download / API                                |
|                          v                                               |
|                 +--------+---------+                                     |
|                 |  COD Repository  |  (Local cache with version control) |
|                 |                  |                                     |
|                 |  /cod/           |                                     |
|                 |   admin_v2.3/    |                                     |
|                 |   population_v1/ |                                     |
|                 |   health_fac_v4/ |                                     |
|                 +--------+---------+                                     |
|                          |                                               |
|              +-----------+-----------+                                   |
|              |           |           |                                   |
|              v           v           v                                   |
|           +------+    +------+    +------+                               |
|           | M&E  |    | Case |    | GIS  |                               |
|           | Sys  |    | Mgmt |    | Sys  |                               |
|           +------+    +------+    +------+                               |
|                                                                          |
|  Integration methods:                                                    |
|  - Direct database import (PostGIS, SQL Server)                          |
|  - API service (internal REST endpoint)                                  |
|  - File system mount (read-only share)                                   |
|  - Lookup table synchronisation                                          |
|                                                                          |
+--------------------------------------------------------------------------+

Figure 4: COD data flow from HDX through organisational systems

Maintaining COD currency requires monitoring HDX for updates and propagating changes through dependent systems. Administrative boundary changes create particular complexity because historical data linked to old boundaries must be handled appropriately. Common approaches include maintaining boundary version history, storing both old and new P-codes during transition periods, and documenting boundary change mappings.

Interoperability patterns

Humanitarian data interoperability employs several architectural patterns depending on the use case, participant capabilities, and data sensitivity requirements.

The publish-subscribe pattern underlies platform-mediated sharing through HDX and similar repositories. Data producers publish datasets to a central platform where consumers discover and retrieve data independently. This pattern scales well because producers and consumers need not coordinate directly, but introduces latency between publication and consumption. Data currency depends on publisher update discipline.

+--------------------------------------------------------------------------+
|                        PUBLISH-SUBSCRIBE PATTERN                         |
+--------------------------------------------------------------------------+
|                                                                          |
|   Publishers                    Platform                    Subscribers  |
|                                                                          |
|   +--------+                  +----------+                  +--------+   |
|   | Org A  |---(publish)----->|          |<---(subscribe)---| Org X  |   |
|   +--------+                  |   HDX    |                  +--------+   |
|                               |   IATI   |                               |
|   +--------+                  | Registry |                  +--------+   |
|   | Org B  |---(publish)----->|          |<---(subscribe)---| Org Y  |   |
|   +--------+                  +----------+                  +--------+   |
|                                    ^                                     |
|   +--------+                       |                        +--------+   |
|   | Org C  |---(publish)-----------+                        | Org Z  |   |
|   +--------+                                                +--------+   |
|                                                                          |
|   Characteristics:                                                       |
|   - Decoupled: publishers and subscribers independent                    |
|   - Scalable: many-to-many without coordination                          |
|   - Asynchronous: latency between publish and consume                    |
|   - Platform-dependent: requires shared intermediary                     |
|                                                                          |
+--------------------------------------------------------------------------+

The bilateral exchange pattern supports direct data sharing between specific partners. Two organisations establish an agreement, define a data format, and implement a transfer mechanism. This pattern provides control over exactly what data is shared with whom, and enables sensitive data exchange where platform publication is inappropriate. However, bilateral exchanges multiply as partnership networks grow: n organisations exchanging data pairwise need n(n-1)/2 agreements, so ten organisations require 45 separate exchange agreements.

The federated query pattern enables querying across distributed data sources without centralising data. Each participating organisation exposes a query interface following a common protocol. A coordination layer routes queries to relevant sources and aggregates results. This pattern preserves data sovereignty because each organisation retains control of their data while participating in collective analysis. Implementation complexity is high, requiring standardised query interfaces and reliable network connectivity between participants.

The data mesh pattern extends federation concepts by treating data as a product with clear ownership, defined interfaces, and quality guarantees. Each domain within an organisation (or within a coordination structure) maintains responsibility for their data products. A discovery layer enables finding available data products. Standardised interfaces enable consumption. This pattern suits large-scale coordination scenarios with mature data capabilities across participants.

For most humanitarian organisations, the publish-subscribe pattern through HDX covers external interoperability needs, while bilateral exchanges handle sensitive partner data sharing. Federated and mesh patterns apply to major coordination bodies managing information across many participants.

Data exchange formats

Humanitarian data exchange uses standard formats chosen for balance between human readability, machine processability, and tool compatibility.

CSV (Comma-Separated Values) remains the most common format for tabular humanitarian data. CSV files open in spreadsheet software, process easily in scripting languages, and transfer efficiently. HXL tagging preserves CSV compatibility while adding machine-readable semantics. When producing CSV for exchange, UTF-8 encoding with BOM ensures correct character handling across platforms. Date formats should follow ISO 8601 (YYYY-MM-DD) to avoid ambiguity.
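These recommendations can be sketched in Python: the `utf-8-sig` codec writes the BOM, and date objects are normalised to ISO 8601 on the way out (the file name and columns are illustrative):

```python
import csv
from datetime import date

def write_exchange_csv(path, header, tags, rows):
    """Write an HXL-tagged CSV using UTF-8 with BOM and ISO 8601 dates."""
    # 'utf-8-sig' prepends the BOM so spreadsheet tools detect UTF-8
    with open(path, 'w', newline='', encoding='utf-8-sig') as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerow(tags)
        for row in rows:
            # Normalise date objects to ISO 8601 strings on the way out
            writer.writerow([v.isoformat() if isinstance(v, date) else v
                             for v in row])

write_exchange_csv('distributions.csv',
                   ['District', 'Reached', 'Date'],
                   ['#adm2+name', '#affected+reached', '#date+reported'],
                   [['Gulu', 1250, date(2024, 3, 15)]])
```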

JSON (JavaScript Object Notation) suits hierarchical data and API responses. JSON handles nested structures naturally, making it appropriate for complex records like IATI activities or geospatial features. Most programming languages include native JSON parsing. JSON files are less accessible to non-technical users than CSV but integrate more easily with modern web applications.

XML (Extensible Markup Language) provides the formal structure for standards like IATI where schema validation matters. XML’s verbosity makes it less efficient for large datasets but supports complex validation rules and transformation through XSLT. IATI specifically requires XML for publishing.

GeoJSON extends JSON for geographic data, representing points, lines, polygons, and feature collections with associated properties. GeoJSON integrates directly with web mapping libraries and GIS software. Administrative boundary CODs distribute in GeoJSON alongside shapefiles.

Shapefiles remain standard for GIS data despite technical limitations. The format actually comprises multiple files (.shp, .dbf, .shx, .prj) that must travel together. Shapefiles impose attribute name length limits (10 characters) and lack native support for Unicode. Despite these constraints, widespread GIS software support maintains shapefile prevalence for boundary and facility data.

Exchange format selection depends on use case:

+----------------------------------+------------------------+----------------------------------------------+
| Use case                         | Recommended format     | Rationale                                    |
+----------------------------------+------------------------+----------------------------------------------+
| Tabular operational data         | CSV with HXL           | Spreadsheet compatibility, tagging support   |
| (3W, distributions)              |                        |                                              |
| Aid activity publishing          | IATI XML               | Standard requirement                         |
| API data transfer                | JSON                   | Native parsing, nested structure support     |
| Geographic boundaries            | GeoJSON + Shapefile    | Web mapping and GIS compatibility            |
| Geographic points (facilities)   | CSV with coordinates   | Balance of accessibility and precision       |
|                                  | or GeoJSON             |                                              |
+----------------------------------+------------------------+----------------------------------------------+

Identifier standards

Consistent identification enables linking data across sources. Humanitarian operations use several identifier standards for different entity types.

ISO 3166 country codes provide standardised country identification. ISO 3166-1 alpha-2 codes (UG, KE, ET) appear most frequently in humanitarian data. ISO 3166-1 alpha-3 codes (UGA, KEN, ETH) provide additional clarity. ISO 3166-2 codes extend to subdivisions, though humanitarian operations typically use P-codes rather than ISO 3166-2 for subnational identification.

P-codes (place codes) uniquely identify administrative units within humanitarian operations. P-code assignment follows hierarchical principles where each level incorporates parent codes. P-codes come from administrative boundary CODs and must not be invented locally. When operations span areas without established P-codes, coordination with OCHA establishes authoritative codes.

Organisation identifiers use several schemes. The IATI organisation identifier scheme combines a registration agency code with the organisation’s registration number. “GB-CHC-123456” identifies a UK registered charity. “XM-DAC-12-1” identifies a DAC-member bilateral donor. These identifiers enable unambiguous organisation reference across IATI data.

Sector codes classify activities by type. The DAC sector codes (vocabulary 1 in IATI) provide standardised classification, with codes like 14030 for basic drinking water supply or 72010 for material relief assistance. OCHA humanitarian sector codes (vocabulary 10) align with cluster categories: WASH, Health, Protection, Education, Food Security, and others.

+--------------------------------------------------------------------------+
|                        IDENTIFIER MAPPING EXAMPLE                        |
+--------------------------------------------------------------------------+
|                                                                          |
|  Organisation activity in three systems:                                 |
|                                                                          |
|  INTERNAL SYSTEM:                                                        |
|    Project ID:  PRJ-2024-0042                                            |
|    Location:    "Gulu District, Northern Uganda"                         |
|    Sector:      "Water and Sanitation"                                   |
|                                                                          |
|  IATI PUBLICATION:                                                       |
|    iati-identifier:         GB-CHC-123456-PRJ-2024-0042                  |
|    recipient-country:       UG (ISO 3166-1 alpha-2)                      |
|    location/administrative: UG301 (P-code)                               |
|    sector:                  14030 (DAC vocabulary)                       |
|                                                                          |
|  3W SUBMISSION:                                                          |
|    #activity+code:  PRJ-2024-0042                                        |
|    #country+code:   UG                                                   |
|    #adm1+code:      UG3                                                  |
|    #adm2+code:      UG301                                                |
|    #sector+cluster: WASH                                                 |
|                                                                          |
|  The internal ID remains stable; external identifiers follow standards   |
|                                                                          |
+--------------------------------------------------------------------------+

Implementation considerations

For organisations with limited IT capacity

Organisations operating without dedicated data staff can implement interoperability incrementally. The first priority is consuming standard reference data. Downloading administrative boundary CODs and using P-codes in programme data enables future interoperability even before active publishing begins. Store P-codes alongside location names rather than relying solely on names that vary in spelling and language.

HXL tagging integrates with existing spreadsheet workflows. Adding a tag row to Excel templates costs nothing and enables future automated processing. KoboToolbox forms can include HXL tags in form definitions, producing tagged exports without post-processing.

HDX consumption requires only basic HTTP requests or manual downloads. Start by identifying the CODs for your operating countries and establishing a process to check for updates quarterly.

IATI publishing can begin manually for organisations with small activity portfolios. The IATI publishing tool at iatistandard.org provides a web interface for creating activity records without technical infrastructure. Convert to automated publishing when activity volumes justify the implementation investment.

For organisations with established data functions

Organisations with data teams should embed interoperability standards in system architecture rather than treating them as export transformations. Design databases with P-code foreign keys to administrative tables sourced from CODs. Include HXL tag mappings in data dictionary documentation. Build IATI generation into grants management system workflows.

Automated COD updates ensure reference data currency. A scheduled process checks HDX for COD version changes and triggers update workflows when new versions appear. Version control for COD imports maintains history and enables rollback if issues emerge.
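One way to sketch such a check against the CKAN API, assuming resource `last_modified` timestamps in the dataset metadata and a hypothetical local state file:

```python
import json
from pathlib import Path

import requests

STATE_FILE = Path('cod_state.json')  # hypothetical cache of seen versions

def newer_resources(dataset, seen):
    """Return resources whose last_modified timestamp is newer than the
    cached value (ISO timestamps compare correctly as strings)."""
    return [r for r in dataset.get('resources', [])
            if r.get('last_modified', '') > seen.get(r['id'], '')]

def check_cod_updates(dataset_id):
    """Fetch dataset metadata from HDX and report updated resources."""
    resp = requests.get(
        'https://data.humdata.org/api/3/action/package_show',
        params={'id': dataset_id})
    resp.raise_for_status()
    dataset = resp.json()['result']
    seen = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    updated = newer_resources(dataset, seen)
    if updated:
        # Trigger the import workflow here, then record the new versions
        seen.update({r['id']: r['last_modified'] for r in updated})
        STATE_FILE.write_text(json.dumps(seen))
    return updated
```

Run on a schedule, the state file gives the comparison point between runs; a database table would serve the same purpose in a larger deployment.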

API-based integrations with HDX and IATI registries enable real-time interoperability for coordination data. Publishing operational presence (3W) data weekly or daily through automated pipelines keeps coordination platforms current without manual intervention.

Investment in identifier mapping infrastructure pays dividends across integration efforts. A master identifier mapping service translates between internal codes and external standards (P-codes, ISO codes, DAC sectors), providing consistent translation across all systems requiring external references.
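A minimal sketch of such a mapping service, with hypothetical hard-coded lookup tables standing in for data that would in practice be loaded from CODs and published codelists:

```python
# Hypothetical lookup tables; real ones load from CODs and codelists
ISO_COUNTRY = {'Uganda': 'UG'}
ADMIN_PCODES = {('Uganda', 'Gulu'): 'UG301'}
SECTOR_DAC = {'Water and Sanitation': '14030'}
SECTOR_CLUSTER = {'Water and Sanitation': 'WASH'}

class IdentifierMapper:
    """Translate internal names into external standard identifiers."""

    def country(self, name):
        return ISO_COUNTRY[name]

    def pcode(self, country, admin_name):
        return ADMIN_PCODES[(country, admin_name)]

    def sector(self, name, vocabulary='dac'):
        table = SECTOR_DAC if vocabulary == 'dac' else SECTOR_CLUSTER
        return table[name]

mapper = IdentifierMapper()
```

Centralising the lookups means an IATI export, a 3W submission, and an internal report all resolve the same internal name to the same external code.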

Common implementation challenges

P-code changes occur when administrative boundaries are redrawn. When a district splits into two new districts, existing data linked to the old P-code needs handling. Options include maintaining the old P-code with a deprecated flag, mapping old to new codes in a crosswalk table, or accepting that historical data uses historical boundaries. Document the approach and apply it consistently.
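The crosswalk-table option can be sketched in a few lines, with a hypothetical successor mapping for a split district:

```python
# Hypothetical crosswalk: an old district code maps to its successors
CROSSWALK = {'UG301': ['UG311', 'UG312']}  # old code -> current codes

def current_pcodes(pcode, crosswalk=CROSSWALK):
    """Resolve a possibly superseded P-code to its current code(s);
    codes absent from the crosswalk are returned unchanged."""
    return crosswalk.get(pcode, [pcode])
```

Note that a split is one-to-many: historical figures linked to the old code cannot be allocated between successors without additional information, which is why documenting the chosen approach matters.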

HXL vocabulary gaps appear when data concepts lack standard tags. The HXL working group maintains the vocabulary and accepts proposals for new tags through a governance process. In the interim, organisations can use the +v_ attribute prefix for custom vocabulary extensions, though these provide limited interoperability benefit.

IATI quality requirements can challenge organisations with informal data management. IATI validators check for completeness, valid code usage, and schema compliance. Failed validation prevents registry publication. Building validation into publishing workflows catches issues before submission.

Coordination platform requirements vary between contexts. Different humanitarian coordination structures may require data submissions in specific formats or through specific systems. Clarify requirements with cluster or sector leads early and design export capabilities accordingly.

Interoperability governance

Maintaining interoperability requires ongoing governance addressing standard adoption, data quality, and organisational responsibilities.

Standard adoption decisions determine which interoperability standards an organisation commits to implementing. Adoption creates obligations: publishing IATI data requires maintaining publication currency; using P-codes requires tracking COD updates. Governance bodies should explicitly approve standard adoption with understanding of ongoing requirements.

Data quality for shared data may require higher standards than internal data. External consumers lack context that internal users possess. Published datasets should include metadata about source, methodology, limitations, and contact information. Quality monitoring should track published data separately from internal data.

Responsibility assignment clarifies who maintains interoperability infrastructure, monitors external standard changes, and responds to data quality issues raised by external consumers. In small organisations, these responsibilities may fall to whoever manages programme data. Larger organisations benefit from explicit assignment, potentially to a data team or information management function.

External engagement with humanitarian information management coordination keeps organisations informed about standard evolution and best practices. Participation in cluster information management working groups, HXL working group discussions, and IATI community activities builds capability and influences standard development.
