
Application Integration

Application integration encompasses the patterns, technologies, and architectural decisions that enable discrete software systems to exchange data and coordinate actions. A mission-driven organisation running separate systems for finance, grants management, programme delivery, and constituent relationships faces a fundamental challenge: these systems hold interdependent data that must remain consistent, yet each system operates with its own data model, interface conventions, and operational rhythms. Integration bridges these gaps through standardised exchange mechanisms that transform, route, and synchronise information across application boundaries.

The complexity of integration scales non-linearly with the number of connected systems. Two applications require one integration. Three applications require three integrations if fully connected. Ten applications require forty-five. This combinatorial growth explains why integration architecture matters more than individual integration implementations. Architectural decisions made early constrain or enable future connectivity, determine operational overhead, and establish the organisation’s capacity to adopt new systems without destabilising existing data flows.
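The combinatorial growth described above is the pairwise-connection count n(n−1)/2. A minimal sketch (function name illustrative):

```python
def point_to_point_integrations(n: int) -> int:
    """Integrations needed to fully connect n systems pairwise: n(n-1)/2."""
    return n * (n - 1) // 2

# 2 systems -> 1, 3 -> 3, 10 -> 45
for n in (2, 3, 10):
    print(n, point_to_point_integrations(n))
```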

Integration
The connection between two or more software systems enabling data exchange or coordinated action. Each integration has a source, target, transformation logic, and trigger mechanism.
API (Application Programming Interface)
A defined contract specifying how software components interact. REST APIs use HTTP methods (GET, POST, PUT, DELETE) against resource endpoints. SOAP APIs use XML messages against service endpoints.
Middleware
Software that sits between applications to mediate interactions. Middleware handles message routing, transformation, protocol translation, and often provides monitoring and error handling.
iPaaS (Integration Platform as a Service)
Cloud-hosted middleware providing integration capabilities without infrastructure management. Examples include Workato, Tray.io, and Microsoft Power Automate.
ESB (Enterprise Service Bus)
A middleware architecture pattern using a central bus for message routing between services. The bus handles transformation, routing rules, and protocol mediation.
Webhook
An HTTP callback triggered by an event in a source system. When the event occurs, the source sends an HTTP POST to a configured URL on the target system.
ETL (Extract, Transform, Load)
A batch integration pattern that extracts data from sources, transforms it to match target requirements, and loads it into destination systems. Typically scheduled rather than real-time.
Idempotency
The property of an operation producing the same result regardless of how many times it executes. Idempotent integrations can safely retry failed operations without creating duplicate records.

Integration Patterns

Integration patterns describe the structural approaches to connecting systems. Each pattern embodies trade-offs between complexity, flexibility, operational overhead, and failure characteristics. Understanding these patterns enables informed architectural decisions rather than ad-hoc connection building.

Point-to-Point Integration

Point-to-point integration connects two systems directly without intermediary infrastructure. System A calls System B’s API, transforms the response, and processes the result. This pattern requires the least infrastructure and provides the fastest path to initial connectivity. A finance system pushing approved payments directly to a banking API exemplifies point-to-point integration.

The simplicity of point-to-point integration degrades as connections multiply. Each integration carries its own transformation logic, error handling, authentication management, and monitoring. With five systems fully interconnected via point-to-point, the organisation maintains ten separate integrations. At ten systems, forty-five integrations. Each integration represents operational burden: credentials to rotate, transformations to update when either system changes, failures to monitor and resolve.

+-------------+                              +-------------+
|             |                              |             |
|  Finance    +----------------------------->+  Banking    |
|  System     |       Direct API Call        |  API        |
|             |                              |             |
+------+------+                              +-------------+
       |
       |
       v
+------+------+
|             |
|  Grants     |<--+
|  System     |   |
|             |   |
+------+------+   |
       |          |
       |          |
       v          |
+------+------+   |
|             |   |
|  Programme  +---+
|  System     |
|             |
+-------------+
Figure 1: Point-to-point integration creates direct connections between each system pair

Point-to-point integration suits scenarios with limited system count (under five primary business systems), stable requirements unlikely to require frequent integration changes, and sufficient development capacity to maintain individual integrations. Organisations lacking dedicated integration specialists find point-to-point initially accessible but increasingly burdensome as systems accumulate.

Hub-and-Spoke Integration

Hub-and-spoke integration centralises all data exchange through a middleware hub. Individual systems connect only to the hub, which handles routing, transformation, and delivery to target systems. Adding a new system requires one connection to the hub rather than connections to all existing systems.

The hub provides centralised visibility into all data flows. Administrators monitor integration health, message volumes, and error rates from a single interface. Transformation logic lives in the hub, not scattered across individual systems. When a system’s data model changes, updates occur in the hub’s transformation layer rather than in every connected system.

                           +------------------+
                           |                  |
                           |   INTEGRATION    |
                           |       HUB        |
                           |                  |
                           |  - Routing       |
                           |  - Transform     |
                           |  - Monitor       |
                           |                  |
                           +--------+---------+
                                    |
      +--------------+--------------+--------------+--------------+
      |              |              |              |              |
      v              v              v              v              v
+-----+------+ +-----+------+ +-----+------+ +-----+------+ +-----+------+
|            | |            | |            | |            | |            |
|  Finance   | |  Grants    | |   CRM      | | Programme  | |    HR      |
|  System    | |  System    | |  System    | |  System    | |  System    |
|            | |            | |            | |            | |            |
+------------+ +------------+ +------------+ +------------+ +------------+
Figure 2: Hub-and-spoke architecture routes all integrations through central middleware

The hub introduces a single point of failure. Hub unavailability halts all integrations simultaneously. High-availability deployment of the hub (clustered instances, geographic redundancy) mitigates this risk but adds infrastructure complexity. Hub-and-spoke also creates a bottleneck: all messages traverse the hub regardless of volume, requiring capacity planning as integration traffic grows.

Operational overhead concentrates in the hub. The organisation needs staff capable of maintaining middleware infrastructure (for self-hosted) or managing the iPaaS platform (for cloud-hosted). This centralisation benefits organisations with dedicated integration capacity; it burdens organisations without such capacity by creating a critical system requiring specialist knowledge.

Event-Driven Integration

Event-driven integration decouples systems through asynchronous messaging. When something occurs in a source system (a grant is approved, a beneficiary is registered, a payment fails), the system publishes an event to a message broker. Interested systems subscribe to relevant event types and process events as they arrive. Publishers and subscribers operate independently; neither knows about the other’s existence.

+----------------+     +------------------+     +----------------+
|                |     |                  |     |                |
|  Finance       +---->+     MESSAGE      +---->+  Grants        |
|  System        |     |      BROKER      |     |  System        |
|                |     |                  |     |                |
|  Publishes:    |     | - payment.made   |     |  Subscribes:   |
|  payment.made  |     | - grant.approved |     |  payment.made  |
|                |     | - user.created   |     |                |
+----------------+     |                  |     +----------------+
                       |                  |
+----------------+     |                  |     +----------------+
|                |     |                  |     |                |
|  Grants        +---->+                  +---->+  Programme     |
|  System        |     |                  |     |  System        |
|                |     |                  |     |                |
|  Publishes:    |     |                  |     |  Subscribes:   |
|  grant.approved|     |                  |     |  grant.approved|
|                |     +------------------+     |                |
+----------------+                              +----------------+
Figure 3: Event-driven architecture decouples publishers from subscribers through message broker

Event-driven integration excels at temporal decoupling. A source system publishes events without waiting for subscribers to process them. If the grants system is offline for maintenance, finance continues publishing payment events. The message broker queues events until the grants system reconnects and processes the backlog. This resilience suits organisations with systems operating across unreliable network connections or different maintenance windows.

The pattern introduces eventual consistency. After a finance system publishes a payment event, some delay exists before the grants system reflects that payment. For most business processes, sub-minute delays are acceptable. For processes requiring immediate consistency (verifying payment before releasing goods), event-driven integration requires additional synchronisation mechanisms.

Event-driven architecture requires investment in event schema design. Events must carry sufficient information for subscribers to act without calling back to the source system. Poor schema design leads to “chatty” integrations where subscribers make additional API calls to supplement sparse events, negating the decoupling benefits.
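The decoupling and queuing behaviour described above can be sketched with a toy in-memory broker; a production deployment would use Kafka, RabbitMQ, or similar, and all names here are illustrative:

```python
from collections import defaultdict, deque

class MessageBroker:
    """Toy broker: queues events per subscriber so an offline consumer can catch up."""
    def __init__(self):
        self.queues = defaultdict(deque)        # subscriber -> pending events
        self.subscriptions = defaultdict(list)  # event type -> subscriber names

    def subscribe(self, subscriber: str, event_type: str) -> None:
        self.subscriptions[event_type].append(subscriber)

    def publish(self, event_type: str, payload: dict) -> None:
        # The publisher neither knows about nor waits for subscribers.
        for subscriber in self.subscriptions[event_type]:
            self.queues[subscriber].append((event_type, payload))

    def drain(self, subscriber: str):
        """Deliver queued events, e.g. when a system reconnects after maintenance."""
        while self.queues[subscriber]:
            yield self.queues[subscriber].popleft()

broker = MessageBroker()
broker.subscribe("grants", "payment.made")
# Finance publishes while the grants system is offline; the event is queued.
broker.publish("payment.made", {"payment_id": "PAY-001", "amount": 5000})
backlog = list(broker.drain("grants"))  # grants reconnects and processes the backlog
```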

Pattern Selection

Pattern selection depends on system count, change frequency, team capacity, and consistency requirements. The following matrix provides selection guidance based on organisational context:

Context                                  | Recommended Pattern              | Rationale
-----------------------------------------|----------------------------------|-------------------------------------------
Under 5 systems, stable requirements     | Point-to-point                   | Minimal infrastructure, direct debugging
5-15 systems, mixed change frequency     | Hub-and-spoke                    | Centralised management justifies overhead
Over 15 systems or high change rate      | Event-driven                     | Decoupling essential at scale
Real-time consistency required           | Point-to-point or hub-and-spoke  | Events introduce latency
Unreliable connectivity between systems  | Event-driven                     | Queuing handles intermittent availability
No dedicated integration staff           | iPaaS hub-and-spoke              | Managed infrastructure reduces burden

Hybrid approaches combine patterns. Core financial integrations might use point-to-point for their simplicity and auditability, while programme systems use event-driven patterns for flexibility. The architectural principle is intentional selection rather than pattern accumulation through tactical decisions.

Integration Architecture

Integration architecture describes how integration components combine into a functioning whole. Architecture encompasses the integration platform, connectivity to source and target systems, transformation layer, and operational infrastructure for monitoring and error handling.

Connectivity Layers

Systems expose integration capabilities through varied interfaces. Modern SaaS applications typically provide REST APIs with JSON payloads. Legacy on-premises systems might offer database-level access, file-based exchange, or SOAP web services. Some systems lack APIs entirely, requiring screen scraping or manual export/import.

+-----------------------------------------------------------------+
|                    INTEGRATION ARCHITECTURE                     |
+-----------------------------------------------------------------+
|                                                                 |
|  +---------------------------+  +---------------------------+   |
|  |     API CONNECTIVITY      |  |   NON-API CONNECTIVITY    |   |
|  |                           |  |                           |   |
|  |  REST API                 |  |  Database                 |   |
|  |  +--------+  +--------+   |  |  +--------+               |   |
|  |  | OAuth  |  |  API   |   |  |  | JDBC/  |  Direct       |   |
|  |  | Token  |->| Call   |   |  |  | ODBC   |  Query        |   |
|  |  +--------+  +--------+   |  |  +--------+               |   |
|  |                           |  |                           |   |
|  |  SOAP API                 |  |  File Exchange            |   |
|  |  +--------+  +--------+   |  |  +--------+  +--------+   |   |
|  |  | WS-Sec |  | SOAP   |   |  |  | SFTP   |  | File   |   |   |
|  |  | Token  |->| Call   |   |  |  | Pick   |->| Parse  |   |   |
|  |  +--------+  +--------+   |  |  +--------+  +--------+   |   |
|  |                           |  |                           |   |
|  +---------------------------+  +---------------------------+   |
|                                                                 |
|  +----------------------------------------------------------+   |
|  |                   TRANSFORMATION LAYER                    |   |
|  |                                                           |   |
|  |  Source Format      Mapping Rules       Target Format    |   |
|  |  +-----------+      +-----------+       +-----------+    |   |
|  |  | JSON/XML  |----->| Field Map |------>| JSON/XML  |    |   |
|  |  | Schema    |      | Transform |       | Schema    |    |   |
|  |  +-----------+      | Validate  |       +-----------+    |   |
|  |                     +-----------+                        |   |
|  +----------------------------------------------------------+   |
|                                                                 |
|  +----------------------------------------------------------+   |
|  |                     OPERATIONAL LAYER                     |   |
|  |                                                           |   |
|  |  +-------------+   +-------------+   +-------------+     |   |
|  |  |   Logging   |   | Monitoring  |   |  Alerting   |     |   |
|  |  +-------------+   +-------------+   +-------------+     |   |
|  |                                                           |  |
|  +----------------------------------------------------------+   |
+-----------------------------------------------------------------+
Figure 4: Integration architecture layers from connectivity through operations

The connectivity layer abstracts interface differences. An integration platform presents a uniform internal interface regardless of whether the underlying system uses REST, SOAP, database, or file exchange. This abstraction enables the transformation layer to operate consistently across connectivity types.

Authentication mechanisms vary by connectivity type. REST APIs commonly use OAuth 2.0 bearer tokens or API keys. SOAP services use WS-Security tokens or basic authentication. Database connections use service accounts with connection strings. File exchange uses SFTP with SSH keys or username/password. The integration platform must manage credentials for all connectivity types, rotating them according to security policy.

Transformation Logic

Transformation converts data from source format to target format. Simple transformations map fields directly: source field first_name becomes target field given_name. Complex transformations derive values, aggregate data, apply business rules, or restructure hierarchies.

Consider a grant management system that must update a finance system when grant budgets change. The grants system stores budgets by fiscal year with separate fields for each budget category. The finance system expects a single budget record per cost centre with category breakdowns in child records.

Source format (grants system):

{
  "grant_id": "GRT-2024-001",
  "fy2024_personnel": 150000,
  "fy2024_travel": 25000,
  "fy2024_supplies": 15000,
  "fy2025_personnel": 175000,
  "fy2025_travel": 30000,
  "fy2025_supplies": 18000
}

Target format (finance system):

{
  "cost_centre": "CC-GRT-2024-001",
  "fiscal_year": "2024",
  "total_budget": 190000,
  "line_items": [
    {"category": "10-PERS", "amount": 150000},
    {"category": "20-TRVL", "amount": 25000},
    {"category": "30-SUPP", "amount": 15000}
  ]
}

The transformation must restructure flat fields into a parent-child hierarchy, calculate derived totals, map category names to finance system codes, and emit one target record per fiscal year represented in the source. This transformation logic encodes business rules: the category code mapping, the cost centre naming convention, and the fiscal year extraction.
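A sketch of this transformation in Python, using the category codes and cost centre convention from the example above (the field-name parsing is an assumption about the source schema):

```python
# Mapping of grants-system budget categories to finance-system codes (from the example).
CATEGORY_CODES = {"personnel": "10-PERS", "travel": "20-TRVL", "supplies": "30-SUPP"}

def transform_budget(grant: dict) -> list[dict]:
    """Restructure flat fyYYYY_category fields into one finance record per fiscal year."""
    by_year: dict[str, list[dict]] = {}
    for field, amount in grant.items():
        if not field.startswith("fy"):
            continue
        year, category = field[2:6], field[7:]  # "fy2024_travel" -> "2024", "travel"
        by_year.setdefault(year, []).append(
            {"category": CATEGORY_CODES[category], "amount": amount}
        )
    return [
        {
            "cost_centre": f"CC-{grant['grant_id']}",  # naming convention from the example
            "fiscal_year": year,
            "total_budget": sum(item["amount"] for item in items),  # derived total
            "line_items": items,
        }
        for year, items in sorted(by_year.items())
    ]
```

Run against the source payload above, this yields two finance records: fiscal year 2024 totalling 190,000 and fiscal year 2025 totalling 223,000.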

Transformation complexity determines integration maintenance burden. Simple field mappings rarely require updates. Complex transformations with embedded business rules require updates whenever those rules change in either source or target system. Organisations benefit from documenting transformation logic in business terms, not just technical specifications, so non-technical staff can validate rules during system changes.

Data Quality and Validation

Integration transforms data but cannot create quality that does not exist in sources. Integrations should validate incoming data against expected schemas and business rules before transformation. Invalid data should fail early with clear error messages rather than propagating problems downstream.

Validation operates at multiple levels. Schema validation confirms data types, required fields, and structural correctness. Business validation confirms values fall within expected ranges, references resolve to existing records, and combinations are logically consistent. A beneficiary registration integration might validate that birth dates are not in the future, that household sizes match member counts, and that geographic codes exist in the reference data.
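The beneficiary registration checks above can be sketched as a two-stage validator; field names, types, and the reference data are illustrative assumptions:

```python
from datetime import date

# Hypothetical reference data for geographic codes.
VALID_GEO_CODES = {"KE-01", "KE-02", "UG-01"}

def validate_registration(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    # Schema-level checks: required fields present.
    for field in ("beneficiary_id", "birth_date", "household_size", "members", "geo_code"):
        if field not in record:
            errors.append(f"missing required field: {field}")
    if errors:
        return errors  # fail early before applying business rules
    # Business-level checks: ranges, consistency, reference data.
    if record["birth_date"] > date.today():
        errors.append("birth_date is in the future")
    if record["household_size"] != len(record["members"]):
        errors.append("household_size does not match member count")
    if record["geo_code"] not in VALID_GEO_CODES:
        errors.append(f"unknown geo_code: {record['geo_code']}")
    return errors
```

Failing early with a list of specific errors gives the error queue a clear message to display, rather than letting a malformed record propagate into transformation.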

+-------------------------------------------------------------------+
|                        VALIDATION PIPELINE                        |
+-------------------------------------------------------------------+
|                                                                   |
|   Incoming        Schema          Business        Transform       |
|   Data            Validation      Validation      and Route       |
|                                                                   |
|   +--------+      +--------+      +--------+       +--------+     |
|   |        |      |        |      |        |       |        |     |
|   | Source +----->+ Type   +----->+ Range  +------>+ Map    |     |
|   | Payload|      | Check  |      | Check  |       | Fields |     |
|   |        |      | Struct |      | Ref    |       | Calc   |     |
|   +--------+      +---+----+      +---+----+       +--------+     |
|                       |               |                           |
|                       v               v                           |
|                  +----+---------------+-----+                     |
|                  |                          |                     |
|                  |       ERROR QUEUE        |                     |
|                  |       (for review)       |                     |
|                  |                          |                     |
|                  +--------------------------+                     |
|                                                                   |
+-------------------------------------------------------------------+
Figure 5: Validation pipeline catches errors before transformation

Failed validations require disposition. Some failures represent transient issues (a referenced record not yet created) and can retry automatically. Others represent data quality problems requiring human review. Integration platforms should support configurable error handling: automatic retry with backoff, routing to error queues for manual review, and alerting thresholds for systematic failures.

Platform Options

Integration platforms range from code-based custom development through low-code middleware to fully managed iPaaS services. Selection depends on integration complexity, team skills, operational capacity, and budget constraints.

Open Source Middleware

Self-hosted open source platforms provide full control over integration infrastructure without licensing costs. Operational overhead is substantial: organisations must provision infrastructure, manage updates, ensure high availability, and develop internal expertise.

Apache Camel provides an integration framework with connectors for hundreds of systems. Developers define integration routes in code (Java, XML, or YAML) specifying source, transformation, and target. Camel runs within application servers or as standalone services. The framework handles connection management, error handling, and retry logic. Deployment typically uses container orchestration (Kubernetes) for scalability and resilience.

n8n offers a visual workflow builder for integration automation. Users create integrations by connecting nodes representing triggers, actions, and logic. n8n includes connectors for common SaaS applications and supports custom nodes for systems lacking built-in support. Self-hosted deployment uses Docker or Kubernetes. n8n’s visual approach suits organisations with technically capable staff who lack deep programming skills.

Apache Kafka provides event streaming infrastructure for event-driven architectures. Kafka handles high-volume message ingestion, durable storage, and delivery to consumers. Organisations building event-driven integration typically combine Kafka with a processing framework (Kafka Streams, Apache Flink) for transformation logic. Kafka requires significant operational expertise: cluster management, partition planning, and consumer group coordination.

Platform      | Strengths                                     | Considerations                  | Typical Deployment
--------------|-----------------------------------------------|---------------------------------|-------------------
Apache Camel  | Extensive connectors, mature ecosystem        | Requires Java expertise         | Kubernetes, VMs
n8n           | Visual builder, accessible to non-developers  | Limited complex transformation  | Docker, Kubernetes
Apache Kafka  | High throughput, durable messaging            | Operational complexity          | Kubernetes cluster
Mule CE       | Full ESB capabilities                         | Steep learning curve            | VMs, containers

Commercial iPaaS

Integration Platform as a Service offerings provide managed infrastructure with visual development environments. Organisations pay subscription fees (typically per user, per connection, or per transaction volume) rather than managing infrastructure. iPaaS suits organisations prioritising speed of delivery and operational simplicity over customisation and cost control.

Workato provides enterprise integration with strong governance features. Visual recipe builder supports complex logic including conditional branching, loops, and error handling. Workato includes connectors for major business systems (Salesforce, NetSuite, Workday) and humanitarian platforms (Salesforce NPSP, custom REST APIs). Pricing scales with recipe count and transaction volume; nonprofit pricing programmes reduce costs.

Microsoft Power Automate integrates deeply with Microsoft 365 and Dynamics 365. Organisations already using Microsoft’s ecosystem gain integration capabilities without additional vendor relationships. Power Automate’s low-code approach enables business users to build simple integrations; complex scenarios require developer involvement. Licensing through Microsoft 365 plans provides baseline capability; premium connectors require additional licensing.

Tray.io offers a flexible platform with strong API connectivity. Organisations with custom-built systems or less common applications find Tray’s universal HTTP connector valuable. Tray supports complex data transformations through a visual mapper with formula capabilities. Pricing is consumption-based; high-volume integrations can become expensive.

Zapier provides accessible automation for simple use cases. The platform excels at connecting popular SaaS applications through predefined triggers and actions. Complex transformations exceed Zapier’s capabilities, and high-volume integrations (over 1,000 tasks per month) become cost-prohibitive. Zapier suits organisations connecting standard SaaS tools with straightforward requirements.

Selection Considerations

Platform selection requires matching capabilities to requirements and constraints. Integration volume, complexity, team skills, and budget interact to determine appropriate choices.

Small organisations (under 100 staff) with five or fewer integrated systems and limited IT capacity benefit from iPaaS solutions despite higher per-integration cost. The managed infrastructure eliminates operational burden; visual builders enable delivery without dedicated developers. Zapier or Power Automate (if already in Microsoft ecosystem) provides entry-level capability. As integration needs grow, migration to Workato or Tray.io preserves the managed model with greater sophistication.

Medium organisations (100-500 staff) with dedicated IT functions face genuine trade-offs. iPaaS subscription costs for 15-20 integrations can reach £30,000-50,000 annually. Self-hosted open source (n8n, Apache Camel) eliminates subscription costs but requires infrastructure and expertise. Organisations with existing containerisation capability (Kubernetes clusters) absorb open source platforms into existing operational patterns. Those without such capability face infrastructure build-out alongside integration delivery.

Large organisations or consortia with complex multi-system landscapes benefit from investment in robust integration architecture. Event-driven patterns using Apache Kafka enable scale and flexibility. Dedicated integration teams develop expertise that amortises across many integrations. Open source platforms provide cost efficiency at scale; commercial support contracts (available for Kafka, Camel, and others) provide vendor backing without iPaaS pricing models.

Error Handling and Monitoring

Integration reliability depends on anticipating and managing failures. Networks disconnect, APIs time out, authentication expires, data validation fails, and target systems reject updates. Robust error handling transforms these inevitable failures from crises into manageable operational events.

Error Categories

Integration errors fall into categories requiring different responses. Transient errors are temporary conditions likely to resolve without intervention: network timeouts, rate limiting, temporary service unavailability. Permanent errors represent fundamental problems requiring investigation: authentication failures, schema mismatches, business rule violations.

Transient error handling uses retry with exponential backoff. After initial failure, the integration waits before retrying: 1 second, then 2 seconds, then 4 seconds, increasing to a maximum interval (typically 5-10 minutes). This pattern handles brief outages without overwhelming recovering systems with immediate retry storms. After a configured retry count (commonly 5-10 attempts over 1-2 hours), the integration escalates to permanent error handling.
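The retry schedule described above can be sketched as a small wrapper; the function name and parameters are illustrative, and the sleep function is injectable so the schedule can be tested without real waiting:

```python
import time

def run_with_backoff(step, max_attempts=5, base_delay=1.0, max_delay=600.0,
                     is_permanent=lambda exc: False, sleep=time.sleep):
    """Retry a failing integration step, doubling the wait each attempt."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception as exc:
            if is_permanent(exc) or attempt == max_attempts:
                raise  # escalate to permanent error handling (e.g. the error queue)
            sleep(delay)
            delay = min(delay * 2, max_delay)  # 1s, 2s, 4s, ... capped at max_delay
```

Raising after the final attempt (or on a permanent error) hands the record to the error-queue disposition described below rather than retrying indefinitely.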

+------------------------------------------------------------------+
|                       ERROR HANDLING FLOW                        |
+------------------------------------------------------------------+
|                                                                  |
|   Integration       Error            Retry with                  |
|   Attempt           Detection        Backoff                     |
|                                                                  |
|   +--------+        +--------+       +--------+                  |
|   |        |        |        |       |        |                  |
|   | Execute+------->+ Catch  +------>+ Wait   |                  |
|   | Step   |        | Error  |       | N sec  |                  |
|   |        |        |        |       |        |                  |
|   +--------+        +---+----+       +---+----+                  |
|                         |                |                       |
|                         v                v                       |
|                    +----+----+      +----+----+                  |
|                    |         |      |         |                  |
|                    |Permanent|      | Retry   |                  |
|                    | Error?  |      | Count   |                  |
|                    |         |      |Exceeded?|                  |
|                    +----+----+      +----+----+                  |
|                         |                |                       |
|            +------------+             +--+---------+             |
|            |            |             |            |             |
|            v            v             v            v             |
|       +----+----+  +----+----+  +----+----+  +----+----+        |
|       |  Error  |  |  Retry  |  |  Error  |  |  Retry  |        |
|       |  Queue  |  |  (N*2)  |  |  Queue  |  |  Step   |        |
|       +---------+  +---------+  +---------+  +---------+        |
|                                                                  |
+------------------------------------------------------------------+
Figure 6: Error handling distinguishes transient from permanent failures

Permanent error handling routes failed records to error queues for investigation. Error queues capture the original payload, transformation state at failure, error message, and timestamp. Staff review queued errors, diagnose root causes, and either fix data and resubmit or mark records as requiring manual processing outside the integration.

Idempotency Design

Idempotent integrations produce identical results regardless of execution count. If an integration processes the same record twice (due to retry, duplicate message delivery, or operational re-run), the target system ends with correct data rather than duplicates or compounding updates.

Idempotency requires unique identification and upsert logic. Each integration record carries a unique identifier from the source system. Target operations use “upsert” semantics: insert if new, update if existing. The finance system receiving grant budget updates stores the grant ID as a unique key; subsequent updates for the same grant ID modify the existing record rather than creating duplicates.

Consider an integration synchronising staff records from HR to the identity system. Without idempotency, a retry after timeout might create duplicate user accounts. With idempotency, the integration checks whether a user with the employee ID exists; if so, it updates the existing user rather than creating another.

Idempotency design must consider partial failures. An integration creating a grant record with child budget line items might succeed on the parent, fail on children, and retry the entire operation. The retry must not duplicate the parent while still creating missing children. Composite operations require transaction-like semantics or careful idempotency at each level.
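The HR-to-identity example can be sketched with an in-memory stand-in for the target system; it covers the simple single-record case, not the composite parent-child case, and all names are illustrative:

```python
class IdentityStore:
    """In-memory stand-in for a target system keyed on the source's unique ID."""
    def __init__(self):
        self.users = {}  # employee_id -> user record

    def upsert_user(self, record: dict) -> str:
        """Insert if new, update if existing -- safe to call any number of times."""
        employee_id = record["employee_id"]
        if employee_id in self.users:
            self.users[employee_id].update(record)
            return "updated"
        self.users[employee_id] = dict(record)
        return "created"

store = IdentityStore()
payload = {"employee_id": "E-042", "name": "A. Mwangi", "role": "finance"}
first = store.upsert_user(payload)   # initial delivery creates the user
second = store.upsert_user(payload)  # retry after timeout updates, no duplicate
```

Because the employee ID keys the upsert, a retry after a timeout modifies the existing user instead of creating a second account.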

Monitoring and Alerting

Integration monitoring tracks execution status, data volumes, timing, and error rates. Effective monitoring distinguishes normal operation from problems requiring attention without generating alert fatigue from routine variations.

Key metrics for integration monitoring include:

Execution success rate measures the percentage of integration runs completing successfully. A finance-to-banking integration running hourly should maintain 99%+ success. Brief rate drops during target system maintenance are expected; sustained drops indicate problems.

Message throughput tracks records processed per time period. An HR-to-identity sync processing 50 records daily should alert if volume drops to zero (source system issue) or spikes to 5,000 (unusual bulk operation or possible data quality problem).

Processing latency measures time from trigger to completion. An event-driven integration typically completes in seconds; gradual latency increase suggests capacity problems or target system degradation.

Error queue depth indicates accumulated problems awaiting resolution. A growing queue despite consistent processing signals either increasing error rate or insufficient attention to error resolution.

+------------------------------------------------------------------+
|                       MONITORING DASHBOARD                       |
+------------------------------------------------------------------+
|                                                                  |
|  Integration Health (last 24h)                                   |
|  +------------------------------------------------------------+  |
|  | Integration           | Success | Volume | Latency | Queue |  |
|  |-----------------------|---------|--------|---------|-------|  |
|  | HR -> Identity        |  98.5%  |    127 |   2.3s  |    3  |  |
|  | Finance -> Banking    | 100.0%  |     24 |   4.1s  |    0  |  |
|  | Grants -> Finance     |  95.2%  |     89 |   8.7s  |   12  |  |
|  | CRM -> Email Platform |  99.8%  |  1,847 |   1.2s  |    2  |  |
|  +------------------------------------------------------------+  |
|                                                                  |
|  Alert Thresholds                                                |
|  - Success rate < 95% sustained 1h: Warning                      |
|  - Success rate < 90% sustained 15m: Critical                    |
|  - Error queue > 50 items: Warning                               |
|  - Error queue > 100 items: Critical                             |
|  - Latency > 5x baseline sustained 30m: Warning                  |
|                                                                  |
+------------------------------------------------------------------+
Figure 7: Monitoring dashboard tracks integration health metrics

Alert thresholds require calibration to integration criticality and normal variation. A payments integration demands tighter thresholds than a monthly reporting sync. Initial thresholds based on estimates should adjust based on operational experience. Alerts should route to staff capable of investigation and response; routing all alerts to a general inbox ensures none receive attention.

Security in Integration

Integrations create pathways between systems that security controls must protect. Each integration represents potential attack surface: credentials that could be stolen, data flows that could be intercepted, and access paths that could be exploited.

Authentication and Authorisation

Integration authentication verifies that the connecting system is who it claims to be. Authorisation limits what authenticated integrations can access. Both requirements apply to every integration regardless of whether systems are internal or external.

API integrations should use the OAuth 2.0 client credentials flow where supported. The integration obtains short-lived access tokens using a client ID and secret; tokens expire and must be refreshed. This approach limits the exposure from credential theft: a stolen token expires quickly, and an attacker needs both the client ID and the secret to obtain new ones.

Where OAuth is unavailable, API keys provide simpler authentication. Keys should be unique per integration (not shared across integrations), rotated on schedule (90 days maximum), and transmitted only in headers (never in URLs where they appear in logs).

Database integrations require service accounts with minimum necessary privileges. An integration reading staff records needs SELECT permission on the staff table, not database administrator access. Write integrations should use accounts limited to specific tables and operations.

Credentials must never appear in code, configuration files checked into version control, or logs. Integration platforms should store credentials in secrets management systems (HashiCorp Vault, Azure Key Vault, AWS Secrets Manager) or platform-provided credential stores. Credential rotation must update the secrets store without requiring integration code changes.

Data Protection in Transit

All integration data flows must use encrypted transport. REST and SOAP APIs use HTTPS with TLS 1.2 or later. Database connections use TLS-encrypted connections. File transfers use SFTP, not FTP.

Certificate validation must remain enabled. Disabling certificate validation (a common troubleshooting shortcut) enables man-in-the-middle attacks. Integration platforms should fail on invalid certificates; resolving certificate issues requires proper certificate installation, not validation bypass.
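
In Python's standard library the safe configuration is the default; this sketch shows the settings to keep, the TLS 1.2 floor from above, and the shortcut to refuse:

```python
import ssl

# A default context verifies the server's certificate chain and hostname.
ctx = ssl.create_default_context()

# Pin the floor at TLS 1.2 to match the transport requirement above.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# The troubleshooting shortcut to avoid -- disabling validation enables
# man-in-the-middle attacks; install the correct certificate instead:
#   ctx.check_hostname = False
#   ctx.verify_mode = ssl.CERT_NONE
```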

Integrations processing sensitive data (personal data, financial data, protection case information) should evaluate whether additional encryption is appropriate. Transport encryption protects data in transit; it does not protect data at rest in integration platforms, logs, or error queues. Sensitive field values may warrant application-level encryption before transmission.

Access to Integration Platforms

The integration platform itself requires access control. Staff who can modify integrations can redirect data flows, exfiltrate data through custom integrations, or disrupt operations through misconfiguration.

Administrative access to integration platforms should use named accounts (not shared credentials), require multi-factor authentication, and follow least-privilege principles. Separating roles enables control: developers create and modify integrations, operators monitor and restart, administrators manage platform configuration and access.

Audit logging must capture all administrative actions: integration creation, modification, credential access, and execution history. Logs should be immutable (or integrity-protected) and retained according to organisational policy.

Master Data and Synchronisation

Master data elements appear across multiple systems: staff members exist in HR, identity, finance, and programme systems; grants exist in grants management, finance, and programme systems. Integration must maintain consistency of master data across systems while respecting each system’s role as source of truth for specific attributes.

Source of Truth Designation

Each master data attribute has one source of truth. Staff names originate in HR. Access permissions originate in identity management. Grant budgets originate in grants management. Financial transactions originate in finance. Integrations flow data from sources of truth to consuming systems; they do not create circular synchronisation where changes in any system propagate everywhere.
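
The designation can be captured as a simple registry that integrations consult before accepting a write. This is an illustrative Python sketch (the attribute and system names are examples), not a prescribed implementation:

```python
# Each master data attribute names exactly one source-of-truth system;
# writes arriving from any other system are rejected.
SOURCE_OF_TRUTH = {
    "staff_name": "hr",
    "access_permissions": "identity",
    "grant_budget": "grants",
    "financial_transaction": "finance",
}

def accept_update(attribute: str, originating_system: str) -> bool:
    """True only when the update comes from the attribute's designated source.
    Unknown attributes are rejected until a source of truth is agreed."""
    return SOURCE_OF_TRUTH.get(attribute) == originating_system
```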

+------------------------------------------------------------------+
| MASTER DATA FLOW |
+------------------------------------------------------------------+
| |
| Source of Truth Consuming Systems |
| |
| +-------------+ +-------------+ |
| | | | | |
| | HR System +------------------->+ Identity | |
| | | Staff Name, | System | |
| | (Staff Core | Department, | | |
| | Attributes)| Start Date +-------------+ |
| | | | |
| +-------------+ | Username, |
| | Email |
| +-------------+ v |
| | | +------+------+ |
| | Identity +------------------->+ | |
| | System | User Account | Finance | |
| | | Details | System | |
| | (Access | | | |
| | Attributes)| +-------------+ |
| +-------------+ |
| |
| +-------------+ +-------------+ |
| | | | | |
| | Grants +------------------->+ Finance | |
| | System | Budget by | System | |
| | | Category | | |
| | (Grant | +------+------+ |
| | Budget) | | |
| +-------------+ | Expenditure |
| | Data |
| v |
| +------+------+ |
| | | |
| | Grants | |
| | System | |
| | | |
| +-------------+ |
| |
+------------------------------------------------------------------+
Figure 8: Master data flows from designated sources of truth to consuming systems

Source of truth designation requires organisational agreement, not just technical configuration. When HR and hiring managers disagree about whether job titles originate in the HR system or are entered directly in the identity system, technical integration cannot resolve the conflict. Integration design should surface these decisions for business resolution.

Synchronisation Timing

Synchronisation timing determines how quickly changes in source systems reflect in consuming systems. Real-time synchronisation (triggered immediately by source changes) minimises latency but creates tight coupling and higher integration volume. Scheduled synchronisation (batch runs at intervals) introduces acceptable latency for reduced complexity.

Critical master data often warrants real-time synchronisation. When staff depart, identity system access should be revoked within minutes, not left to the nightly batch. When grants receive approval, finance systems should reflect new budgets promptly for procurement activity.

Less critical data tolerates scheduled synchronisation. Staff cost centre changes might synchronise nightly without operational impact. Programme reporting data aggregations might run weekly aligned with reporting cycles.

Synchronisation schedules should align with business processes. Daily HR-to-identity sync running at 06:00 ensures new starters have accounts when they arrive at 09:00. Grant budget sync running immediately after finance system nightly close ensures programme staff see current figures each morning.

Conflict Resolution

Bidirectional synchronisation creates potential conflicts when the same attribute changes in multiple systems between sync cycles. Conflict resolution strategies determine which change prevails.

Last-writer-wins applies the most recent change regardless of source. This strategy suits scenarios where either system might legitimately update an attribute. It risks losing changes when near-simultaneous updates occur in different systems.

Source-of-truth-wins always applies the designated source system’s value, discarding conflicting changes from other systems. This strategy enforces data governance but may frustrate users who expect their changes to persist.

Conflict queuing flags conflicting changes for manual resolution. This strategy ensures no data loss but requires operational process to review and resolve conflicts. It suits high-value data where neither automatic strategy is acceptable.
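
The three strategies can be sketched in a few lines of Python; the `Change` shape and strategy names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Change:
    system: str        # where the edit was made
    value: str
    timestamp: float   # when it was made

def resolve(a: Change, b: Change, strategy: str, source_of_truth: str):
    """Illustrative resolver for the three strategies described above.
    Returns the winning value, or None when the conflict is queued."""
    if strategy == "last_writer_wins":
        # Most recent change prevails, regardless of source.
        return (a if a.timestamp >= b.timestamp else b).value
    if strategy == "source_of_truth_wins":
        # Designated source always prevails; other changes are discarded.
        for change in (a, b):
            if change.system == source_of_truth:
                return change.value
        raise ValueError("neither change came from the source of truth")
    if strategy == "queue":
        return None  # flag for manual review; no data is discarded
    raise ValueError(f"unknown strategy: {strategy}")
```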

Most integration scenarios should avoid bidirectional synchronisation by clearly designating sources of truth. Where bidirectional flows are necessary, conflict resolution strategy must be explicit and communicated to users of all involved systems.

Integration Governance

Integration governance establishes policies, processes, and oversight ensuring integrations remain secure, documented, maintained, and aligned with organisational needs. Without governance, integration landscapes accumulate undocumented connections, orphaned data flows, and security vulnerabilities.

Integration Inventory

An integration inventory documents all connections between systems. For each integration, the inventory records source and target systems, data exchanged, transformation logic location, schedule or trigger, error handling approach, and responsible owner.
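
A minimal inventory record might look like the following Python sketch; the field names mirror the list above, and `impacted_by` is an illustrative helper for the impact analysis that the inventory enables:

```python
from dataclasses import dataclass

@dataclass
class IntegrationRecord:
    """One row of the integration inventory."""
    name: str
    source_system: str
    target_system: str
    data_exchanged: str
    transformation_location: str
    trigger: str          # schedule expression or triggering event
    error_handling: str
    owner: str

def impacted_by(inventory, system: str):
    """Impact analysis: every integration sourcing from or targeting a system."""
    return [r for r in inventory
            if system in (r.source_system, r.target_system)]
```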

The inventory enables impact analysis when systems change. Before upgrading the HR system, the organisation reviews all integrations sourcing from or targeting HR. Before decommissioning a legacy system, the organisation ensures all dependent integrations have been migrated or retired.

Inventory maintenance requires discipline. New integrations must be registered before deployment. Integration changes must update documentation. Regular audits verify inventory accuracy against actual deployed integrations. Orphaned integrations (deployed but undocumented) represent governance failures requiring investigation.

Change Management

Integration changes carry risk: modified transformations may produce incorrect data, new integrations may create unexpected load, and configuration errors may disrupt dependent processes. Change management applies structured review and approval to integration changes proportionate to their risk.

Low-risk changes (adjusting retry counts, adding logging) may require only peer review before deployment. Medium-risk changes (modifying field mappings, adding new target systems) require testing in non-production environments and approval from integration owners. High-risk changes (modifying integrations processing financial or sensitive data) require formal change requests with documented testing and rollback plans.

Deployment should separate environments: development for building and initial testing, staging for integration testing with realistic data volumes, production for live operation. Promotion between environments follows approval workflows. Production changes should occur during maintenance windows with monitoring active and rollback capability ready.

Lifecycle Management

Integrations have lifecycles: creation, operation, modification, and retirement. Governance addresses each phase.

Creation governance ensures new integrations meet architectural standards, use approved patterns and platforms, document their purpose and design, and receive security review before deployment. Bypassing creation governance leads to shadow integrations built by individuals without oversight.

Operational governance monitors integration health, investigates failures, and ensures error queues receive attention. Regular reviews assess whether integrations remain necessary and function correctly.

Retirement governance ensures integrations are decommissioned cleanly when no longer needed. Retired integrations should be disabled (not deleted) initially, monitored for unexpected failures indicating hidden dependencies, then removed after a confirmation period. Documentation should record retirement rationale and date.

Implementation Considerations

Integration implementation varies significantly based on organisational context. Approaches suitable for organisations with dedicated integration teams overwhelm those with limited IT capacity. Realistic assessment of capacity and constraints guides appropriate implementation paths.

For Organisations with Limited IT Capacity

Organisations without dedicated integration specialists should prioritise managed services and simplicity. iPaaS platforms (Power Automate for Microsoft-oriented organisations, Zapier for simpler needs, Workato for more complex requirements) eliminate infrastructure management burden. Visual builders enable staff with technical aptitude but without programming expertise to create and maintain integrations.

Start with highest-value integrations: typically HR-to-identity (ensuring access management accuracy), finance-to-banking (enabling payment processing), and CRM-to-email (enabling communications). Limit initial scope to essential data; comprehensive synchronisation adds complexity without proportionate benefit.

Avoid event-driven patterns initially. The operational overhead of message broker management typically exceeds available capacity. Hub-and-spoke through iPaaS provides centralised management without infrastructure burden.

Accept manual processes where integration complexity exceeds capacity. Monthly export-transform-import workflows, while tedious, are better than unmaintained automated integrations. As capacity grows, automate manual processes incrementally.

Budget for iPaaS subscriptions as operational expense. Annual costs of £5,000-15,000 for a small organisation's needs are comparable to the partial staff time required to maintain self-hosted alternatives, with lower risk and more predictable effort.

For Organisations with Established IT Functions

Organisations with dedicated IT teams can consider self-hosted platforms for cost efficiency and control. Evaluate honestly whether the organisation has container orchestration capability (Kubernetes expertise) before selecting platforms requiring it. Apache Camel on Kubernetes provides power but demands skills; n8n on simple Docker deployment offers middle ground.

Invest in integration architecture before accumulating tactical point-to-point connections. Document target state architecture, select platforms, establish patterns, then implement integrations within the architectural framework. This investment pays returns as integration count grows past ten connections.

Consider event-driven architecture for organisations with diverse systems and frequent changes. Initial investment in message broker infrastructure (Kafka, RabbitMQ) enables loosely coupled integrations that adapt to change more gracefully than tightly coupled alternatives.

Establish integration centre of excellence practices: documented standards, reusable transformation libraries, monitoring dashboards, and on-call rotation for integration failures. These practices ensure consistency across integrations and sustainable operational support.

Balance build versus buy for each integration. Commercial iPaaS platforms provide pre-built connectors for common systems that would require significant development effort to replicate. Organisations might use iPaaS for CRM and finance integrations (leveraging vendor-maintained connectors) while building custom integrations for humanitarian programme systems on open source platforms.

For Field and Distributed Contexts

Organisations operating across unreliable network connections face additional integration challenges. Synchronous integrations fail when connectivity fails; operations halt waiting for timeouts to resolve.

Asynchronous patterns with local queueing improve resilience. Mobile data collection synchronises to field servers when connected; field servers synchronise to headquarters through store-and-forward patterns that tolerate connectivity gaps. Message brokers with persistence ensure no data loss during extended disconnection.
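
A store-and-forward outbox can be as small as a local SQLite table. This sketch is illustrative and not tied to any particular broker: records survive restarts and extended disconnection, drain in order when connectivity returns, and are deleted only after successful delivery:

```python
import json
import sqlite3

class StoreAndForwardQueue:
    """Minimal persistent outbox for field systems with unreliable links."""

    def __init__(self, path=":memory:"):
        # A file path survives restarts; ":memory:" is used here for brevity.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox "
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")

    def enqueue(self, record: dict):
        self.db.execute("INSERT INTO outbox (payload) VALUES (?)",
                        (json.dumps(record),))
        self.db.commit()

    def drain(self, send):
        """Call `send` for each queued record in order. Each record is deleted
        only after `send` succeeds, so a failure mid-drain leaves the
        remainder queued for the next connectivity window."""
        rows = self.db.execute(
            "SELECT id, payload FROM outbox ORDER BY id").fetchall()
        for row_id, payload in rows:
            send(json.loads(payload))
            self.db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        self.db.commit()
```

In practice `send` would post to the headquarters API; here any callable works, which also makes the pattern easy to test offline.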

Edge processing reduces synchronisation volume. Rather than synchronising all records, edge systems process locally and synchronise summaries, changes, or flagged exceptions. A field distribution system might operate independently during distribution, then synchronise completion records and exceptions when connectivity allows.

Conflict resolution becomes critical with extended disconnection periods. Two field offices updating the same beneficiary record during a week-long connectivity outage create conflicts requiring resolution. Design data models to minimise conflict potential: append-only patterns where possible, clear ownership of record sections, and explicit conflict resolution rules for unavoidable overlap.

See Also