Legacy System Integration

Legacy system integration encompasses the architectural patterns and techniques that enable modern applications to exchange data with older systems that lack contemporary interfaces. These patterns address the fundamental tension between preserving investments in functioning systems and enabling new capabilities that those systems cannot directly support. The choice of integration pattern determines maintenance burden, data consistency guarantees, and the organisation’s ability to eventually replace the legacy system.

Problem context

A system becomes legacy when its technical characteristics create friction with current development practices and operational requirements. This classification depends on context rather than absolute age. A 15-year-old mainframe running COBOL with well-documented batch interfaces presents different integration challenges than a 5-year-old application with proprietary protocols and no API documentation.

Legacy systems persist in mission-driven organisations for sound reasons. A case management system deployed a decade ago contains historical data essential for longitudinal analysis. A donor management platform holds relationship records accumulated over years of fundraising. A financial system implements grant accounting rules validated through multiple audits. Replacement costs exceed available budgets, and the systems continue to function for their original purpose even as new requirements emerge around them.

Integration becomes necessary when new applications require data held in legacy systems or when legacy systems need data generated by modern platforms. A new mobile data collection application needs beneficiary identifiers from a legacy registration database. A modern analytics platform requires transaction data from an ageing financial system. A cloud-based case management tool must synchronise with an on-premises system during a multi-year transition.

The forces that make legacy integration challenging include absent or incomplete documentation, proprietary data formats, limited connectivity options, vendor dependencies for modifications, and institutional knowledge concentrated in staff who have since departed. These constraints shape pattern selection more than the abstract characteristics of the patterns themselves.

Legacy integration patterns apply when outright replacement is infeasible within required timescales or budgets. They do not apply when the legacy system is scheduled for decommissioning within 12 months, when the integration scope exceeds 20% of the legacy system’s functionality, or when the legacy system’s data quality problems are severe enough to propagate errors into new systems. In these situations, data migration followed by decommissioning produces better outcomes than integration investment.

Solution

Legacy integration patterns share a common architectural principle: isolate the complexities of the legacy system behind a boundary that presents a stable, well-defined interface to modern consumers. This boundary absorbs changes on either side, allowing the legacy system and modern applications to evolve independently within their respective constraints.

Wrapper pattern

The wrapper pattern places an intermediary service between consumers and the legacy system. This wrapper translates modern API requests into whatever protocol the legacy system understands and converts legacy responses into structured data formats suitable for contemporary applications.

+---------------------------------------------------------------------+
| WRAPPER ARCHITECTURE |
+---------------------------------------------------------------------+
| |
| +------------------+ +------------------+ +--------------+ |
| | | | | | | |
| | Modern | | Wrapper | | Legacy | |
| | Application | | Service | | System | |
| | | | | | | |
| | - REST client +---->| - REST endpoint +---->| - SOAP API | |
| | - JSON payload | | - Protocol | | - XML format | |
| | - OAuth2 auth | | translation | | - Basic auth | |
| | |<----+ - Data mapping |<----+ | |
| | | | - Error handling | | | |
| +------------------+ +------------------+ +--------------+ |
| |
| Request flow: |
| 1. Modern app calls wrapper REST endpoint |
| 2. Wrapper transforms JSON to XML |
| 3. Wrapper calls legacy SOAP service |
| 4. Legacy returns XML response |
| 5. Wrapper transforms XML to JSON |
| 6. Modern app receives structured response |
| |
+---------------------------------------------------------------------+

The wrapper service owns the translation logic entirely. Modern applications depend only on the wrapper’s API contract, not on legacy system internals. When the legacy system changes its data format or authentication mechanism, only the wrapper requires modification. When modern applications need additional data transformations, the wrapper can provide them without legacy system changes.

A wrapper for a legacy donor database implements a REST API that accepts queries by donor identifier, date range, or giving history criteria. Internally, it connects to the legacy system’s ODBC interface, executes SQL queries against the proprietary schema, and transforms results into a standardised JSON structure. The wrapper handles connection pooling, query timeouts, and error translation, presenting consistent behaviour regardless of legacy system quirks.

+-------------------------------------------------------------------+
| WRAPPER DATA TRANSFORMATION |
+-------------------------------------------------------------------+
| |
| Legacy Schema (DONOR_MASTER table) Wrapper Response (JSON) |
| +----------------------------+ +------------------------+ |
| | DNR_ID | VARCHAR(12) | | { | |
| | DNR_NM | VARCHAR(50) | --> | "id": "D-000012345", | |
| | DNR_ADDR1 | VARCHAR(40) | | "name": "J Smith", | |
| | DNR_ADDR2 | VARCHAR(40) | | "address": { | |
| | DNR_CTY | VARCHAR(30) | | "line1": "...", | |
| | DNR_ST | CHAR(2) | | "city": "...", | |
| | DNR_ZIP | VARCHAR(10) | | "postcode": "..." | |
| | LST_GFT_DT| DATE | | }, | |
| | LST_GFT_AM| DECIMAL(12,2) | | "lastGift": { | |
| | TOT_GFT_AM| DECIMAL(14,2) | | "date": "2024-...",| |
| +----------------------------+ | "amount": 150.00 | |
| | }, | |
| - Cryptic column names | "totalGiving": 2500 | |
| - Implicit nulls | } | |
| - Date format YYYYMMDD +------------------------+ |
| - Currency in cents |
| - Semantic field names |
| - Explicit null handling |
| - ISO 8601 dates |
| - Decimal currency |
+-------------------------------------------------------------------+
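The schema mapping in the diagram above can be sketched as a pure transformation function. This is a minimal sketch: the field names follow the diagram, and the YYYYMMDD date format, implicit-null empty strings, and currency-in-cents conventions come from the legacy quirks it lists.

```python
from datetime import datetime

def transform_donor(row: dict) -> dict:
    """Map a raw DONOR_MASTER row onto the wrapper's JSON structure."""

    def iso_date(yyyymmdd):
        # Treat empty or blank strings as implicit nulls, a common legacy quirk.
        if not yyyymmdd or not yyyymmdd.strip():
            return None
        return datetime.strptime(yyyymmdd.strip(), "%Y%m%d").date().isoformat()

    def pounds(cents):
        # Legacy stores currency in cents; the wrapper exposes decimal currency.
        return None if cents is None else cents / 100

    return {
        "id": row["DNR_ID"].strip(),
        "name": row["DNR_NM"].strip(),
        "address": {
            "line1": row.get("DNR_ADDR1", "").strip() or None,
            "line2": row.get("DNR_ADDR2", "").strip() or None,
            "city": row.get("DNR_CTY", "").strip() or None,
            "postcode": row.get("DNR_ZIP", "").strip() or None,
        },
        "lastGift": {
            "date": iso_date(row.get("LST_GFT_DT", "")),
            "amount": pounds(row.get("LST_GFT_AM")),
        },
        "totalGiving": pounds(row.get("TOT_GFT_AM")),
    }
```

Keeping the transformation free of I/O makes it trivially unit-testable against known legacy quirks, independently of the ODBC connection logic.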

Wrapper services introduce latency proportional to the translation complexity and network hops involved. A wrapper adding 50-100ms to each request remains acceptable for interactive applications making occasional queries. A wrapper adding the same latency to bulk operations processing 10,000 records transforms a 2-minute job into a 10-minute job. Bulk operations require batch-aware wrapper designs that amortise translation costs across record sets.

Anti-corruption layer pattern

The anti-corruption layer extends the wrapper concept to protect the domain model of modern applications from contamination by legacy system concepts. Where a simple wrapper translates data formats, an anti-corruption layer translates between incompatible domain models, ensuring that legacy system quirks do not leak into new application code.

+--------------------------------------------------------------------+
| ANTI-CORRUPTION LAYER ARCHITECTURE |
+--------------------------------------------------------------------+
| |
| Modern Domain Model ACL Legacy Domain Model |
| +------------------+ +----------+ +----------------------+ |
| | | | | | | |
| | Beneficiary | | Adapter | | CLIENT_RECORD | |
| | - id: UUID |<-->| Layer |<-->| - CLT_NBR: string | |
| | - household: ref | | | | - HH_CD: int | |
| | - services: list | +----+-----+ | - SVC_1..SVC_10: bit | |
| | | | | | |
| +------------------+ +----v-----+ +----------------------+ |
| | | |
| Service | Facade | SERVICE_MASTER |
| - type: enum | Layer | - SVC_CD: char(3) |
| - startDate: date | | - STRT_DT: int (YYYYMMDD) |
| - provider: ref +----+-----+ - PRVDR_ID: varchar |
| | |
| Household +----v-----+ HOUSEHOLD_FILE |
| - members: list |Translator| - Binary fixed-width |
| - address: embedded | Layer | - EBCDIC encoding |
| | | - Packed decimal fields |
| +----------+ |
| |
+--------------------------------------------------------------------+

The anti-corruption layer comprises three components. The adapter layer handles protocol translation and connection management. The facade layer presents a simplified interface that hides legacy system complexity. The translator layer converts between domain concepts, handling the semantic mismatches that exist beyond mere format differences.

Consider a legacy system that models services as ten boolean flags on a client record (SVC_1 through SVC_10), where each flag position corresponds to a service type defined in a separate configuration table last updated in 2015. A modern application models services as a collection of service objects, each with type, dates, provider, and status. The anti-corruption layer maintains the mapping between flag positions and service types, instantiates service objects from active flags, and reverse-engineers dates from audit log entries when creating records in the modern format.
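The flag-to-object expansion can be sketched as follows. The mapping from flag positions to service types is hypothetical, standing in for the configuration table the text describes.

```python
# Hypothetical mapping from SVC_n flag positions to service types,
# standing in for the legacy configuration table described above.
SERVICE_TYPES = {
    1: "housing", 2: "food", 3: "employment", 4: "health",
    5: "legal", 6: "education", 7: "childcare", 8: "transport",
    9: "counselling", 10: "financial",
}

def services_from_flags(client_record: dict) -> list[dict]:
    """Instantiate modern service objects from the legacy SVC_1..SVC_10 bits."""
    services = []
    for position, service_type in SERVICE_TYPES.items():
        if client_record.get(f"SVC_{position}") == 1:
            services.append({
                "type": service_type,
                # Start dates are not held on CLIENT_RECORD; the real layer
                # would recover them from audit log entries as described above.
                "startDate": None,
            })
    return services
```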

The translator handles semantic concepts that lack direct equivalents. The legacy system uses “inactive” status for both clients who completed programmes and clients who were removed for cause. The modern model distinguishes “completed,” “withdrawn,” and “terminated” statuses. The translator infers the appropriate modern status from combinations of legacy fields: inactive status plus completion date within 30 days of programme end suggests “completed”; inactive status plus notes containing specific keywords suggests “terminated.”
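The status inference above might be sketched as a small rule function. The field names, keyword list, and 30-day window are illustrative, not taken from a real system.

```python
from datetime import date

def infer_modern_status(legacy: dict, programme_end: date) -> str:
    """Infer 'completed', 'terminated', or 'withdrawn' from legacy fields.

    Rules mirror the heuristics described above; thresholds and
    keywords are illustrative assumptions.
    """
    if legacy["status"] != "inactive":
        return "active"
    completion = legacy.get("completion_date")
    # Inactive plus a completion date near programme end suggests 'completed'.
    if completion and abs((completion - programme_end).days) <= 30:
        return "completed"
    # Inactive plus cause-related keywords in notes suggests 'terminated'.
    notes = (legacy.get("notes") or "").lower()
    if any(kw in notes for kw in ("removed for cause", "terminated")):
        return "terminated"
    return "withdrawn"
```

Centralising these rules in the translator means that when a rule later proves wrong, the correction happens in one place rather than scattered across consuming code.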

Anti-corruption layers require ongoing maintenance as understanding of the legacy domain model improves. Initial implementations make assumptions that later prove incorrect. A field assumed to be always populated turns out to be null for records created before 2018. A code value assumed to mean one thing actually means something different for a specific programme. The layer’s value lies partly in centralising these discoveries and their corrections.

Strangler fig pattern

The strangler fig pattern enables incremental replacement of legacy system functionality while maintaining continuous operation. Named after fig trees that gradually envelop and replace their host trees, this pattern routes requests through a facade that initially delegates everything to the legacy system, then progressively redirects functionality to new implementations.

+--------------------------------------------------------------------+
| STRANGLER FIG PROGRESSION |
+--------------------------------------------------------------------+
| |
| STAGE 1: Initial state STAGE 2: First migration |
| +---------------------+ +---------------------+ |
| | Facade | | Facade | |
| +----------+----------+ +----------+----------+ |
| | | |
| v +----------+----------+ |
| +----------+----------+ | | | |
| | | v v v |
| | Legacy System | +--------+ +--------+ +--------+ |
| | (all functions) | | New | | Legacy | | Legacy | |
| | | | Svc A | | Svc B | | Svc C | |
| +---------------------+ +--------+ +--------+ +--------+ |
| |
| STAGE 3: Partial migration STAGE 4: Complete |
| +---------------------+ +---------------------+ |
| | Facade | | Facade | |
| +----------+----------+ +----------+----------+ |
| | | |
| +----------+----------+ +----------+----------+ |
| | | | | | | |
| v v v v v v |
| +--------+ +--------+ +----+ +--------+ +--------+ +--------+ |
| | New | | New | |Leg | | New | | New | | New | |
| | Svc A | | Svc B | |C | | Svc A | | Svc B | | Svc C | |
| +--------+ +--------+ +----+ +--------+ +--------+ +--------+ |
| |
| Timeline: 18-36 months typical for full migration |
| |
+--------------------------------------------------------------------+

The facade serves as the single entry point for all consumers. During migration, it contains routing logic that directs each request to either the legacy implementation or the new implementation based on function and migration status. Consumers interact only with the facade and remain unaware of which system handles their requests.

Migration proceeds function by function. A grants management system with modules for applications, reviews, awards, and reporting might migrate in four phases over 24 months. The new application module launches first, with the facade routing application submissions to the new system while routing all other functions to legacy. Six months later, the review module migrates. The facade updates its routing rules, now sending applications and reviews to new implementations while awards and reporting continue on legacy. This progression continues until all functions run on new systems.

Data synchronisation between legacy and new systems during migration requires careful design. The simplest approach designates one system as authoritative for each data domain and replicates changes to the other. During grants migration, new applications create records in the new system, with a synchronisation process creating corresponding records in legacy for functions that still depend on it. When the award module migrates, it reads from both systems during a transition period, then cuts over to new-system-only reads once validation confirms data completeness.

+--------------------------------------------------------------------+
| DATA SYNCHRONISATION DURING MIGRATION |
+--------------------------------------------------------------------+
| |
| Phase: Applications migrated, Reviews on legacy |
| |
| +------------------+ +------------------+ |
| | New System | | Legacy System | |
| | | | | |
| | Applications [W] +------------------->| Applications [R] | |
| | | sync (5 min) | | |
| | Reviews [R] |<-------------------+ Reviews [W] | |
| | | sync (5 min) | | |
| | Awards [R] |<-------------------+ Awards [W] | |
| | | sync (daily) | | |
| +------------------+ +------------------+ |
| |
| [W] = Write authority [R] = Read replica |
| |
| Sync frequency based on data change rate and consistency needs |
| |
+--------------------------------------------------------------------+

File-based integration

File-based integration transfers data through files deposited in shared locations. This pattern suits legacy systems that lack APIs but can export data through batch processes or report generation. Modern systems process these files through scheduled jobs that parse, validate, transform, and load the data.

+--------------------------------------------------------------------+
| FILE-BASED INTEGRATION FLOW |
+--------------------------------------------------------------------+
| |
| +-------------+ +-------------+ +-------------+ |
| | Legacy | | Secure | | Modern | |
| | System | | Transfer | | System | |
| +------+------+ +------+------+ +------+------+ |
| | | | |
| | 1. Export | | |
| | (02:00 daily) | | |
| v | | |
| +------+------+ | | |
| | Export file | | | |
| | /outbound/ | | | |
| | donors_ | | | |
| | 20240115.csv| | | |
| +------+------+ | | |
| | | | |
| | 2. Transfer | | |
| | (SFTP push) | | |
| +------------------>| | |
| | | |
| +------+------+ | |
| | Staging | | |
| | /incoming/ | | |
| +------+------+ | |
| | | |
| | 3. Poll | |
| | (every 15min) | |
| +------------------>| |
| | |
| +------v------+ |
| | Validate | |
| | Transform | |
| | Load | |
| +-------------+ |
| |
+--------------------------------------------------------------------+

File formats vary based on legacy system capabilities. Fixed-width formats with positional fields require format specifications documenting column positions, widths, and data types. Delimited formats (CSV, TSV) need header row conventions and quoting rules. XML exports provide self-describing structure but may use proprietary schemas. The receiving system’s parser must handle the specific format characteristics of each legacy source.

A financial system integration receives daily transaction exports as fixed-width text files with the following structure:

+------------------------------------------------------------------+
| FIXED-WIDTH FILE FORMAT |
+------------------------------------------------------------------+
| |
| Position Length Field Type Example |
| -------- ------ -------------- -------- ------------------- |
| 1-8 8 Transaction ID Numeric 00012345 |
| 9-16 8 Date YYYYMMDD 20240115 |
| 17-28 12 Amount Decimal 000001500.00 |
| 29-32 4 Currency Alpha GBP |
| 33-40 8 Account Code Alphanum 4100-001 |
| 41-50 10 Cost Centre Alphanum PROG-00123 |
| 51-100 50 Description Alpha Grant payment Q1 |
| 101-101 1 Status Alpha P (Posted) |
| |
| Sample record: |
| 0001234520240115000001500.00GBP 4100-001 PROG-00123 Grant...P |
| |
+------------------------------------------------------------------+
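A parser for this format can be derived directly from the position table. This is a minimal sketch; a production version would add per-field validation and error reporting.

```python
# Field slices derived from the position table above. Positions in the
# spec are 1-based and inclusive, so position 1-8 becomes slice [0:8].
FIELDS = {
    "transaction_id": slice(0, 8),
    "date": slice(8, 16),
    "amount": slice(16, 28),
    "currency": slice(28, 32),
    "account_code": slice(32, 40),
    "cost_centre": slice(40, 50),
    "description": slice(50, 100),
    "status": slice(100, 101),
}

def parse_transaction(line: str) -> dict:
    """Parse one fixed-width record into a dict of typed values."""
    record = {name: line[sl].strip() for name, sl in FIELDS.items()}
    record["amount"] = float(record["amount"])  # zero-padded decimal
    return record
```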

File-based integration operates on batch schedules rather than in real time. A daily export creates a 24-hour maximum lag between legacy system changes and modern system visibility. Organisations requiring fresher data can increase export frequency, but legacy system batch processes often have minimum intervals imposed by system load constraints or vendor recommendations. A finance system that exports at 02:00 daily cannot easily shift to hourly exports if the export process takes 90 minutes and competes with overnight batch processing.

Error handling in file-based integration requires explicit design. Failed file transfers leave the modern system with stale data until the next successful transfer. Corrupt or malformed files need detection, alerting, and either automatic retry or manual intervention. Partial processing that loads some records before encountering an error needs rollback capability or idempotent loading that can safely reprocess files.
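One way to achieve idempotent loading is to key the load on the record's natural identifier so that replaying a file overwrites rather than duplicates. A sketch using SQLite, with an illustrative table shape:

```python
import sqlite3

def load_transactions(conn: sqlite3.Connection, records: list[dict]) -> None:
    """Load records so that reprocessing the same file is harmless.

    INSERT OR REPLACE keyed on the transaction ID makes the load
    idempotent: a file replayed after a partial failure simply
    rewrites rows it already wrote.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS transactions ("
        "  transaction_id TEXT PRIMARY KEY,"
        "  amount REAL,"
        "  status TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO transactions "
        "VALUES (:transaction_id, :amount, :status)",
        records,
    )
    conn.commit()
```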

Database-level integration

Database-level integration reads directly from or writes directly to legacy system databases, bypassing application logic. This approach provides access to data without requiring legacy system modifications but carries substantial risks that limit its applicability.

Direct database access couples the integration to the legacy system’s internal schema, which may change without notice during vendor updates or patches. Application logic that validates, transforms, or enriches data gets bypassed, potentially exposing data in states the application would never present. Write operations that bypass application logic can create data integrity problems if the application enforces constraints in code rather than database constraints.

Database-level integration suits read-only access to stable tables where the alternative is no integration at all. A legacy HR system with no API and no export capability might allow read-only database access for organisational directory synchronisation. The integration reads from employee and department tables, accepting the risk that schema changes in vendor updates will require integration modifications.

Write operations through direct database access require extraordinary caution. Before implementing database writes, confirm that the legacy application has no triggers, stored procedures, or application logic that should execute during data changes. Test write operations extensively in non-production environments with production-representative data. Establish monitoring that detects data inconsistencies that might result from bypassed logic.

Screen scraping

Screen scraping extracts data from legacy system user interfaces when no other integration method exists. Robotic Process Automation (RPA) tools automate the sequence of UI interactions that a human user would perform, capturing displayed data and entering data into forms. This method serves as a last resort when systems provide no programmatic interfaces whatsoever.

Screen scraping creates brittle integrations sensitive to UI changes. A legacy system update that moves a field, changes a label, or modifies navigation paths breaks the automation. Vendor updates, browser changes, and even network latency variations can disrupt screen scraping processes. Maintenance burden typically exceeds other integration methods by 3-5 times.

Screen scraping suits temporary integrations with defined end dates. An organisation planning to replace a legacy system within 18 months might implement screen scraping to support immediate integration needs, accepting the maintenance burden as preferable to investing in more robust integration for a system scheduled for retirement. Extending screen scraping beyond its intended temporary period accumulates technical debt at accelerating rates.

Implementation

Implementing legacy integration begins with capability assessment. This assessment documents what the legacy system can expose, what protocols it supports, what data structures it uses, and what operational constraints apply.

The assessment inventory captures connection options available from the legacy system:

Capability          | Assessment questions                                                                      | Documentation needed
------------------- | ----------------------------------------------------------------------------------------- | --------------------
API availability    | Does an API exist? What protocol (REST, SOAP, proprietary)? What authentication?          | API documentation, endpoint catalogue, authentication requirements
Database access     | Is direct database connection permitted? What database platform? What credentials model?  | Connection strings, schema documentation, access approval
Export capabilities | What export formats are available? What scheduling options? What data scope?              | Export format specifications, scheduling constraints, file locations
Import capabilities | Can the system receive inbound data? What formats? What validation?                       | Import specifications, validation rules, error handling
Vendor support      | Will the vendor support integration? What integration options do they offer?              | Vendor contact, support agreement terms, integration product availability

Schema analysis for database integration requires mapping legacy tables and columns to integration requirements. Legacy systems often use abbreviated column names, overloaded fields, and implicit conventions that documentation does not capture. Analysis combines available documentation with data profiling that examines actual values, null frequencies, and value distributions.

Data profiling queries against a legacy donor table might reveal:

-- Profile DONOR_MASTER table
SELECT
    COUNT(*)                                          AS total_records,
    COUNT(DNR_EMAIL)                                  AS with_email,
    COUNT(CASE WHEN DNR_EMAIL LIKE '%@%' THEN 1 END)  AS valid_email_format,
    MIN(LST_UPD_DT)                                   AS oldest_update,
    MAX(LST_UPD_DT)                                   AS newest_update,
    COUNT(DISTINCT DNR_TYP)                           AS donor_type_count
FROM DONOR_MASTER;

-- Results inform integration design:
-- total_records:      45,230
-- with_email:         31,456 (70% populated)
-- valid_email_format: 28,901 (92% of populated are valid)
-- oldest_update:      2008-03-15 (16 years of data)
-- newest_update:      2024-01-14 (actively maintained)
-- donor_type_count:   7 (need mapping for each)

Wrapper implementation follows a three-phase approach. The first phase establishes connectivity and basic data retrieval, validating that the integration can reach the legacy system and extract data. The second phase implements the full translation layer with proper error handling, logging, and monitoring. The third phase adds optimisations for caching, connection pooling, and bulk operations.

A wrapper implementation timeline for a moderately complex integration:

Phase        | Duration | Activities                                             | Deliverables
------------ | -------- | ------------------------------------------------------ | ------------
Connectivity | 2 weeks  | Protocol testing, authentication setup, basic queries  | Working connection, sample data retrieval
Translation  | 4 weeks  | Schema mapping, transformation logic, error handling   | Complete wrapper API, unit tests
Optimisation | 2 weeks  | Caching strategy, connection pooling, bulk operations  | Production-ready wrapper, load tests
Hardening    | 2 weeks  | Monitoring, alerting, documentation, runbooks          | Operational documentation, monitoring dashboards

Anti-corruption layer implementation adds domain modelling work to wrapper development. The team must understand both the legacy domain model and the modern domain model well enough to build accurate translations. This understanding often emerges iteratively as edge cases reveal previously unknown legacy system behaviours.

Strangler fig implementation requires facade infrastructure that can route requests dynamically. The facade must support feature flags or configuration that controls routing without code deployment. A typical implementation uses a routing table that maps function identifiers to backend systems:

+------------------------------------------------------------------+
| STRANGLER FIG ROUTING TABLE |
+------------------------------------------------------------------+
| |
| Function Target Fallback Data Sync Status |
| -------------------- -------- --------- --------- --------- |
| grant.application NEW LEGACY NEW->LEG Active |
| grant.review LEGACY none LEG->NEW Pending |
| grant.award LEGACY none LEG->NEW Planned |
| grant.reporting LEGACY none LEG->NEW Planned |
| donor.lookup NEW LEGACY NEW<->LEG Active |
| donor.update LEGACY none LEG->NEW Pending |
| finance.transaction LEGACY none none Stable |
| |
| Status definitions: |
| - Active: Function routes to new system |
| - Pending: Migration in progress, testing underway |
| - Planned: Migration scheduled, not started |
| - Stable: No migration planned, permanent legacy routing |
| |
+------------------------------------------------------------------+
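The routing logic the facade applies to this table can be sketched briefly. The table contents mirror the diagram above; in production the routes would live in configuration so that routing changes require no code deployment.

```python
# Illustrative routing table mirroring the diagram above.
ROUTES = {
    "grant.application": {"target": "NEW", "fallback": "LEGACY"},
    "grant.review":      {"target": "LEGACY", "fallback": None},
    "donor.lookup":      {"target": "NEW", "fallback": "LEGACY"},
}

def route(function_id: str, new_healthy: bool = True) -> str:
    """Return which backend should handle a request for function_id."""
    entry = ROUTES[function_id]
    # Fall back to legacy when the new implementation is unavailable
    # and a fallback target is configured for this function.
    if entry["target"] == "NEW" and not new_healthy and entry["fallback"]:
        return entry["fallback"]
    return entry["target"]
```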

File-based integration implementation requires infrastructure for secure file transfer, staging areas, and processing pipelines. SFTP servers with appropriate access controls provide secure file exchange. Processing pipelines implement the parse-validate-transform-load sequence with appropriate error handling at each stage.

Consequences

Legacy integration patterns impose ongoing costs that organisations must budget for and staff appropriately. Wrapper maintenance includes updating translations when either system changes, troubleshooting data discrepancies, and handling performance issues as data volumes grow. Anti-corruption layers require deeper ongoing investment as domain understanding evolves. Strangler fig patterns require coordination across migration phases spanning months or years.

Pattern               | Initial effort | Ongoing effort | Coupling level               | Replacement path
--------------------- | -------------- | -------------- | ---------------------------- | -----------------
Wrapper               | Medium         | Low-Medium     | Low                          | Clean removal when legacy retired
Anti-corruption layer | High           | Medium         | Low                          | Clean removal when legacy retired
Strangler fig         | High           | Medium-High    | Medium initially, decreasing | Progressive elimination
File-based            | Low-Medium     | Medium         | Very low                     | Simple disconnection
Database direct       | Low            | High           | Very high                    | Difficult extraction
Screen scraping       | Medium         | Very high     | Very high                    | Difficult extraction

Performance implications vary by pattern. Wrappers add latency to every request, typically 20-100ms for protocol translation plus network round-trip time. File-based integration introduces batch latency, 4-24 hours for daily batches. Database direct access performs well but creates tight coupling that makes changes expensive.

Data consistency challenges arise in patterns that maintain data in multiple systems. Synchronisation delays create windows where systems show different values for the same entity. Conflict resolution for bidirectional sync requires rules that may produce unexpected outcomes. Integration failures that go undetected allow systems to drift apart over time.

Security implications require explicit risk assessment. Direct database connections bypass application-layer security controls. File transfer paths need encryption in transit and access controls at rest. Wrapper services become security boundaries that require hardening. Screen scraping credentials embedded in automation scripts need secure storage.

Exit strategies differ significantly by pattern. Well-designed wrappers and anti-corruption layers can be removed cleanly when legacy systems retire, with minimal impact on consuming applications that depend only on the boundary interface. Database direct access and screen scraping create tight coupling that requires substantial rework to eliminate, as consuming applications have adapted to characteristics that reflect the legacy system’s internals.

Variants

Minimal integration

Organisations with severely constrained resources can implement minimal integration through scheduled exports processed by simple scripts. A weekly CSV export from a legacy donor system, processed by a Python script that validates records and loads them into a modern CRM, provides basic synchronisation without middleware infrastructure. This approach sacrifices timeliness and error handling sophistication for implementation simplicity.
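A script of this kind might look like the following sketch. The column names and validation rules are illustrative assumptions, deliberately minimal in the spirit of this variant.

```python
import csv
import io

def load_donor_export(csv_text: str) -> tuple[list[dict], list[dict]]:
    """Split a legacy donor export into loadable and rejected rows.

    Validation is deliberately minimal: a non-empty ID and a plausible
    email. Column names ('donor_id', 'email') are illustrative.
    """
    valid, rejected = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if (row.get("donor_id") or "").strip() and "@" in (row.get("email") or ""):
            valid.append(row)
        else:
            rejected.append(row)  # a real script would log these for review
    return valid, rejected
```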

Minimal integration suits scenarios where legacy data supplements rather than drives modern system operations. A modern programme management system that references historical data from legacy systems for background context can tolerate weekly synchronisation. A system that requires up-to-date legacy data for operational decisions cannot.

High-availability integration

Mission-critical integrations require redundancy and failover capabilities absent from basic implementations. High-availability wrapper deployments run multiple instances behind load balancers with health checks that remove failing instances. Database connection pooling with multiple connection paths provides resilience against network partitions. File-based integration implements redundant transfer paths and processing pipelines.

High-availability integration adds infrastructure complexity and operational overhead. Organisations should reserve this investment for integrations where failure causes significant operational impact. A read-only integration that feeds reporting dashboards warrants less availability investment than a write integration that processes financial transactions.

Cloud-mediated integration

Integration platforms offered by cloud providers can serve as intermediaries between legacy systems and modern applications. Services such as Azure Integration Services, AWS AppFlow, or open-source alternatives like Apache NiFi provide pre-built connectors, transformation capabilities, and monitoring. These platforms reduce custom development at the cost of platform dependency.

Cloud-mediated integration suits organisations already invested in particular cloud platforms. An organisation running modern workloads in Azure with staff experienced in Azure services may find Azure Logic Apps a natural fit for legacy integration. An organisation with multi-cloud strategy or data sovereignty requirements may prefer self-hosted integration platforms that avoid additional cloud dependencies.

Anti-patterns

Big bang integration attempts to connect a legacy system comprehensively rather than incrementally. A project to “fully integrate” the legacy finance system with all modern applications creates a multi-year initiative that delivers no value until completion, by which time requirements have shifted and key personnel have departed. Incremental integration delivers value early and adjusts to learning.

Transparent proxy attempts to make a legacy system appear exactly like a modern system by mimicking contemporary APIs without translating underlying concepts. Consumers discover legacy system behaviours bleeding through the supposedly transparent interface: unexpected null fields, implicit state machines, error messages referencing internal legacy codes. Effective integration explicitly translates between domains rather than attempting perfect transparency.

Optimistic synchronisation assumes that data synchronisation will succeed and fails to monitor for divergence. Systems drift apart over weeks or months without detection until a user reports inconsistent data. Synchronisation monitoring must track last successful sync times, record counts on each side, and spot-check specific records for agreement.
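A divergence check along those lines can be sketched as a small function; the 1% count tolerance is an illustrative threshold, and a real monitor would also alert on stale last-sync timestamps.

```python
def check_sync_health(source_count: int, replica_count: int,
                      sample_pairs: list[tuple[dict, dict]],
                      tolerance: float = 0.01) -> list[str]:
    """Return divergence warnings between two synchronised systems.

    Compares record counts against a tolerance and spot-checks
    sampled record pairs for field-level agreement.
    """
    warnings = []
    if source_count and abs(source_count - replica_count) / source_count > tolerance:
        warnings.append(
            f"record count divergence: source={source_count} "
            f"replica={replica_count}"
        )
    for source_rec, replica_rec in sample_pairs:
        if source_rec != replica_rec:
            warnings.append(f"spot-check mismatch for id {source_rec.get('id')}")
    return warnings
```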

Immortal integration continues operating long after its intended end date. A screen scraping integration implemented for a 12-month transition continues operating three years later because the legacy system replacement got delayed, deprioritised, and eventually cancelled. The organisation now maintains expensive screen scraping infrastructure indefinitely. Integration projects need explicit criteria for reassessment and either completion or formalisation into permanent infrastructure.

Single-point expertise concentrates all legacy system knowledge in one staff member who built the integration. When that person leaves, the organisation loses ability to modify, troubleshoot, or even understand the integration. Documentation, knowledge transfer, and cross-training prevent expertise concentration from becoming organisational risk.

See also