
Partner System Integration

Partner system integration establishes technical connections between your organisation’s infrastructure and partner systems to enable data exchange, shared workflows, or federated access. Execute this task after completing the partner technology assessment and securing appropriate data sharing agreements. The outcome is a functioning, monitored integration that moves data securely between systems according to agreed specifications.

Prerequisites

| Requirement | Detail |
| --- | --- |
| Partner assessment | Partner Technology Assessment completed with acceptable risk rating |
| Data sharing agreement | Signed agreement specifying data types, retention, and deletion requirements |
| Integration requirements | Documented data flows, frequency, volume, and format specifications |
| Network access | Firewall rules permitting traffic between systems (specific ports listed in architecture) |
| Credentials | Service account credentials or API keys for both systems |
| Test environment | Non-production environment on both sides for integration testing |
| Monitoring access | Ability to configure alerts in your monitoring platform |

Verify network connectivity exists before beginning integration work:

Terminal window
# Test connectivity to partner API endpoint
curl -I https://api.partner.example.org/health
# Expected: HTTP/2 200 or similar success response
# Test from server that will run integration
nc -zv api.partner.example.org 443
# Expected: Connection to api.partner.example.org 443 port [tcp/https] succeeded!

Confirm you have credentials with appropriate scope:

Terminal window
# Test API authentication (OAuth2 client credentials example)
curl -X POST https://auth.partner.example.org/oauth/token \
  -d "grant_type=client_credentials" \
  -d "client_id=${CLIENT_ID}" \
  -d "client_secret=${CLIENT_SECRET}" \
  -d "scope=read:beneficiaries write:referrals"
# Expected: JSON response containing access_token

Integration architecture

Partner integrations follow one of three patterns depending on data flow requirements. Select the pattern that matches your integration needs before proceeding with configuration.

+--------------------------------------------------+
|               INTEGRATION PATTERNS               |
+--------------------------------------------------+

PATTERN A: API-to-API (Real-time)

+------------------+           +------------------+
|                  |   HTTPS   |                  |
|   Your System    +---------->+   Partner API    |
|                  |   REST    |                  |
|  - App Server    |<----------+  - Their App     |
|  - API Client    |   JSON    |  - Their Data    |
+------------------+           +------------------+

PATTERN B: File Transfer (Batch)

+------------------+           +------------------+
|                  |   SFTP    |                  |
|   Your System    +---------->+   Partner SFTP   |
|                  |   CSV     |                  |
|  - ETL Process   |<----------+  - File Store    |
|  - Scheduler     |  nightly  |                  |
+------------------+           +------------------+

PATTERN C: Federated Identity (Access)

+------------------+           +------------------+
|                  |   SAML    |                  |
|    Your IdP      +---------->+   Partner App    |
|                  |   /OIDC   |                  |
|  - User Auth     |<----------+  - SP Config     |
|  - Group Claims  |  Tokens   |                  |
+------------------+           +------------------+

Figure 1: Three integration patterns for partner systems

The API-to-API pattern suits real-time data exchange where your system calls partner endpoints or receives webhook notifications. Use this pattern for referral submissions, status updates, or queries against partner data. Latency is measured in milliseconds to seconds.

The file transfer pattern suits batch data exchange where complete datasets move between systems on a schedule. Use this pattern for beneficiary list synchronisation, reporting data, or bulk updates. Latency is measured in hours, with transfers occurring at scheduled intervals.

The federated identity pattern suits scenarios where partner staff need access to your systems or your staff need access to partner systems without creating separate accounts. Use this pattern when the integration requirement is access rather than data movement.

Procedure

Pattern A: API-to-API integration

1. Document the API contract between systems. Create a specification file that captures endpoints, authentication, data formats, and error handling:

integration-spec.yaml
integration:
  name: "partner-referral-sync"
  partner: "Example Partner Organisation"
  version: "1.0"
  created: "2024-11-15"

  authentication:
    type: "oauth2_client_credentials"
    token_endpoint: "https://auth.partner.example.org/oauth/token"
    scopes:
      - "read:beneficiaries"
      - "write:referrals"
    token_expiry_seconds: 3600

  endpoints:
    submit_referral:
      method: "POST"
      url: "https://api.partner.example.org/v2/referrals"
      content_type: "application/json"
      rate_limit: "100/minute"
    get_referral_status:
      method: "GET"
      url: "https://api.partner.example.org/v2/referrals/{referral_id}"
      rate_limit: "300/minute"

  data_mapping:
    outbound:
      - source: "case.client_name"
        target: "beneficiary.full_name"
      - source: "case.date_of_birth"
        target: "beneficiary.dob"
        format: "YYYY-MM-DD"

  error_handling:
    retry_codes: [429, 500, 502, 503, 504]
    max_retries: 3
    backoff_seconds: [1, 5, 30]
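Catching a malformed spec at startup is cheaper than debugging failed calls later. A minimal sketch of such a check (`validate_spec` and `REQUIRED_KEYS` are illustrative helpers, not part of any library; parse the YAML into a dict with your preferred loader first):

```python
# Minimal startup sanity check for the integration spec. Key names
# mirror integration-spec.yaml above.
REQUIRED_KEYS = {"name", "partner", "version", "authentication", "endpoints"}

def validate_spec(spec: dict) -> list:
    """Return a list of problems; an empty list means the spec looks usable."""
    integration = spec.get("integration", {})
    problems = [f"missing key: {k}"
                for k in sorted(REQUIRED_KEYS - integration.keys())]
    auth = integration.get("authentication", {})
    if auth.get("type") == "oauth2_client_credentials" and not auth.get("token_endpoint"):
        problems.append("oauth2_client_credentials requires token_endpoint")
    return problems

# Example: a spec that omits its endpoints section
spec = {
    "integration": {
        "name": "partner-referral-sync",
        "partner": "Example Partner Organisation",
        "version": "1.0",
        "authentication": {
            "type": "oauth2_client_credentials",
            "token_endpoint": "https://auth.partner.example.org/oauth/token",
        },
    }
}
print(validate_spec(spec))  # ['missing key: endpoints']
```

Run the check once when the integration service starts and refuse to boot on a non-empty problem list.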
2. Configure the API client in your application. Store credentials in your secrets management system, not in code or configuration files:

Terminal window
# Store credentials in secrets manager (example: HashiCorp Vault)
vault kv put secret/integrations/partner-example \
  client_id="your-client-id" \
  client_secret="your-client-secret" \
  token_endpoint="https://auth.partner.example.org/oauth/token"

# Verify storage
vault kv get secret/integrations/partner-example

Configure your application to retrieve credentials at runtime:

integration_client.py
import hvac
import requests
from datetime import datetime, timedelta

class PartnerIntegrationClient:
    def __init__(self, vault_path="secret/integrations/partner-example"):
        self.vault = hvac.Client()
        self.credentials = self.vault.secrets.kv.read_secret_version(
            path=vault_path
        )['data']['data']
        self.token = None
        self.token_expiry = None

    def get_access_token(self):
        if self.token and datetime.now() < self.token_expiry:
            return self.token
        response = requests.post(
            self.credentials['token_endpoint'],
            data={
                'grant_type': 'client_credentials',
                'client_id': self.credentials['client_id'],
                'client_secret': self.credentials['client_secret'],
                'scope': 'read:beneficiaries write:referrals'
            }
        )
        response.raise_for_status()
        token_data = response.json()
        self.token = token_data['access_token']
        # Refresh 5 minutes before expiry
        self.token_expiry = datetime.now() + timedelta(
            seconds=token_data['expires_in'] - 300
        )
        return self.token
3. Implement the data transformation layer. Map fields between your data model and the partner’s expected format:

transformers.py
from datetime import datetime

class ReferralTransformer:
    """Transform internal case data to partner referral format."""

    FIELD_MAPPING = {
        'case.client_name': 'beneficiary.full_name',
        'case.date_of_birth': 'beneficiary.dob',
        'case.gender': 'beneficiary.gender',
        'case.contact_phone': 'beneficiary.phone',
        'case.needs_assessment': 'referral.needs',
        'case.urgency': 'referral.priority',
    }

    GENDER_MAPPING = {
        'male': 'M',
        'female': 'F',
        'other': 'O',
        'prefer_not_to_say': 'U',
    }

    PRIORITY_MAPPING = {
        'critical': 1,
        'high': 2,
        'medium': 3,
        'low': 4,
    }

    def transform(self, case_data: dict) -> dict:
        """Transform case data to partner referral format."""
        return {
            'beneficiary': {
                'full_name': case_data['client_name'],
                'dob': self._format_date(case_data['date_of_birth']),
                'gender': self.GENDER_MAPPING.get(
                    case_data.get('gender', '').lower(), 'U'
                ),
                'phone': self._format_phone(case_data.get('contact_phone')),
            },
            'referral': {
                'source_system': 'your-org-case-management',
                'source_id': case_data['case_id'],
                'needs': case_data.get('needs_assessment', []),
                'priority': self.PRIORITY_MAPPING.get(
                    case_data.get('urgency', 'medium'), 3
                ),
                'submitted_at': datetime.utcnow().isoformat() + 'Z',
            }
        }

    def _format_date(self, date_value) -> str:
        if isinstance(date_value, datetime):
            return date_value.strftime('%Y-%m-%d')
        return str(date_value)[:10]

    def _format_phone(self, phone: str) -> str | None:
        if not phone:
            return None
        # Remove non-digits, ensure international format
        digits = ''.join(c for c in phone if c.isdigit())
        if len(digits) == 10:
            return f'+1{digits}'  # Adjust country code as needed
        return f'+{digits}'
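The `.get()` defaults in `transform` matter for data quality: a case with no gender or urgency recorded still produces a valid referral rather than a KeyError. A standalone check of that defaulting behaviour (mappings copied from the transformer above; `map_optional` is illustrative):

```python
GENDER_MAPPING = {'male': 'M', 'female': 'F', 'other': 'O', 'prefer_not_to_say': 'U'}
PRIORITY_MAPPING = {'critical': 1, 'high': 2, 'medium': 3, 'low': 4}

def map_optional(case: dict) -> tuple:
    # Mirrors the defaulting in ReferralTransformer.transform: unknown or
    # missing gender maps to 'U', missing urgency maps to medium priority
    gender = GENDER_MAPPING.get(case.get('gender', '').lower(), 'U')
    priority = PRIORITY_MAPPING.get(case.get('urgency', 'medium'), 3)
    return gender, priority

print(map_optional({}))                                           # ('U', 3)
print(map_optional({'gender': 'Female', 'urgency': 'critical'}))  # ('F', 1)
```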
4. Implement error handling and retry logic. Partner APIs experience transient failures; your integration must handle them gracefully:

api_client.py
import time
import uuid
import logging

import requests
from requests.exceptions import RequestException

from integration_client import PartnerIntegrationClient

logger = logging.getLogger(__name__)

class PartnerAPIClient(PartnerIntegrationClient):
    RETRY_STATUS_CODES = {429, 500, 502, 503, 504}
    MAX_RETRIES = 3
    BACKOFF_SECONDS = [1, 5, 30]

    def _generate_request_id(self) -> str:
        # Unique ID per request so the partner can trace and deduplicate
        return str(uuid.uuid4())

    def submit_referral(self, referral_data: dict) -> dict:
        """Submit referral with retry logic."""
        url = "https://api.partner.example.org/v2/referrals"
        for attempt in range(self.MAX_RETRIES + 1):
            try:
                response = requests.post(
                    url,
                    json=referral_data,
                    headers={
                        'Authorization': f'Bearer {self.get_access_token()}',
                        'Content-Type': 'application/json',
                        'X-Request-ID': self._generate_request_id(),
                    },
                    timeout=30
                )
                if response.status_code == 201:
                    logger.info(f"Referral submitted: {response.json()['id']}")
                    return response.json()
                if response.status_code in self.RETRY_STATUS_CODES:
                    if attempt < self.MAX_RETRIES:
                        wait_time = self.BACKOFF_SECONDS[attempt]
                        logger.warning(
                            f"Retryable error {response.status_code}, "
                            f"waiting {wait_time}s (attempt {attempt + 1})"
                        )
                        time.sleep(wait_time)
                        continue
                # Non-retryable error
                response.raise_for_status()
            except RequestException as e:
                if attempt < self.MAX_RETRIES:
                    wait_time = self.BACKOFF_SECONDS[attempt]
                    logger.warning(f"Request failed: {e}, retrying in {wait_time}s")
                    time.sleep(wait_time)
                else:
                    logger.error(f"Request failed after {self.MAX_RETRIES} retries")
                    raise
        raise Exception("Max retries exceeded")
5. Configure webhook reception if the partner sends status updates to your system. Set up an endpoint to receive and validate incoming webhooks:

webhook_handler.py
import hmac
import hashlib
import logging

from flask import Flask, request, jsonify

app = Flask(__name__)
logger = logging.getLogger(__name__)

# get_secret and update_referral_status are your own helpers: one reads
# from the secrets manager, the other updates the local referral record
WEBHOOK_SECRET = get_secret('partner-webhook-secret')

@app.route('/webhooks/partner/referral-status', methods=['POST'])
def handle_referral_status():
    # Validate webhook signature
    signature = request.headers.get('X-Webhook-Signature')
    if not validate_signature(request.data, signature):
        logger.warning("Invalid webhook signature")
        return jsonify({'error': 'Invalid signature'}), 401
    payload = request.json
    # Process status update
    referral_id = payload['referral_id']
    new_status = payload['status']
    # Update local record
    update_referral_status(
        partner_referral_id=referral_id,
        status=new_status,
        updated_at=payload['updated_at']
    )
    logger.info(f"Referral {referral_id} status updated to {new_status}")
    return jsonify({'received': True}), 200

def validate_signature(payload: bytes, signature: str) -> bool:
    expected = hmac.new(
        WEBHOOK_SECRET.encode(),
        payload,
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f'sha256={expected}', signature)
6. Deploy to your test environment and execute integration tests against the partner’s test environment:
Terminal window
# Deploy integration service to test environment
kubectl apply -f integration-service.yaml -n test
# Verify deployment
kubectl get pods -n test -l app=partner-integration
# Expected: Running status
# Execute integration test suite
pytest tests/integration/partner/ -v --env=test

Expected test output:

tests/integration/partner/test_authentication.py::test_token_acquisition PASSED
tests/integration/partner/test_authentication.py::test_token_refresh PASSED
tests/integration/partner/test_referral.py::test_submit_referral PASSED
tests/integration/partner/test_referral.py::test_get_referral_status PASSED
tests/integration/partner/test_referral.py::test_invalid_referral_rejected PASSED
tests/integration/partner/test_webhook.py::test_status_webhook_received PASSED
tests/integration/partner/test_webhook.py::test_invalid_signature_rejected PASSED
========================= 7 passed in 45.23s =========================

Pattern B: File transfer integration

1. Configure SFTP connectivity to the partner’s file transfer server. Generate an SSH key pair for authentication:
Terminal window
# Generate SSH key pair for SFTP authentication
ssh-keygen -t ed25519 -f ~/.ssh/partner-sftp -C "integration@yourorg.example.org"
# Do not set passphrase for automated transfers (use key management instead)
# Display public key to send to partner
cat ~/.ssh/partner-sftp.pub
# Output: ssh-ed25519 AAAA... integration@yourorg.example.org

Store the private key in your secrets management system:

Terminal window
# Store private key in secrets manager
vault kv put secret/integrations/partner-sftp \
  private_key=@~/.ssh/partner-sftp \
  hostname="sftp.partner.example.org" \
  username="yourorg-integration" \
  remote_path="/incoming/yourorg"
# Remove local copy of private key
rm ~/.ssh/partner-sftp
2. Test SFTP connectivity once the partner has added your public key:
Terminal window
# Test connection
sftp -i /path/to/key yourorg-integration@sftp.partner.example.org
# Expected: Connected to sftp.partner.example.org.
# sftp>
# List remote directory
sftp> ls -la /incoming/yourorg
# Expected: Directory listing (possibly empty)
sftp> quit
3. Implement the file generation process. Create a scheduled job that extracts data, transforms it to the agreed format, and uploads to the partner:

file_transfer.py
import csv
import io
import logging
from datetime import datetime

import paramiko

logger = logging.getLogger(__name__)

# get_vault_secret, get_beneficiaries_for_partner, and log_sync_event are
# your own helpers for secrets access, data extraction, and audit logging

class PartnerFileTransfer:
    def __init__(self, vault_path="secret/integrations/partner-sftp"):
        creds = get_vault_secret(vault_path)
        self.hostname = creds['hostname']
        self.username = creds['username']
        self.private_key = paramiko.Ed25519Key.from_private_key(
            io.StringIO(creds['private_key'])
        )
        self.remote_path = creds['remote_path']

    def generate_beneficiary_extract(self) -> str:
        """Generate CSV extract of beneficiaries for partner."""
        beneficiaries = get_beneficiaries_for_partner(
            partner_id='partner-example',
            since_last_sync=True
        )
        output = io.StringIO()
        writer = csv.DictWriter(output, fieldnames=[
            'source_id', 'full_name', 'dob', 'gender',
            'registration_date', 'location', 'status'
        ])
        writer.writeheader()
        for b in beneficiaries:
            writer.writerow({
                'source_id': b['id'],
                'full_name': b['name'],
                'dob': b['date_of_birth'].strftime('%Y-%m-%d'),
                'gender': b['gender'][0].upper() if b['gender'] else 'U',
                'registration_date': b['registered_at'].strftime('%Y-%m-%d'),
                'location': b['location_code'],
                'status': b['status'],
            })
        return output.getvalue()

    def upload_file(self, content: str, filename: str):
        """Upload file to partner SFTP server."""
        transport = paramiko.Transport((self.hostname, 22))
        transport.connect(username=self.username, pkey=self.private_key)
        sftp = paramiko.SFTPClient.from_transport(transport)
        try:
            remote_file = f"{self.remote_path}/{filename}"
            with sftp.file(remote_file, 'w') as f:
                f.write(content)
            logger.info(f"Uploaded {filename} to {self.hostname}")
        finally:
            sftp.close()
            transport.close()

    def run_daily_sync(self):
        """Execute daily file sync to partner."""
        timestamp = datetime.utcnow().strftime('%Y%m%d_%H%M%S')
        filename = f"beneficiaries_{timestamp}.csv"
        content = self.generate_beneficiary_extract()
        record_count = content.count('\n') - 1  # Subtract header
        self.upload_file(content, filename)
        # Log sync for audit
        log_sync_event(
            partner='partner-example',
            filename=filename,
            records=record_count,
            direction='outbound'
        )
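`run_daily_sync` derives its audit record count by counting newlines. Because `csv.DictWriter` terminates every row, including the header, with a line ending, newline count minus one equals the number of data rows. A standalone check of that arithmetic:

```python
import csv
import io

# Build a two-row extract the same way generate_beneficiary_extract does
output = io.StringIO()
writer = csv.DictWriter(output, fieldnames=['source_id', 'full_name'],
                        lineterminator='\n')
writer.writeheader()
writer.writerow({'source_id': '1', 'full_name': 'Test Person'})
writer.writerow({'source_id': '2', 'full_name': 'Another Person'})
content = output.getvalue()

# Same arithmetic as run_daily_sync: subtract the header line
print(content.count('\n') - 1)  # 2
```

The count also feeds reconciliation: if the partner reports a different row count for the same file, something was truncated in transit.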
4. Configure the scheduled job. Use your organisation’s job scheduler to run the file transfer:

# kubernetes CronJob example
apiVersion: batch/v1
kind: CronJob
metadata:
  name: partner-file-sync
  namespace: integrations
spec:
  schedule: "0 2 * * *"  # Run at 02:00 UTC daily
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: file-sync
              image: yourorg/partner-integration:1.0
              command: ["python", "-m", "file_transfer", "run_daily_sync"]
              env:
                - name: VAULT_ADDR
                  value: "https://vault.yourorg.example.org"
                - name: VAULT_ROLE
                  value: "partner-integration"
          restartPolicy: OnFailure
          serviceAccountName: partner-integration
5. Implement file reception if the partner sends files to you. Configure a watcher process to detect and process incoming files:

file_receiver.py
class PartnerFileReceiver(PartnerFileTransfer):
    def __init__(self):
        super().__init__()
        self.incoming_path = "/incoming/from-partner"
        self.processed_path = "/processed/from-partner"

    def check_for_files(self) -> list:
        """Check partner SFTP for new files."""
        transport = paramiko.Transport((self.hostname, 22))
        transport.connect(username=self.username, pkey=self.private_key)
        sftp = paramiko.SFTPClient.from_transport(transport)
        try:
            files = sftp.listdir(self.incoming_path)
            return [f for f in files if f.endswith('.csv')]
        finally:
            sftp.close()
            transport.close()

    def download_and_process(self, filename: str):
        """Download file from partner and process contents."""
        transport = paramiko.Transport((self.hostname, 22))
        transport.connect(username=self.username, pkey=self.private_key)
        sftp = paramiko.SFTPClient.from_transport(transport)
        try:
            remote_path = f"{self.incoming_path}/{filename}"
            with sftp.file(remote_path, 'r') as f:
                content = f.read().decode('utf-8')
            # Process the file (process_partner_file is your own parsing logic)
            records_processed = self.process_partner_file(content, filename)
            # Move to processed folder
            processed_path = f"{self.processed_path}/{filename}"
            sftp.rename(remote_path, processed_path)
            log_sync_event(
                partner='partner-example',
                filename=filename,
                records=records_processed,
                direction='inbound'
            )
        finally:
            sftp.close()
            transport.close()
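The receiver needs a driver that periodically lists new files and processes each one. A generic sketch (the `poll` helper is illustrative; in production pass `PartnerFileReceiver.check_for_files` and `download_and_process`, use real logging, and run it from your scheduler):

```python
import time

def poll(check, process, interval_seconds=300, max_cycles=None):
    """Repeatedly fetch the list of new files and process each one.

    A failure on one file is reported and does not block the others,
    so a single bad file cannot stall the whole feed.
    """
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for filename in check():
            try:
                process(filename)
            except Exception as exc:
                print(f"failed to process {filename}: {exc}")
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_seconds)

# Dry run with in-memory stand-ins for the SFTP calls
processed = []
poll(lambda: ['a.csv', 'b.csv'], processed.append,
     interval_seconds=0, max_cycles=1)
print(processed)  # ['a.csv', 'b.csv']
```

Moving each file to the processed folder, as `download_and_process` does, is what keeps this loop idempotent across cycles.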

Pattern C: Federated identity integration

1. Gather federation requirements from the partner. You need the following information to configure SAML or OIDC federation:

    For SAML federation (your IdP, partner SP):

    • Partner’s Entity ID
    • Partner’s Assertion Consumer Service (ACS) URL
    • Required attributes (name, email, groups)
    • Signature requirements

    For OIDC federation (your IdP, partner relying party):

    • Partner’s redirect URI(s)
    • Required scopes
    • Required claims
2. Configure your identity provider to trust the partner application. This example uses Keycloak; adapt commands for your IdP:

Terminal window
# Create client for partner application in Keycloak
# Using Keycloak Admin CLI
kcadm.sh create clients -r your-realm -s clientId=partner-example-app \
  -s enabled=true \
  -s protocol=saml \
  -s 'attributes."saml.assertion.signature"=true' \
  -s 'attributes."saml.server.signature"=true' \
  -s 'attributes."saml_name_id_format"=email'

# Set ACS URL
kcadm.sh update clients/CLIENT_UUID -r your-realm \
  -s 'redirectUris=["https://app.partner.example.org/saml/acs"]'

# Configure attribute mappings
kcadm.sh create clients/CLIENT_UUID/protocol-mappers/models -r your-realm \
  -s name=email \
  -s protocol=saml \
  -s protocolMapper=saml-user-attribute-mapper \
  -s 'config."user.attribute"=email' \
  -s 'config."friendly.name"=email' \
  -s 'config."attribute.name"=email'
3. Export your IdP metadata and share with the partner:

Terminal window
# Export SAML metadata
curl https://idp.yourorg.example.org/realms/your-realm/protocol/saml/descriptor \
  > yourorg-idp-metadata.xml
# Send metadata to partner for their SP configuration
4. Configure group-based access control. Limit which users can access the partner application by group membership:

Terminal window
# Create group for partner application access
kcadm.sh create groups -r your-realm -s name=partner-example-users

# Create client role
kcadm.sh create clients/CLIENT_UUID/roles -r your-realm -s name=partner-user

# Map group to client role
kcadm.sh add-roles -r your-realm \
  --gid GROUP_UUID \
  --cclientid partner-example-app \
  --rolename partner-user

# Configure client to require role
kcadm.sh update clients/CLIENT_UUID -r your-realm \
  -s 'authorizationServicesEnabled=true' \
  -s 'serviceAccountsEnabled=true'
5. Test federation with a pilot user before enabling for all users:

Terminal window
# Add pilot user to access group
kcadm.sh update users/USER_UUID/groups/GROUP_UUID -r your-realm
# Pilot user navigates to partner application
# Expected: Redirect to your IdP, authenticate, return to partner app
6. Document the federation relationship for ongoing management:
federation-record.yaml
federation:
  partner: "Example Partner Organisation"
  type: "saml"
  direction: "your-idp-to-partner-sp"
  established: "2024-11-15"
  technical:
    your_entity_id: "https://idp.yourorg.example.org/realms/your-realm"
    partner_entity_id: "https://app.partner.example.org"
    partner_acs: "https://app.partner.example.org/saml/acs"
    signing_cert_expiry: "2026-11-15"
  access_control:
    group: "partner-example-users"
    approver: "IT Manager"
    review_frequency: "quarterly"
  attributes_shared:
    - email
    - display_name
    - department
  contacts:
    partner_technical: "techsupport@partner.example.org"
    your_technical: "it@yourorg.example.org"
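The signing_cert_expiry field is only useful if something reads it; expired federation certificates are a common cause of sudden login outages. A small sketch (illustrative helper, assuming the record has been parsed into a dict) suitable for a scheduled check:

```python
from datetime import date

def days_until_expiry(record: dict, today: date) -> int:
    """Days until the federation signing certificate expires."""
    expiry = date.fromisoformat(record['technical']['signing_cert_expiry'])
    return (expiry - today).days

# signing_cert_expiry taken from the federation record above
record = {'technical': {'signing_cert_expiry': '2026-11-15'}}
remaining = days_until_expiry(record, date(2026, 10, 16))
print(remaining)        # 30
print(remaining <= 30)  # True: time to start renewal
```

Alert at 30 days so renewal and metadata redistribution (see the maintenance table below) can happen before the certificate lapses.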

Monitoring and alerting configuration

Configure monitoring for all integration patterns to detect failures before they impact operations. Create alerts for authentication failures, data flow interruptions, and error rate increases.

prometheus-alerts.yaml
groups:
  - name: partner-integration
    rules:
      - alert: PartnerIntegrationAuthFailure
        expr: |
          sum(rate(partner_api_auth_failures_total[5m])) > 0.1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Partner API authentication failures detected"
          description: "{{ $value }} auth failures/sec to partner API"
      - alert: PartnerIntegrationHighErrorRate
        expr: |
          sum(rate(partner_api_requests_total{status=~"5.."}[10m]))
          / sum(rate(partner_api_requests_total[10m])) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Partner API error rate above 5%"
      - alert: PartnerFileSyncMissed
        expr: |
          time() - partner_file_sync_last_success_timestamp > 90000
        labels:
          severity: critical
        annotations:
          summary: "Partner file sync has not completed in 25 hours"
      - alert: PartnerWebhookBacklog
        expr: |
          partner_webhook_queue_depth > 100
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Partner webhook processing backlog"
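The PartnerFileSyncMissed threshold of 90000 seconds is deliberate: 25 hours covers one missed nightly run plus an hour of slack before paging. The same arithmetic as a standalone check:

```python
def sync_is_stale(last_success_epoch: float, now: float,
                  threshold_seconds: int = 90000) -> bool:
    """True when the last successful sync is more than 25 hours old."""
    return now - last_success_epoch > threshold_seconds

now = 1_700_000_000
print(sync_is_stale(now - 24 * 3600, now))  # False: 24h old, within threshold
print(sync_is_stale(now - 26 * 3600, now))  # True: 26h old, a run was missed
```

If you change the CronJob schedule, adjust the threshold to match: one full interval plus slack.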

Integration data flow follows this monitoring architecture:

+--------------------------------------------------------------------+
|                      MONITORING ARCHITECTURE                       |
+--------------------------------------------------------------------+

+------------------+     +------------------+     +------------------+
|                  |     |                  |     |                  |
|   Integration    +---->+     Metrics      +---->+      Alert       |
|     Service      |     |    Collector     |     |     Manager      |
|                  |     |   (Prometheus)   |     |                  |
|  - API calls     |     |                  |     |  - Email         |
|  - File transfers|     |  - Request rates |     |  - Slack         |
|  - Webhooks      |     |  - Error rates   |     |  - PagerDuty     |
|                  |     |  - Latencies     |     |                  |
+--------+---------+     +--------+---------+     +------------------+
         |                        |
         |                        v
         |               +--------+---------+
         |               |                  |
         +-------------->+  Log Aggregator  |
                         |    (Loki/ELK)    |
                         |                  |
                         |  - Request logs  |
                         |  - Error details |
                         |  - Audit trail   |
                         +------------------+

Figure 2: Monitoring architecture for partner integrations

Verification

After completing integration setup, verify correct operation through systematic testing.

Test API connectivity and authentication:

Terminal window
# Verify token acquisition
curl -X POST https://auth.partner.example.org/oauth/token \
  -d "grant_type=client_credentials" \
  -d "client_id=${CLIENT_ID}" \
  -d "client_secret=${CLIENT_SECRET}" \
  -d "scope=read:beneficiaries"
# Expected response:
# {
#   "access_token": "eyJhbG...",
#   "token_type": "bearer",
#   "expires_in": 3600
# }

# Test API endpoint with token
TOKEN=$(get_token)
curl -H "Authorization: Bearer ${TOKEN}" \
  https://api.partner.example.org/v2/health
# Expected: {"status": "healthy"}

Test data transformation accuracy:

# Run transformation tests
pytest tests/unit/test_transformers.py -v
# Expected output:
# test_name_transformation PASSED
# test_date_format_conversion PASSED
# test_gender_mapping PASSED
# test_missing_optional_fields PASSED
# test_special_characters_handled PASSED

Test file transfer operation:

Terminal window
# Manual file transfer test
python -c "
from file_transfer import PartnerFileTransfer
pt = PartnerFileTransfer()
pt.upload_file('test,content\n1,data', 'test_file.csv')
print('Upload successful')
"
# Verify file arrived (coordinate with partner or check SFTP)
sftp yourorg-integration@sftp.partner.example.org
sftp> ls /incoming/yourorg/test_file.csv
# Expected: File listed with current timestamp

Test webhook reception:

Terminal window
# Send test webhook to your endpoint
curl -X POST https://api.yourorg.example.org/webhooks/partner/referral-status \
  -H "Content-Type: application/json" \
  -H "X-Webhook-Signature: sha256=COMPUTED_SIGNATURE" \
  -d '{"referral_id": "test-123", "status": "received", "updated_at": "2024-11-15T10:00:00Z"}'
# Expected: {"received": true}
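COMPUTED_SIGNATURE must be the HMAC-SHA256 of the exact request body, keyed with the shared webhook secret, matching validate_signature in the handler. A helper to produce it for test payloads (`sign` is an illustrative name):

```python
import hashlib
import hmac

def sign(payload: bytes, secret: str) -> str:
    """Produce the X-Webhook-Signature header value for a test payload."""
    digest = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return f"sha256={digest}"

body = b'{"referral_id": "test-123", "status": "received", "updated_at": "2024-11-15T10:00:00Z"}'
print(sign(body, "test-secret"))  # "sha256=" followed by 64 hex characters
```

Any whitespace or key-order difference between the signed bytes and the bytes curl actually sends will make validation fail, so sign the literal body string.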

Verify monitoring is capturing metrics:

Terminal window
# Query Prometheus for integration metrics
curl -g 'http://prometheus:9090/api/v1/query?query=partner_api_requests_total'
# Expected: Non-zero counter values

End-to-end test

Execute a complete transaction through the integration: submit a test referral via API, verify it appears in the partner’s test system, trigger a status webhook, and confirm the status update appears in your system. This validates the entire integration path.
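That end-to-end flow can be sketched as a single test function. The four callables are hypothetical wrappers around pieces built earlier in this guide (API client, partner test system lookup, webhook trigger, local database read); the dry run below substitutes in-memory stand-ins:

```python
def run_end_to_end(submit, partner_has, trigger_webhook, local_status) -> bool:
    """Drive one referral through the full integration path."""
    referral_id = submit({'client_name': 'Test Person'})
    assert partner_has(referral_id), 'referral not visible in partner test system'
    trigger_webhook(referral_id, 'accepted')
    assert local_status(referral_id) == 'accepted', 'status update not recorded locally'
    return True

# Dry run with in-memory stand-ins
store = {}
ok = run_end_to_end(
    submit=lambda data: 'ref-1',
    partner_has=lambda rid: rid == 'ref-1',
    trigger_webhook=lambda rid, status: store.__setitem__(rid, status),
    local_status=lambda rid: store.get(rid),
)
print(ok)  # True
```

Wire the same function to the real clients in the test environment and any break in the round trip fails at the step where data stopped flowing.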

Troubleshooting

| Symptom | Cause | Resolution |
| --- | --- | --- |
| 401 Unauthorized on API calls | Token expired or invalid credentials | Verify client_id and client_secret are correct; check token expiry handling refreshes before expiry |
| 403 Forbidden after successful authentication | Insufficient scopes or partner-side authorisation | Verify requested scopes match partner configuration; confirm partner has enabled access for your client |
| Connection refused to partner endpoint | Network connectivity or firewall issue | Verify firewall rules permit outbound traffic to partner IP/port; test with nc -zv hostname port |
| SSL certificate verify failed | Certificate chain issue or expired certificate | Verify partner certificate validity with openssl s_client; update CA certificates if using self-signed |
| SFTP Permission denied | Public key not configured or wrong username | Confirm partner added your public key; verify username matches partner’s configuration exactly |
| File transfer succeeds but partner reports no data | Wrong directory or filename pattern | Confirm remote path matches partner expectations; verify filename matches expected pattern |
| Webhook signature validation fails | Incorrect secret or signature algorithm mismatch | Confirm webhook secret matches partner configuration; verify signature algorithm (SHA256 vs SHA1) |
| Webhook processing delays | Queue backlog or slow processing | Check queue depth metrics; scale webhook processors; identify slow database queries |
| Data transformation produces incorrect values | Field mapping errors or format mismatches | Review transformer unit tests; add test case for failing scenario; compare raw vs transformed data |
| Partner reports duplicate submissions | Retry logic submitting same record multiple times | Implement idempotency keys; check partner’s deduplication logic; add request_id tracking |
| 429 Too Many Requests errors | Rate limit exceeded | Implement rate limiting client-side; add backoff on 429; request rate limit increase from partner |
| Intermittent 504 Gateway Timeout | Partner service under load or network latency | Increase timeout values; implement circuit breaker; discuss with partner if persistent |
| Federation login fails with Invalid signature | Certificate mismatch between IdP and SP | Re-export IdP metadata; verify partner imported current metadata; check certificate expiry |
| Users cannot access federated application | Not in authorised group or role mapping issue | Verify user group membership; check role mappings in IdP; test with known-good account |
| SAML assertion rejected by partner | Clock skew or assertion validity window | Synchronise NTP on IdP server; adjust assertion validity duration; verify partner server time |

Ongoing maintenance responsibilities

Document maintenance responsibilities in the data sharing agreement. The following table provides a template for responsibility assignment:

| Activity | Frequency | Your responsibility | Partner responsibility |
| --- | --- | --- | --- |
| Credential rotation | Quarterly | Rotate API credentials; update secrets manager | Accept new credentials; update their configuration |
| Certificate renewal | Before expiry | Renew signing certificates; redistribute metadata | Update SP trust configuration |
| Access review | Quarterly | Review users with partner access; remove departed staff | Review service account usage |
| Integration testing | Monthly | Execute integration test suite | Maintain test environment availability |
| Monitoring review | Weekly | Review error rates and latency metrics | Respond to escalations |
| Security patching | As needed | Patch integration service components | Patch their API infrastructure |
| Capacity planning | Annually | Project volume growth; request rate limit increases | Provision capacity for projected load |

See also