# Offline System Configuration
Offline system configuration prepares applications and endpoints to function when network connectivity is unavailable. This task covers authentication caching so users can sign in without reaching identity providers, local data storage so work continues during outages, and synchronisation queues so changes made offline merge correctly when connectivity returns. Complete this configuration before deploying devices to field locations where connectivity is intermittent or absent for extended periods.
The outcome is a system that allows users to authenticate, access data, create and modify records, and perform core workflows entirely offline for a defined period, then synchronise all changes when connectivity resumes without data loss or corruption.
## Prerequisites
| Requirement | Detail |
|---|---|
| Administrative access | Local administrator on Windows/macOS endpoints, or MDM with configuration profile deployment capability |
| Identity provider | Microsoft Entra ID, Okta, or Google Workspace with offline authentication support enabled at tenant level |
| Application compatibility | Applications must support offline operation; verify before proceeding |
| Storage capacity | Minimum 20GB free space for offline cache on each endpoint; 50GB recommended for data-intensive applications |
| Synchronisation backend | CouchDB, PouchDB-compatible server, or application-native sync service operational |
| Conflict resolution strategy | Documented policy for handling conflicting edits (last-write-wins, merge, manual review) |
| Test environment | Non-production endpoint and server for validation before field deployment |
| Time allocation | 2-4 hours per application for initial configuration; 30 minutes per endpoint for deployment |
Verify the endpoint meets storage requirements before beginning:
```powershell
# Windows PowerShell
Get-PSDrive C | Select-Object @{N='FreeGB';E={[math]::Round($_.Free/1GB,2)}}
```
```bash
# macOS/Linux
df -h / | awk 'NR==2 {print $4}'
```

Expected output shows at least 20GB available. Endpoints with less than 20GB free will experience cache eviction that degrades offline functionality.
## Procedure
### Configure authentication caching
Authentication caching stores credentials locally so users can sign in when the identity provider is unreachable. The cached credential allows access to the device and locally-cached application data but cannot generate new tokens for cloud services until connectivity returns.
- Enable Windows offline sign-in caching by configuring the cached logon count. Open Group Policy Editor (`gpedit.msc`) on a standalone machine or create a Group Policy Object for domain-joined devices:

```
Computer Configuration
└── Windows Settings
    └── Security Settings
        └── Local Policies
            └── Security Options
                └── Interactive logon: Number of previous logons to cache
```

Set the value to 10. This allows the 10 most recent users to sign in offline. Values above 25 are not recommended as they increase the credential theft attack surface.
For Microsoft Entra ID joined devices, configure Primary Refresh Token (PRT) caching:
```powershell
# Verify PRT status
dsregcmd /status | Select-String -Pattern "PRT|AzureAd"
```

Expected output includes `AzureAdPrt : YES`. If the PRT shows `NO`, the device is not correctly Entra ID joined and offline authentication will fail.
- Configure macOS offline authentication by enabling the mobile account feature. For devices bound to a directory service:
```bash
# Create mobile account for directory user
sudo /System/Library/CoreServices/ManagedClient.app/Contents/Resources/createmobileaccount -n username
```

For Entra ID joined Macs using the Microsoft Enterprise SSO plug-in, verify Platform SSO is enabled:
```bash
# Check Platform SSO status
app-sso platform -s
```

The output should show the organisation's identity provider domain registered. Platform SSO caches authentication tokens for 14 days by default.
- Set the offline authentication validity period. Users can authenticate offline for a limited time before requiring network verification. Configure this in your identity provider:
For Entra ID, set the sign-in frequency in Conditional Access:
```
Entra admin centre
└── Protection
    └── Conditional Access
        └── Policies
            └── [Your policy]
                └── Session
                    └── Sign-in frequency: 14 days
```

For Okta, configure the Global Session Policy:
```
Okta admin console
└── Security
    └── Global Session Policy
        └── Default Rule
            └── Maximum session lifetime: 336 hours (14 days)
```

The 14-day period balances security against field operational requirements. Shorter periods (7 days) suit lower-risk environments; longer periods (30 days) may be necessary for extended field deployments but increase risk if devices are compromised.
- Test offline authentication before deployment. With network connectivity active, sign in to the device normally. Then simulate offline conditions:
```powershell
# Disable network adapter (Windows PowerShell, elevated)
Disable-NetAdapter -Name "Wi-Fi" -Confirm:$false
```
```bash
# Or on macOS
networksetup -setairportpower en0 off
```

Lock the device (Windows+L or Ctrl+Command+Q), then sign in again. Success confirms cached credentials are functional. Re-enable connectivity after testing:
```powershell
# Windows
Enable-NetAdapter -Name "Wi-Fi" -Confirm:$false
```
```bash
# macOS
networksetup -setairportpower en0 on
```

**Password changes invalidate cache**
When users change their password while connected, the cached credential updates automatically. If a password is changed from another device while the field device is offline, the user cannot sign in until connectivity returns. Coordinate password resets with field deployment schedules.
### Configure local data storage
Local data storage creates an offline copy of application data on the endpoint. The storage mechanism varies by application architecture: browser-based applications use IndexedDB or the Cache API, while native applications use local databases or file caches.
- Configure browser-based application storage. Modern web applications built for offline use store data in IndexedDB. The default quota varies by browser and available disk space:
| Browser | Default quota | Configuration method |
|---|---|---|
| Chrome/Edge | 60% of disk or 6GB min | Cannot increase; design app within limits |
| Firefox | 50% of disk | `dom.indexedDB.storageOption.enabled` in `about:config` |
| Safari | 1GB | User prompt for additional storage |

For Chrome and Edge, verify the storage quota available to your application using Developer Tools (F12):
```javascript
// Run in browser console
navigator.storage.estimate().then(estimate => {
  console.log(`Quota: ${Math.round(estimate.quota / 1024 / 1024)} MB`);
  console.log(`Usage: ${Math.round(estimate.usage / 1024 / 1024)} MB`);
});
```

Request persistent storage to prevent the browser from evicting cached data under storage pressure:
```javascript
// Application should call this on first load
if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persist().then(granted => {
    console.log(`Persistent storage: ${granted ? 'granted' : 'denied'}`);
  });
}
```

- Configure Service Worker caching for application assets. The Service Worker intercepts network requests and serves cached responses when offline. Register the Service Worker in your application:
```javascript
// In main application JavaScript
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(registration => {
      console.log('SW registered:', registration.scope);
    })
    .catch(error => {
      console.log('SW registration failed:', error);
    });
}
```

The Service Worker script (`sw.js`) defines the caching strategy:
```javascript
const CACHE_NAME = 'app-cache-v1';
const OFFLINE_URLS = [
  '/',
  '/index.html',
  '/app.js',
  '/styles.css',
  '/offline.html'
];

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(cache => cache.addAll(OFFLINE_URLS))
  );
});

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
      .then(response => response || fetch(event.request))
      .catch(() => caches.match('/offline.html'))
  );
});
```

- Configure native application offline storage. Applications like KoboToolbox, ODK Collect, and CommCare have built-in offline storage that requires explicit configuration.
For KoboToolbox/ODK Collect, configure in Settings:
```
ODK Collect
└── Settings
    └── Form management
        └── Blank form update mode: Manual
        └── Auto-send: Off (prevents failed sends on poor connectivity)
    └── User interface
        └── Navigation: Swipes (works better offline)
```

Download forms while connected:
```
Main menu
└── Get Blank Form
    └── Select All
        └── Get Selected
```

Verify forms downloaded by checking the device storage:
```bash
# Android, via adb
adb shell ls /storage/emulated/0/Android/data/org.odk.collect.android/files/projects/*/forms/
```

For CommCare, configure offline sync depth:
```
CommCare HQ
└── Project Settings
    └── Project Settings
        └── Advanced Settings
            └── Days of data to sync: 30
```

The sync depth determines how many days of cases download for offline access. Setting 30 days downloads all cases modified in the past month. Reduce to 14 days if storage is constrained; increase to 90 days for long field deployments.
- Configure Microsoft 365 offline access. OneDrive Files On-Demand reduces storage requirements but requires connectivity. For offline field use, disable Files On-Demand and sync specific folders:
```powershell
# Disable Files On-Demand (requires OneDrive restart)
Set-ItemProperty -Path "HKCU:\Software\Microsoft\OneDrive" `
  -Name "FilesOnDemandEnabled" -Value 0 -Type DWord
```
```powershell
# Restart OneDrive
Stop-Process -Name "OneDrive" -Force
Start-Process "$env:LOCALAPPDATA\Microsoft\OneDrive\OneDrive.exe"
```

Configure Outlook Cached Exchange Mode for offline email:
```
Outlook
└── File
    └── Account Settings
        └── Account Settings
            └── [Select account]
                └── Change
                    └── Use Cached Exchange Mode: Enabled
                    └── Download email for the past: 12 months
```

The 12-month setting downloads approximately 2-5GB depending on email volume. Reduce to 3 months for storage-constrained devices.
### Configure synchronisation queues
Synchronisation queues store changes made offline and transmit them when connectivity returns. The queue must persist across application restarts, handle transmission failures gracefully, and manage conflicts when the same record was modified both offline and on the server.
- Understand the queue architecture before configuration. A properly designed offline queue has three components:
```
+------------------------------------------------------------------------+
|                        OFFLINE QUEUE ARCHITECTURE                      |
+------------------------------------------------------------------------+

  +------------------+     +------------------+     +----------------+
  |   Application    |     |   Queue Store    |     |    Network     |
  |                  |     |                  |     |    Monitor     |
  |  User creates    |     |  - Pending ops   |     |                |
  |  or modifies     +---->|  - Timestamps    +---->|  Detects       |
  |  record          |     |  - Retry count   |     |  online        |
  |                  |     |  - Conflict data |     |                |
  +------------------+     +--------+---------+     +--------+-------+
                                    |                        |
                                    v                        v
                           +--------+---------+     +--------+------+
                           |   Sync Engine    |<----+   Trigger     |
                           |                  |     |   (online)    |
                           |  - Batch ops     |     +---------------+
                           |  - Handle errors |
                           |  - Resolve       |
                           |    conflicts     |
                           +--------+---------+
                                    |
                                    v
                           +--------+---------+
                           |      Server      |
                           |                  |
                           |  - Apply changes |
                           |  - Return status |
                           |  - Send updates  |
                           +------------------+
```

Figure 1: Offline queue components showing data flow from application through queue to server
The queue store must use persistent storage (IndexedDB, SQLite, or filesystem) rather than memory, as queued operations must survive application restarts and device reboots.
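To illustrate the persistence requirement, here is a minimal sketch (class and field names are illustrative, not from any specific library) of a queue whose entries are plain JSON-serialisable objects, the property that lets them be written to IndexedDB, SQLite, or a file and rebuilt after a restart:

```javascript
// Minimal persistent-queue sketch: entries round-trip through JSON so
// they survive application restarts. save()/load() stand in for writes
// to IndexedDB, SQLite, or the filesystem.
class OfflineQueue {
  constructor(entries = []) {
    this.entries = entries;
  }

  enqueue(operation, entityId, payload) {
    this.entries.push({
      operation,                          // 'create' | 'update' | 'delete'
      entityId,
      payload,
      createdAt: new Date().toISOString(),
      attempts: 0,
      status: 'pending'
    });
  }

  // Serialise for the persistent store
  save() {
    return JSON.stringify(this.entries);
  }

  // Rebuild the queue after a restart
  static load(serialised) {
    return new OfflineQueue(JSON.parse(serialised));
  }
}

const q = new OfflineQueue();
q.enqueue('create', 'rec-1', { name: 'Test' });
const restored = OfflineQueue.load(q.save()); // simulated restart
console.log(restored.entries.length); // 1
```

Keeping entries as plain objects (no functions, no class instances in the payload) is what makes the restart round-trip safe.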
- Configure PouchDB for browser-based applications. PouchDB provides offline-first storage that synchronises with CouchDB-compatible backends:
```javascript
// Initialise local database
const localDB = new PouchDB('field-data');
```
```javascript
// Configure remote database
const remoteDB = new PouchDB('https://couchdb.example.org/field-data', {
  auth: {
    username: 'fielduser',
    password: 'secure-password'
  }
});
```
```javascript
// Configure bidirectional sync with retry
const sync = localDB.sync(remoteDB, {
  live: true,        // Continuous sync when online
  retry: true,       // Retry failed syncs
  batch_size: 100,   // Documents per batch
  batches_limit: 5   // Concurrent batches
});
```
```javascript
// Handle sync events
sync.on('change', info => {
  // For sync() (as opposed to replicate()), the docs are nested
  // under info.change
  console.log(`Synced: ${info.change.docs.length} documents`);
});
```
```javascript
sync.on('paused', err => {
  if (err) {
    console.log('Sync paused due to error:', err);
  } else {
    console.log('Sync complete, waiting for changes');
  }
});
```
```javascript
sync.on('error', err => {
  console.error('Sync failed:', err);
});
```

The `batch_size` of 100 and `batches_limit` of 5 prevent overwhelming limited-bandwidth connections. For satellite links, reduce to `batch_size: 25` and `batches_limit: 2`.
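Where deployments mix connection types, the tuning values above can be grouped into connection profiles so each field build picks a preset. A sketch; the profile names and the cellular values are illustrative, not part of PouchDB:

```javascript
// Sync tuning presets. Broadband and satellite follow the guidance
// above; the cellular row is an assumed midpoint.
const SYNC_PROFILES = {
  broadband: { batch_size: 100, batches_limit: 5 },
  cellular:  { batch_size: 50,  batches_limit: 3 },
  satellite: { batch_size: 25,  batches_limit: 2 }
};

function syncOptions(profile) {
  // Unknown profiles fall back to the most conservative preset
  const tuning = SYNC_PROFILES[profile] || SYNC_PROFILES.satellite;
  return { live: true, retry: true, ...tuning };
}

console.log(syncOptions('satellite'));
```

The result is passed straight to `localDB.sync(remoteDB, syncOptions('satellite'))`.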
- Configure queue persistence for custom applications. If building custom offline functionality, implement a queue table:
```sql
-- SQLite schema for offline queue
CREATE TABLE sync_queue (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    operation TEXT NOT NULL,        -- 'create', 'update', 'delete'
    entity_type TEXT NOT NULL,      -- 'beneficiary', 'distribution', etc.
    entity_id TEXT NOT NULL,        -- UUID of the record
    payload TEXT NOT NULL,          -- JSON of the change
    created_at TEXT NOT NULL,       -- ISO 8601 timestamp
    attempts INTEGER DEFAULT 0,     -- Retry count
    last_attempt TEXT,              -- Last sync attempt timestamp
    status TEXT DEFAULT 'pending',  -- 'pending', 'syncing', 'failed', 'conflict'
    error_message TEXT,             -- Last error if failed
    conflict_data TEXT              -- Server version if conflict
);
```
```sql
CREATE INDEX idx_queue_status ON sync_queue(status);
CREATE INDEX idx_queue_entity ON sync_queue(entity_type, entity_id);
```

Queue entries when offline:
```javascript
async function queueOperation(operation, entityType, entityId, payload) {
  const db = await openDatabase();
  await db.run(
    `INSERT INTO sync_queue
       (operation, entity_type, entity_id, payload, created_at)
     VALUES (?, ?, ?, ?, ?)`,
    [operation, entityType, entityId, JSON.stringify(payload),
     new Date().toISOString()]
  );
}
```

- Configure the network monitor to trigger synchronisation. The application must detect connectivity changes and initiate sync:
```javascript
// Browser-based network detection
window.addEventListener('online', () => {
  console.log('Connection restored, starting sync');
  startSync();
});
```
```javascript
window.addEventListener('offline', () => {
  console.log('Connection lost, queueing operations');
});
```
```javascript
// More reliable: periodic connectivity check
async function checkConnectivity() {
  try {
    const response = await fetch('/api/ping', {
      method: 'HEAD',
      cache: 'no-store',
      // fetch() has no timeout option; use an abort signal instead
      signal: AbortSignal.timeout(5000)
    });
    return response.ok;
  } catch {
    return false;
  }
}
```
```javascript
// Check every 30 seconds
setInterval(async () => {
  const online = await checkConnectivity();
  if (online && hasQueuedOperations()) {
    startSync();
  }
}, 30000);
```

The `navigator.onLine` property and `online`/`offline` events are unreliable indicators of actual connectivity. They indicate network interface state, not internet reachability. The periodic fetch check provides accurate connectivity status.
- Configure retry logic for failed synchronisation attempts. Exponential backoff prevents overwhelming the server when connectivity is unstable:
```javascript
async function syncWithRetry(maxAttempts = 5) {
  let attempt = 0;
  let delay = 1000; // Start with 1 second

  while (attempt < maxAttempts) {
    try {
      await performSync();
      return { success: true };
    } catch (error) {
      attempt++;
      if (attempt >= maxAttempts) {
        return { success: false, error: error.message };
      }
      // Exponential backoff: 1s, 2s, 4s, 8s between attempts
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= 2;
    }
  }
}
```

### Configure conflict handling
Conflicts occur when the same record is modified both offline and on the server. The conflict resolution strategy must be configured before deployment, as unresolved conflicts cause data loss or require manual intervention.
- Select a conflict resolution strategy appropriate to your data:

**Last-write-wins** applies the most recent change regardless of origin. Suitable for data where recency is more important than completeness, such as status updates or location data. Simple to implement but can lose information.

**Server-wins** always preserves the server version, discarding offline changes in conflict. Suitable for reference data that should not be modified offline. Prevents corruption but frustrates users who lose work.

**Client-wins** always preserves the offline change. Suitable for data entry scenarios where field staff are the authoritative source. Can overwrite legitimate server corrections.

**Merge** combines changes at the field level. If offline and server changes modified different fields of the same record, both changes apply. If they modified the same field, fall back to another strategy. Most complex but preserves most information.

**Manual review** flags conflicts for human resolution. Suitable for high-value data where automated resolution is unacceptable. Creates operational burden.
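The first three strategies reduce to a single comparison. A minimal sketch, assuming each record carries an ISO 8601 `updatedAt` field (the function and field names are illustrative):

```javascript
// Resolve a conflict between server and client copies of a record.
// 'strategy' is 'last-write-wins', 'server-wins', or 'client-wins'.
// Records are assumed to carry an ISO 8601 updatedAt timestamp.
function resolveSimpleConflict(strategy, serverRecord, clientRecord) {
  switch (strategy) {
    case 'server-wins':
      return serverRecord;
    case 'client-wins':
      return clientRecord;
    case 'last-write-wins':
      return new Date(clientRecord.updatedAt) > new Date(serverRecord.updatedAt)
        ? clientRecord
        : serverRecord;
    default:
      // Merge and manual review need more context than two records
      throw new Error(`Unknown strategy: ${strategy}`);
  }
}

const server = { status: 'open',   updatedAt: '2024-11-01T09:00:00Z' };
const client = { status: 'closed', updatedAt: '2024-11-03T14:00:00Z' };
console.log(resolveSimpleConflict('last-write-wins', server, client).status);
// 'closed'
```

Note that last-write-wins depends on device clocks being roughly correct; significant clock drift on field devices silently changes which side wins.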
- Implement last-write-wins in PouchDB. CouchDB and PouchDB automatically select a deterministic winning revision (based on revision history, not strictly the latest timestamp); losing revisions remain as conflict leaves until removed:
```javascript
// PouchDB automatic conflict resolution keeps winning revision
// Losing revisions become conflict leaves

// Check for conflicts
localDB.get('doc-id', { conflicts: true })
  .then(doc => {
    if (doc._conflicts) {
      console.log('Conflicts detected:', doc._conflicts);
      // Delete losing revisions
      return Promise.all(
        doc._conflicts.map(rev =>
          localDB.remove('doc-id', rev)
        )
      );
    }
  });
```

- Implement field-level merge for custom applications:
```javascript
function mergeRecords(serverRecord, clientRecord, baseRecord) {
  const merged = { ...serverRecord };
  const conflicts = [];

  for (const field of Object.keys(clientRecord)) {
    if (field === '_id' || field === '_rev' || field === 'updated_at') {
      continue;
    }

    const serverValue = serverRecord[field];
    const clientValue = clientRecord[field];
    const baseValue = baseRecord ? baseRecord[field] : undefined;

    // Client changed, server unchanged: use client value
    if (clientValue !== baseValue && serverValue === baseValue) {
      merged[field] = clientValue;
    }
    // Server changed, client unchanged: use server value (already in merged)
    else if (serverValue !== baseValue && clientValue === baseValue) {
      // No action needed, server value already in merged
    }
    // Both changed to same value: no conflict
    else if (serverValue === clientValue) {
      // No action needed
    }
    // Both changed to different values: conflict
    else if (serverValue !== baseValue && clientValue !== baseValue) {
      conflicts.push({
        field: field,
        serverValue: serverValue,
        clientValue: clientValue,
        baseValue: baseValue
      });
    }
  }

  return { merged, conflicts };
}
```

- Configure conflict notification so users know when conflicts require attention:
```javascript
function notifyConflict(record, conflicts) {
  // Store conflict for review
  const conflictEntry = {
    recordId: record._id,
    recordType: record.type,
    conflicts: conflicts,
    detectedAt: new Date().toISOString(),
    resolved: false
  };

  conflictStore.add(conflictEntry);

  // Show user notification
  if (Notification.permission === 'granted') {
    new Notification('Sync Conflict', {
      body: `Conflicting changes detected in ${record.type}. Review required.`,
      tag: `conflict-${record._id}`
    });
  }
}
```

### Configure offline period management
Offline period management controls how long systems can operate offline before requiring reconnection, and what happens when limits are exceeded.
- Set maximum offline duration in application configuration. The duration depends on data sensitivity and staleness tolerance:
| Data type | Recommended maximum | Rationale |
|---|---|---|
| Reference data (locations, services) | 30 days | Changes infrequently |
| Beneficiary lists | 14 days | Balance freshness with field needs |
| Case management data | 7 days | Higher change frequency |
| Financial/distribution data | 3 days | Requires near-real-time reconciliation |
| User credentials | 14 days | Security vs accessibility trade-off |

Implement duration checking in the application:
```javascript
function checkOfflineDuration() {
  const lastSync = localStorage.getItem('lastSyncTimestamp');
  if (!lastSync) {
    return { valid: false, reason: 'Never synchronised' };
  }

  const daysSinceSync =
    (Date.now() - new Date(lastSync)) / (1000 * 60 * 60 * 24);
  const maxOfflineDays = 14; // Configure per application

  if (daysSinceSync > maxOfflineDays) {
    return {
      valid: false,
      reason: `Last sync was ${Math.floor(daysSinceSync)} days ago (maximum: ${maxOfflineDays})`,
      daysSinceSync: daysSinceSync
    };
  }

  return { valid: true, daysSinceSync: daysSinceSync };
}
```

- Implement grace period warnings before hard cutoff:
```javascript
function getOfflineStatus() {
  const check = checkOfflineDuration();
  const warningThreshold = 11; // Warn 3 days before 14-day limit

  if (!check.valid) {
    return {
      status: 'expired',
      message: 'Offline period exceeded. Synchronisation required before continuing.',
      allowDataEntry: false
    };
  }

  if (check.daysSinceSync > warningThreshold) {
    return {
      status: 'warning',
      message: `Synchronise within ${14 - Math.floor(check.daysSinceSync)} days to continue offline access.`,
      allowDataEntry: true
    };
  }

  return {
    status: 'ok',
    message: `Last synchronised ${Math.floor(check.daysSinceSync)} days ago.`,
    allowDataEntry: true
  };
}
```

- Display offline status prominently in the application interface:
```
+------------------------------------------------------------------+
| [OFFLINE MODE - Last sync: 2024-11-10]                           |
|                                                                  |
| +--------------------------------------------------------------+ |
| | Warning: 11 days since last synchronisation.                 | |
| | Connect within 3 days to maintain offline access.            | |
| +--------------------------------------------------------------+ |
|                                                                  |
| +--------------------------------------------------------------+ |
| |                                                              | |
| |                  [Application Interface]                     | |
| |                                                              | |
| +--------------------------------------------------------------+ |
|                                                                  |
| Queued changes: 47  |  Storage used: 1.2 GB / 5 GB               |
+------------------------------------------------------------------+
```

Figure 2: Offline status display showing sync warning and queue status
### Test offline operation
Testing confirms the configuration works before field deployment. Test the complete offline workflow, not just individual components.
- Create a test scenario that exercises all offline functionality:
```
Test Scenario: Complete Offline Workflow

Preconditions:
- Device configured per procedures above
- Test user account with appropriate permissions
- Sample data loaded (minimum 100 records)
- Known server state (snapshot for comparison)

Test Steps:
1. Verify current sync status (all data present)
2. Disconnect network (physical or software)
3. Authenticate to device (cached credentials)
4. Launch application
5. Verify existing data accessible
6. Create new record
7. Modify existing record
8. Delete record (if supported offline)
9. Close and relaunch application
10. Verify changes persisted locally
11. Reconnect network
12. Observe automatic sync
13. Verify changes appear on server
14. Verify server changes appear on device

Expected Results:
- Steps 1-10 complete without errors while offline
- Steps 11-14 complete within 5 minutes of reconnection
- No data loss in either direction
```

- Execute the test and document results:
```
# Disconnect network
networksetup -setairportpower en0 off   # macOS
# or
Disable-NetAdapter -Name "Wi-Fi"        # Windows PowerShell

# Verify offline (should fail)
ping -c 1 8.8.8.8 || echo "Confirmed offline"

# Perform application tests...

# Reconnect network
networksetup -setairportpower en0 on    # macOS
# or
Enable-NetAdapter -Name "Wi-Fi"         # Windows PowerShell

# Monitor sync completion
tail -f /path/to/application/sync.log
```

- Test the conflict resolution path:
```
Conflict Test Scenario:

1. Create record on Device A while online
2. Sync completes to server
3. Disconnect Device A
4. Modify record on Device A (offline)
5. Modify same record on server (different field)
6. Reconnect Device A
7. Observe conflict resolution

Expected: Field-level merge preserves both changes

Repeat with same-field modification:
Expected: Configured strategy applies (last-write-wins, manual review, etc.)
```

- Test offline duration limits:
```bash
# Simulate extended offline by adjusting system clock (test environment only)
# WARNING: Do not do this on production devices

# macOS - set date 15 days in future
sudo date -v+15d

# Launch application
# Expected: "Offline period exceeded" warning, data entry blocked

# Reset date
sudo sntp -sS time.apple.com
```

## Verification
After completing configuration, verify the system functions correctly offline:
```
# 1. Verify authentication cache
# Disconnect network, lock screen, unlock with password
# Success: User signs in without network error

# 2. Verify local storage
# Check IndexedDB (browser console)
indexedDB.databases().then(dbs => console.table(dbs));
# Expected: Application database listed with non-zero size

# 3. Verify Service Worker (browser console)
navigator.serviceWorker.getRegistrations().then(regs => console.log(regs));
# Expected: Service worker registered for application scope

# 4. Verify sync queue
# Check pending operations count in application
# Create record offline, verify queue count increments

# 5. Verify reconnection sync
# Reconnect network, verify queue count decrements
# Check server for new record
```

Run the verification checklist:
| Item | Verification method | Expected result |
|---|---|---|
| Cached authentication | Sign in while offline | Successful sign-in |
| Local data available | Navigate to records while offline | All synced records visible |
| Create record offline | Complete data entry form | Record saved locally |
| Modify record offline | Edit existing record | Changes saved locally |
| Queue status visible | Check application status | Queue count accurate |
| Sync on reconnection | Restore network connectivity | Queue empties within 5 minutes |
| Data on server | Query server database | Offline changes present |
| Conflict handling | Trigger intentional conflict | Resolution per configured strategy |
## Troubleshooting
| Symptom | Cause | Resolution |
|---|---|---|
| "Cannot sign in" when offline | Cached credential not established | Sign in while online first, verify `dsregcmd /status` shows PRT cached |
| Application data missing offline | Data not synced before going offline | Verify sync completed; check lastSyncTimestamp in storage |
| "Storage quota exceeded" error | IndexedDB quota full | Clear unnecessary data; request persistent storage permission |
| Service Worker not registering | HTTPS required | Service Workers require secure context; configure HTTPS or use localhost |
| Changes not syncing when online | Sync not triggered | Check network event listeners; verify periodic connectivity check runs |
| Sync fails with 409 Conflict | Unresolved conflict blocking queue | Implement conflict resolution; check conflict store for pending items |
| Data corruption after sync | Incomplete sync interrupted | Implement transaction wrapping; verify all-or-nothing sync batches |
| Duplicate records after sync | Missing ID reconciliation | Use server-assigned IDs or UUIDs; verify ID mapping table |
| "Offline period exceeded" unexpectedly | Clock drift or manual change | Sync device time via NTP; check lastSyncTimestamp accuracy |
| Authentication fails after password change | Cached credential invalidated | Reconnect and sign in with new password to refresh cache |
| Mobile app stuck “syncing” | Background process killed by OS | Configure sync to complete in foreground; reduce batch size |
| PouchDB revision conflicts accumulating | Conflicts not being resolved | Implement conflict resolution routine; run periodic conflict cleanup |
| Outlook not showing offline email | Cached mode not enabled | Verify Cached Exchange Mode in account settings; rebuild OST if corrupted |
| OneDrive files unavailable offline | Files On-Demand enabled | Disable Files On-Demand or right-click files to “Always keep on this device” |
| ODK forms not submitting after reconnect | Auto-send disabled | Enable auto-send or manually submit from Send Finalized Forms |
### Storage troubleshooting
If storage issues persist, analyse usage:
```javascript
// Browser storage analysis
async function analyzeStorage() {
  const estimate = await navigator.storage.estimate();
  const dbs = await indexedDB.databases();

  console.log('Storage Overview:');
  console.log(`  Total quota: ${Math.round(estimate.quota / 1024 / 1024)} MB`);
  console.log(`  Used: ${Math.round(estimate.usage / 1024 / 1024)} MB`);
  console.log(`  Available: ${Math.round((estimate.quota - estimate.usage) / 1024 / 1024)} MB`);
  console.log('  Databases:', dbs.map(d => d.name).join(', '));
}
```

### Sync troubleshooting
Enable verbose sync logging to diagnose issues:
```javascript
// PouchDB sync debugging
PouchDB.debug.enable('pouchdb:http');

// Custom sync logging
sync.on('change', info => {
  console.log('Sync change:', JSON.stringify(info, null, 2));
});

sync.on('error', err => {
  console.error('Sync error:', {
    name: err.name,
    message: err.message,
    status: err.status,
    docId: err.docId
  });
});
```

## See also
- Offline Data Architecture provides architectural patterns for offline-first system design
- Data Synchronisation Setup covers server-side sync configuration
- Sync Conflict Resolution details conflict handling procedures
- Intermittent Connectivity Patterns documents design patterns for unreliable networks
- Low-Bandwidth Optimisation addresses performance on constrained connections