
Offline System Configuration

Offline system configuration prepares applications and endpoints to function when network connectivity is unavailable. This task covers authentication caching so users can sign in without reaching identity providers, local data storage so work continues during outages, and synchronisation queues so changes made offline merge correctly when connectivity returns. Complete this configuration before deploying devices to field locations where connectivity is intermittent or absent for extended periods.

The outcome is a system that allows users to authenticate, access data, create and modify records, and perform core workflows entirely offline for a defined period, then synchronise all changes when connectivity resumes without data loss or corruption.

Prerequisites

| Requirement | Detail |
| --- | --- |
| Administrative access | Local administrator on Windows/macOS endpoints, or MDM with configuration profile deployment capability |
| Identity provider | Microsoft Entra ID, Okta, or Google Workspace with offline authentication support enabled at tenant level |
| Application compatibility | Applications must support offline operation; verify before proceeding |
| Storage capacity | Minimum 20 GB free space for offline cache on each endpoint; 50 GB recommended for data-intensive applications |
| Synchronisation backend | CouchDB, PouchDB-compatible server, or application-native sync service operational |
| Conflict resolution strategy | Documented policy for handling conflicting edits (last-write-wins, merge, manual review) |
| Test environment | Non-production endpoint and server for validation before field deployment |
| Time allocation | 2-4 hours per application for initial configuration; 30 minutes per endpoint for deployment |

Verify the endpoint meets storage requirements before beginning:

Terminal window
# Windows PowerShell
Get-PSDrive C | Select-Object @{N='FreeGB';E={[math]::Round($_.Free/1GB,2)}}
# macOS/Linux
df -h / | awk 'NR==2 {print $4}'

Expected output shows at least 20GB available. Endpoints with less than 20GB free will experience cache eviction that degrades offline functionality.

Procedure

Configure authentication caching

Authentication caching stores credentials locally so users can sign in when the identity provider is unreachable. The cached credential allows access to the device and locally-cached application data but cannot generate new tokens for cloud services until connectivity returns.

  1. Enable Windows offline sign-in caching by configuring the cached logon count. Open Group Policy Editor (gpedit.msc) on a standalone machine or create a Group Policy Object for domain-joined devices:
Computer Configuration
└── Windows Settings
└── Security Settings
└── Local Policies
└── Security Options
└── Interactive logon: Number of previous logons to cache

Set the value to 10. This allows the 10 most recent users to sign in offline. Values above 25 are not recommended as they increase the credential theft attack surface.

For Microsoft Entra ID joined devices, configure Primary Refresh Token (PRT) caching:

Terminal window
# Verify PRT status
dsregcmd /status | Select-String -Pattern "PRT|AzureAd"

Expected output includes AzureAdPrt : YES. If the PRT shows NO, the device is not correctly Entra ID joined and offline authentication will fail.

  2. Configure macOS offline authentication by enabling the mobile account feature. For devices bound to a directory service:
Terminal window
# Create mobile account for directory user
sudo /System/Library/CoreServices/ManagedClient.app/Contents/Resources/createmobileaccount -n username

For Entra ID joined Macs using the Microsoft Enterprise SSO plug-in, verify Platform SSO is enabled:

Terminal window
# Check Platform SSO status
app-sso platform -s

The output should show the organisation’s identity provider domain registered. Platform SSO caches authentication tokens for 14 days by default.

  3. Set the offline authentication validity period. Users can authenticate offline for a limited time before requiring network verification. Configure this in your identity provider:

    For Entra ID, set the sign-in frequency in Conditional Access:

Entra admin centre
└── Protection
└── Conditional Access
└── Policies
└── [Your policy]
└── Session
└── Sign-in frequency: 14 days

For Okta, configure the Global Session Policy:

Okta admin console
└── Security
└── Global Session Policy
└── Default Rule
└── Maximum session lifetime: 336 hours (14 days)

The 14-day period balances security against field operational requirements. Shorter periods (7 days) suit lower-risk environments; longer periods (30 days) may be necessary for extended field deployments but increase risk if devices are compromised.

  4. Test offline authentication before deployment. With network connectivity active, sign in to the device normally. Then simulate offline conditions:
Terminal window
# Disable network adapter (Windows PowerShell, elevated)
Disable-NetAdapter -Name "Wi-Fi" -Confirm:$false
# Or on macOS
networksetup -setairportpower en0 off

Lock the device (Windows+L or Ctrl+Command+Q), then sign in again. Success confirms cached credentials are functional. Re-enable connectivity after testing:

Terminal window
# Windows
Enable-NetAdapter -Name "Wi-Fi" -Confirm:$false
# macOS
networksetup -setairportpower en0 on

Password changes invalidate cache

When users change their password while connected, the cached credential updates automatically. If a password is changed from another device while the field device is offline, the user cannot sign in until connectivity returns. Coordinate password resets with field deployment schedules.

Configure local data storage

Local data storage creates an offline copy of application data on the endpoint. The storage mechanism varies by application architecture: browser-based applications use IndexedDB or the Cache API, while native applications use local databases or file caches.

  1. Configure browser-based application storage. Modern web applications built for offline use store data in IndexedDB. The default quota varies by browser and available disk space:

    | Browser | Default quota | Configuration method |
    | --- | --- | --- |
    | Chrome/Edge | 60% of disk or 6 GB min | Cannot increase; design app within limits |
    | Firefox | 50% of disk | dom.indexedDB.storageOption.enabled in about:config |
    | Safari | 1 GB | User prompt for additional storage |

    For Chrome and Edge, verify the storage quota available to your application using Developer Tools (F12):

// Run in browser console
navigator.storage.estimate().then(estimate => {
  console.log(`Quota: ${Math.round(estimate.quota / 1024 / 1024)} MB`);
  console.log(`Usage: ${Math.round(estimate.usage / 1024 / 1024)} MB`);
});

Request persistent storage to prevent the browser from evicting cached data under storage pressure:

// Application should call this on first load
if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persist().then(granted => {
    console.log(`Persistent storage: ${granted ? 'granted' : 'denied'}`);
  });
}
  2. Configure Service Worker caching for application assets. The Service Worker intercepts network requests and serves cached responses when offline. Register the Service Worker in your application:
// In main application JavaScript
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(registration => {
      console.log('SW registered:', registration.scope);
    })
    .catch(error => {
      console.log('SW registration failed:', error);
    });
}

The Service Worker script (sw.js) defines caching strategy:

const CACHE_NAME = 'app-cache-v1';
const OFFLINE_URLS = [
  '/',
  '/index.html',
  '/app.js',
  '/styles.css',
  '/offline.html'
];

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(cache => cache.addAll(OFFLINE_URLS))
  );
});

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
      .then(response => response || fetch(event.request))
      .catch(() => caches.match('/offline.html'))
  );
});
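When a new application version ships, caches created by the previous Service Worker linger and consume storage. A sketch of a matching activate handler (not part of the original sw.js above) that deletes them; the version-bumped cache name is an assumed convention:

```javascript
const CACHE_NAME = 'app-cache-v2'; // bumped on each release (assumed convention)

// Pure helper: which cache names are stale and safe to delete
function staleCaches(names, current) {
  return names.filter(name => name !== current);
}

// Service Worker context only; guarded so the helper can be reused elsewhere
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('activate', event => {
    event.waitUntil(
      caches.keys().then(names =>
        Promise.all(staleCaches(names, CACHE_NAME).map(name => caches.delete(name)))
      )
    );
  });
}
```

Without this cleanup, each deployment adds another full copy of the asset cache, which matters on the storage-constrained devices described above.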
  3. Configure native application offline storage. Applications like KoboToolbox, ODK Collect, and CommCare have built-in offline storage that requires explicit configuration.

    For KoboToolbox/ODK Collect, configure in Settings:

ODK Collect
└── Settings
└── Form management
└── Blank form update mode: Manual
└── Auto-send: Off (prevents failed sends on poor connectivity)
└── User interface
└── Navigation: Swipes (works better offline)

Download forms while connected:

Main menu
└── Get Blank Form
└── Select All
└── Get Selected

Verify forms downloaded by checking the device storage:

Terminal window
# Android, via adb
adb shell ls /storage/emulated/0/Android/data/org.odk.collect.android/files/projects/*/forms/

For CommCare, configure offline sync depth:

CommCare HQ
└── Project Settings
└── Project Settings
└── Advanced Settings
└── Days of data to sync: 30

The sync depth determines how many days of cases download for offline access. Setting 30 days downloads all cases modified in the past month. Reduce to 14 days if storage is constrained; increase to 90 days for long field deployments.

  4. Configure Microsoft 365 offline access. OneDrive Files On-Demand reduces storage requirements but requires connectivity. For offline field use, disable Files On-Demand and sync specific folders:
Terminal window
# Disable Files On-Demand (requires OneDrive restart)
Set-ItemProperty -Path "HKCU:\Software\Microsoft\OneDrive" `
-Name "FilesOnDemandEnabled" -Value 0 -Type DWord
# Restart OneDrive
Stop-Process -Name "OneDrive" -Force
Start-Process "$env:LOCALAPPDATA\Microsoft\OneDrive\OneDrive.exe"

Configure Outlook Cached Exchange Mode for offline email:

Outlook
└── File
└── Account Settings
└── Account Settings
└── [Select account]
└── Change
└── Use Cached Exchange Mode: Enabled
└── Download email for the past: 12 months

The 12-month setting downloads approximately 2-5GB depending on email volume. Reduce to 3 months for storage-constrained devices.

Configure synchronisation queues

Synchronisation queues store changes made offline and transmit them when connectivity returns. The queue must persist across application restarts, handle transmission failures gracefully, and manage conflicts when the same record was modified both offline and on the server.

  1. Understand the queue architecture before configuration. A properly designed offline queue has three components:
OFFLINE QUEUE ARCHITECTURE

+------------------+     +------------------+     +----------------+
|   Application    |     |   Queue Store    |     |    Network     |
|                  |     |                  |     |    Monitor     |
|  User creates    |     |  - Pending ops   |     |                |
|  or modifies     +---->|  - Timestamps    +---->|  Detects       |
|  record          |     |  - Retry count   |     |  online        |
|                  |     |  - Conflict data |     |                |
+------------------+     +--------+---------+     +--------+-------+
                                  |                        |
                                  v                        v
                         +--------+---------+     +--------+----+
                         |   Sync Engine    |<----+   Trigger   |
                         |                  |     |   (online)  |
                         |  - Batch ops     |     +-------------+
                         |  - Handle errors |
                         |  - Resolve       |
                         |    conflicts     |
                         +--------+---------+
                                  |
                                  v
                         +--------+---------+
                         |      Server      |
                         |                  |
                         |  - Apply changes |
                         |  - Return status |
                         |  - Send updates  |
                         +------------------+

Figure 1: Offline queue components showing data flow from application through queue to server

The queue store must use persistent storage (IndexedDB, SQLite, or filesystem) rather than memory, as queued operations must survive application restarts and device reboots.

  2. Configure PouchDB for browser-based applications. PouchDB provides offline-first storage that synchronises with CouchDB-compatible backends:
// Initialise local database
const localDB = new PouchDB('field-data');

// Configure remote database
const remoteDB = new PouchDB('https://couchdb.example.org/field-data', {
  auth: {
    username: 'fielduser',
    password: 'secure-password'
  }
});

// Configure bidirectional sync with retry
const sync = localDB.sync(remoteDB, {
  live: true,        // Continuous sync when online
  retry: true,       // Retry failed syncs
  batch_size: 100,   // Documents per batch
  batches_limit: 5   // Concurrent batches
});

// Handle sync events. For sync (as opposed to one-way replication),
// the change event payload nests the documents under info.change.
sync.on('change', info => {
  console.log(`Synced: ${info.change.docs.length} documents (${info.direction})`);
});
sync.on('paused', err => {
  if (err) {
    console.log('Sync paused due to error:', err);
  } else {
    console.log('Sync complete, waiting for changes');
  }
});
sync.on('error', err => {
  console.error('Sync failed:', err);
});

The batch_size of 100 and batches_limit of 5 prevent overwhelming limited-bandwidth connections. For satellite links, reduce to batch_size: 25 and batches_limit: 2.
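These two profiles can be selected at runtime from a measured link speed. A sketch; the 64 kbps cutoff is an illustrative assumption, not a PouchDB default:

```javascript
// Choose PouchDB sync options for the measured link speed.
// The threshold below is an assumed cutoff for satellite-class links.
function syncOptionsForLink(kbps) {
  const constrained = kbps < 64;
  return {
    live: true,
    retry: true,
    batch_size: constrained ? 25 : 100,
    batches_limit: constrained ? 2 : 5
  };
}
```

`localDB.sync(remoteDB, syncOptionsForLink(measuredKbps))` would then replace the fixed options shown above.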

  3. Configure queue persistence for custom applications. If building custom offline functionality, implement a queue table:
-- SQLite schema for offline queue
CREATE TABLE sync_queue (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  operation TEXT NOT NULL,        -- 'create', 'update', 'delete'
  entity_type TEXT NOT NULL,      -- 'beneficiary', 'distribution', etc.
  entity_id TEXT NOT NULL,        -- UUID of the record
  payload TEXT NOT NULL,          -- JSON of the change
  created_at TEXT NOT NULL,       -- ISO 8601 timestamp
  attempts INTEGER DEFAULT 0,     -- Retry count
  last_attempt TEXT,              -- Last sync attempt timestamp
  status TEXT DEFAULT 'pending',  -- 'pending', 'syncing', 'failed', 'conflict'
  error_message TEXT,             -- Last error if failed
  conflict_data TEXT              -- Server version if conflict
);

CREATE INDEX idx_queue_status ON sync_queue(status);
CREATE INDEX idx_queue_entity ON sync_queue(entity_type, entity_id);

Queue entries when offline:

async function queueOperation(operation, entityType, entityId, payload) {
  const db = await openDatabase();
  await db.run(
    `INSERT INTO sync_queue
       (operation, entity_type, entity_id, payload, created_at)
     VALUES (?, ?, ?, ?, ?)`,
    [operation, entityType, entityId, JSON.stringify(payload), new Date().toISOString()]
  );
}
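Before transmission, the queue can be compacted so that repeated edits to one record send a single operation. A sketch under two assumptions: payloads carry the full record (not deltas), and the server treats incoming operations as upserts. Entries follow the sync_queue row shape above:

```javascript
// Keep only the newest pending entry per entity. A later 'delete'
// supersedes earlier creates/updates for the same record. ISO 8601
// created_at strings compare correctly as strings.
function compactQueue(entries) {
  const latest = new Map();
  for (const entry of entries) {
    const key = `${entry.entity_type}:${entry.entity_id}`;
    const current = latest.get(key);
    if (!current || entry.created_at >= current.created_at) {
      latest.set(key, entry);
    }
  }
  return [...latest.values()];
}
```

If the server distinguishes create from update strictly, or payloads are deltas, compaction needs to merge payloads rather than discard older entries.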
  4. Configure the network monitor to trigger synchronisation. The application must detect connectivity changes and initiate sync:
// Browser-based network detection
window.addEventListener('online', () => {
  console.log('Connection restored, starting sync');
  startSync();
});
window.addEventListener('offline', () => {
  console.log('Connection lost, queueing operations');
});

// More reliable: periodic connectivity check.
// fetch() has no timeout option, so abort the probe after 5 seconds.
async function checkConnectivity() {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5000);
  try {
    const response = await fetch('/api/ping', {
      method: 'HEAD',
      cache: 'no-store',
      signal: controller.signal
    });
    return response.ok;
  } catch {
    return false;
  } finally {
    clearTimeout(timer);
  }
}

// Check every 30 seconds
setInterval(async () => {
  const online = await checkConnectivity();
  if (online && hasQueuedOperations()) {
    startSync();
  }
}, 30000);

The navigator.onLine property and online/offline events are unreliable indicators of actual connectivity. They indicate network interface state, not internet reachability. The periodic fetch check provides accurate connectivity status.
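Flapping links pass a single probe and then drop mid-sync. One hardening step is to require several consecutive successful probes before triggering sync; a sketch, with the threshold of 3 as an assumption:

```javascript
// Declares the link stable only after `required` consecutive
// successful probes; any failure resets the streak.
class ConnectivityTracker {
  constructor(required = 3) {
    this.required = required;
    this.streak = 0;
  }

  // Feed each probe result in; returns true once the link is stable.
  record(probeSucceeded) {
    this.streak = probeSucceeded ? this.streak + 1 : 0;
    return this.streak >= this.required;
  }
}
```

Wired into the 30-second interval, the trigger becomes `if (tracker.record(await checkConnectivity()) && hasQueuedOperations()) startSync();` — three successful probes (about 90 seconds of stability) before the queue starts draining.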

  5. Configure retry logic for failed synchronisation attempts. Exponential backoff prevents overwhelming the server when connectivity is unstable:
async function syncWithRetry(maxAttempts = 5) {
  let attempt = 0;
  let delay = 1000; // Start with 1 second
  while (attempt < maxAttempts) {
    try {
      await performSync();
      return { success: true };
    } catch (error) {
      attempt++;
      if (attempt >= maxAttempts) {
        return { success: false, error: error.message };
      }
      // Exponential backoff between attempts: 1s, 2s, 4s, 8s
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= 2;
    }
  }
}
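When many devices regain connectivity at once (for example after a shared outage), plain exponential backoff makes them all retry in lockstep. Adding full jitter spreads the load; a sketch, with the base and cap as illustrative values:

```javascript
// Full-jitter backoff: delay drawn uniformly from [0, min(cap, base * 2^attempt)).
// `random` is injectable for testing; defaults to Math.random.
function jitteredDelay(attempt, base = 1000, cap = 60000, random = Math.random) {
  const ceiling = Math.min(cap, base * 2 ** attempt);
  return Math.floor(random() * ceiling);
}
```

In syncWithRetry above, the `setTimeout(resolve, delay)` sleep could use `jitteredDelay(attempt)` instead of the doubling `delay` variable.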

Configure conflict handling

Conflicts occur when the same record is modified both offline and on the server. The conflict resolution strategy must be configured before deployment, as unresolved conflicts cause data loss or require manual intervention.

  1. Select a conflict resolution strategy appropriate to your data:

    Last-write-wins applies the most recent change regardless of origin. Suitable for data where recency is more important than completeness, such as status updates or location data. Simple to implement but can lose information.

    Server-wins always preserves the server version, discarding offline changes in conflict. Suitable for reference data that should not be modified offline. Prevents corruption but frustrates users who lose work.

    Client-wins always preserves the offline change. Suitable for data entry scenarios where field staff are the authoritative source. Can overwrite legitimate server corrections.

    Merge combines changes at the field level. If offline and server changes modified different fields of the same record, both changes apply. If they modified the same field, fall back to another strategy. Most complex but preserves most information.

    Manual review flags conflicts for human resolution. Suitable for high-value data where automated resolution is unacceptable. Creates operational burden.

  2. Implement last-write-wins in PouchDB. This is the default CouchDB/PouchDB behaviour using document revisions:

// PouchDB automatic conflict resolution keeps a winning revision;
// losing revisions become conflict leaves.
// Check for conflicts:
localDB.get('doc-id', { conflicts: true })
  .then(doc => {
    if (doc._conflicts) {
      console.log('Conflicts detected:', doc._conflicts);
      // Delete losing revisions
      return Promise.all(
        doc._conflicts.map(rev =>
          localDB.remove('doc-id', rev)
        )
      );
    }
  });
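Outside PouchDB, last-write-wins reduces to a timestamp comparison. A sketch assuming both versions carry an ISO 8601 updated_at field; ties go to the server, a deliberate bias toward the authoritative copy:

```javascript
// ISO 8601 timestamps in the same timezone compare correctly as strings.
// Assumes both records carry an updated_at field.
function resolveLastWriteWins(serverRecord, clientRecord) {
  return clientRecord.updated_at > serverRecord.updated_at
    ? clientRecord
    : serverRecord;
}
```

Note that this depends on reasonably accurate device clocks; the clock-drift row in the troubleshooting table applies here too.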
  3. Implement field-level merge for custom applications:
function mergeRecords(serverRecord, clientRecord, baseRecord) {
  const merged = { ...serverRecord };
  const conflicts = [];
  for (const field of Object.keys(clientRecord)) {
    if (field === '_id' || field === '_rev' || field === 'updated_at') {
      continue;
    }
    const serverValue = serverRecord[field];
    const clientValue = clientRecord[field];
    const baseValue = baseRecord ? baseRecord[field] : undefined;
    if (clientValue !== baseValue && serverValue === baseValue) {
      // Client changed, server unchanged: use client value
      merged[field] = clientValue;
    } else if (serverValue !== baseValue && clientValue === baseValue) {
      // Server changed, client unchanged: server value already in merged
    } else if (serverValue === clientValue) {
      // Both changed to the same value: no conflict
    } else if (serverValue !== baseValue && clientValue !== baseValue) {
      // Both changed to different values: conflict
      conflicts.push({
        field: field,
        serverValue: serverValue,
        clientValue: clientValue,
        baseValue: baseValue
      });
    }
  }
  return { merged, conflicts };
}
  4. Configure conflict notification so users know when conflicts require attention:
function notifyConflict(record, conflicts) {
  // Store conflict for review
  const conflictEntry = {
    recordId: record._id,
    recordType: record.type,
    conflicts: conflicts,
    detectedAt: new Date().toISOString(),
    resolved: false
  };
  conflictStore.add(conflictEntry);
  // Show user notification
  if (Notification.permission === 'granted') {
    new Notification('Sync Conflict', {
      body: `Conflicting changes detected in ${record.type}. Review required.`,
      tag: `conflict-${record._id}`
    });
  }
}
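Once a reviewer has chosen a side for each conflicting field, the resolution can be applied mechanically. A sketch; `choices` is an assumed structure mapping field names to 'server' or 'client', and the conflict entries follow the shape produced by mergeRecords above:

```javascript
// Apply a reviewer's per-field choices to produce the resolved record.
// Fields without an explicit choice keep the server value.
function applyResolution(serverRecord, conflicts, choices) {
  const resolved = { ...serverRecord };
  for (const conflict of conflicts) {
    if (choices[conflict.field] === 'client') {
      resolved[conflict.field] = conflict.clientValue;
    }
  }
  return resolved;
}
```

The resolved record would then be written back and the conflict entry marked resolved in the conflict store.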

Configure offline period management

Offline period management controls how long systems can operate offline before requiring reconnection, and what happens when limits are exceeded.

  1. Set maximum offline duration in application configuration. The duration depends on data sensitivity and staleness tolerance:

    | Data type | Recommended maximum | Rationale |
    | --- | --- | --- |
    | Reference data (locations, services) | 30 days | Changes infrequently |
    | Beneficiary lists | 14 days | Balance freshness with field needs |
    | Case management data | 7 days | Higher change frequency |
    | Financial/distribution data | 3 days | Requires near-real-time reconciliation |
    | User credentials | 14 days | Security vs accessibility trade-off |

    Implement duration checking in the application:

function checkOfflineDuration() {
  const lastSync = localStorage.getItem('lastSyncTimestamp');
  if (!lastSync) {
    return { valid: false, reason: 'Never synchronised' };
  }
  const daysSinceSync = (Date.now() - new Date(lastSync)) / (1000 * 60 * 60 * 24);
  const maxOfflineDays = 14; // Configure per application
  if (daysSinceSync > maxOfflineDays) {
    return {
      valid: false,
      reason: `Last sync was ${Math.floor(daysSinceSync)} days ago (maximum: ${maxOfflineDays})`,
      daysSinceSync: daysSinceSync
    };
  }
  return { valid: true, daysSinceSync: daysSinceSync };
}
  2. Implement grace period warnings before hard cutoff:
function getOfflineStatus() {
  const check = checkOfflineDuration();
  const warningThreshold = 11; // Warn 3 days before the 14-day limit
  if (!check.valid) {
    return {
      status: 'expired',
      message: 'Offline period exceeded. Synchronisation required before continuing.',
      allowDataEntry: false
    };
  }
  if (check.daysSinceSync > warningThreshold) {
    return {
      status: 'warning',
      message: `Synchronise within ${14 - Math.floor(check.daysSinceSync)} days to continue offline access.`,
      allowDataEntry: true
    };
  }
  return {
    status: 'ok',
    message: `Last synchronised ${Math.floor(check.daysSinceSync)} days ago.`,
    allowDataEntry: true
  };
}
  3. Display offline status prominently in the application interface:
+------------------------------------------------------------------+
| [OFFLINE MODE - Last sync: 2024-11-10]                           |
|                                                                  |
| +--------------------------------------------------------------+ |
| | Warning: 11 days since last synchronisation.                 | |
| | Connect within 3 days to maintain offline access.            | |
| +--------------------------------------------------------------+ |
|                                                                  |
| +--------------------------------------------------------------+ |
| |                                                              | |
| |                   [Application Interface]                    | |
| |                                                              | |
| +--------------------------------------------------------------+ |
|                                                                  |
| Queued changes: 47 | Storage used: 1.2 GB / 5 GB                 |
+------------------------------------------------------------------+

Figure 2: Offline status display showing sync warning and queue status

Test offline operation

Testing confirms the configuration works before field deployment. Test the complete offline workflow, not just individual components.

  1. Create a test scenario that exercises all offline functionality:
Test Scenario: Complete Offline Workflow
Preconditions:
- Device configured per procedures above
- Test user account with appropriate permissions
- Sample data loaded (minimum 100 records)
- Known server state (snapshot for comparison)
Test Steps:
1. Verify current sync status (all data present)
2. Disconnect network (physical or software)
3. Authenticate to device (cached credentials)
4. Launch application
5. Verify existing data accessible
6. Create new record
7. Modify existing record
8. Delete record (if supported offline)
9. Close and relaunch application
10. Verify changes persisted locally
11. Reconnect network
12. Observe automatic sync
13. Verify changes appear on server
14. Verify server changes appear on device
Expected Results:
- Steps 1-10 complete without errors while offline
- Step 11-14 complete within 5 minutes of reconnection
- No data loss in either direction
  2. Execute the test and document results:
Terminal window
# Disconnect network
networksetup -setairportpower en0 off # macOS
# or
Disable-NetAdapter -Name "Wi-Fi" # Windows PowerShell
# Verify offline (should fail)
ping -c 1 8.8.8.8 || echo "Confirmed offline"
# Perform application tests...
# Reconnect network
networksetup -setairportpower en0 on # macOS
# or
Enable-NetAdapter -Name "Wi-Fi" # Windows PowerShell
# Monitor sync completion
tail -f /path/to/application/sync.log
  3. Test the conflict resolution path:
Conflict Test Scenario:
1. Create record on Device A while online
2. Sync completes to server
3. Disconnect Device A
4. Modify record on Device A (offline)
5. Modify same record on server (different field)
6. Reconnect Device A
7. Observe conflict resolution
Expected: Field-level merge preserves both changes
Repeat with same-field modification:
Expected: Configured strategy applies (last-write-wins, manual review, etc.)
  4. Test offline duration limits:
Terminal window
# Simulate extended offline by adjusting system clock (test environment only)
# WARNING: Do not do this on production devices
# macOS - set date 15 days in future
sudo date -v+15d
# Launch application
# Expected: "Offline period exceeded" warning, data entry blocked
# Reset date
sudo sntp -sS time.apple.com

Verification

After completing configuration, verify the system functions correctly offline:

Terminal window
# 1. Verify authentication cache
# Disconnect network, lock screen, unlock with password
# Success: User signs in without network error
# 2. Verify local storage
# Check IndexedDB (browser console)
indexedDB.databases().then(dbs => console.table(dbs));
# Expected: Application database listed with non-zero size
# 3. Verify Service Worker (browser console)
navigator.serviceWorker.getRegistrations().then(regs => console.log(regs));
# Expected: Service worker registered for application scope
# 4. Verify sync queue
# Check pending operations count in application
# Create record offline, verify queue count increments
# 5. Verify reconnection sync
# Reconnect network, verify queue count decrements
# Check server for new record

Run the verification checklist:

| Item | Verification method | Expected result |
| --- | --- | --- |
| Cached authentication | Sign in while offline | Successful sign-in |
| Local data available | Navigate to records while offline | All synced records visible |
| Create record offline | Complete data entry form | Record saved locally |
| Modify record offline | Edit existing record | Changes saved locally |
| Queue status visible | Check application status | Queue count accurate |
| Sync on reconnection | Restore network connectivity | Queue empties within 5 minutes |
| Data on server | Query server database | Offline changes present |
| Conflict handling | Trigger intentional conflict | Resolution per configured strategy |

Troubleshooting

| Symptom | Cause | Resolution |
| --- | --- | --- |
| "Cannot sign in" when offline | Cached credential not established | Sign in while online first; verify dsregcmd /status shows the PRT cached |
| Application data missing offline | Data not synced before going offline | Verify sync completed; check lastSyncTimestamp in storage |
| "Storage quota exceeded" error | IndexedDB quota full | Clear unnecessary data; request persistent storage permission |
| Service Worker not registering | HTTPS required | Service Workers require a secure context; configure HTTPS or use localhost |
| Changes not syncing when online | Sync not triggered | Check network event listeners; verify the periodic connectivity check runs |
| Sync fails with 409 Conflict | Unresolved conflict blocking queue | Implement conflict resolution; check conflict store for pending items |
| Data corruption after sync | Incomplete sync interrupted | Implement transaction wrapping; verify all-or-nothing sync batches |
| Duplicate records after sync | Missing ID reconciliation | Use server-assigned IDs or UUIDs; verify ID mapping table |
| "Offline period exceeded" unexpectedly | Clock drift or manual change | Sync device time via NTP; check lastSyncTimestamp accuracy |
| Authentication fails after password change | Cached credential invalidated | Reconnect and sign in with the new password to refresh the cache |
| Mobile app stuck "syncing" | Background process killed by OS | Configure sync to complete in foreground; reduce batch size |
| PouchDB revision conflicts accumulating | Conflicts not being resolved | Implement a conflict resolution routine; run periodic conflict cleanup |
| Outlook not showing offline email | Cached mode not enabled | Verify Cached Exchange Mode in account settings; rebuild the OST file if corrupted |
| OneDrive files unavailable offline | Files On-Demand enabled | Disable Files On-Demand or right-click files and select "Always keep on this device" |
| ODK forms not submitting after reconnect | Auto-send disabled | Enable auto-send or manually submit from Send Finalized Forms |

Storage troubleshooting

If storage issues persist, analyse usage:

// Browser storage analysis
async function analyzeStorage() {
  const estimate = await navigator.storage.estimate();
  const dbs = await indexedDB.databases();
  console.log('Storage Overview:');
  console.log(`  Total quota: ${Math.round(estimate.quota / 1024 / 1024)} MB`);
  console.log(`  Used: ${Math.round(estimate.usage / 1024 / 1024)} MB`);
  console.log(`  Available: ${Math.round((estimate.quota - estimate.usage) / 1024 / 1024)} MB`);
  console.log('  Databases:', dbs.map(d => d.name).join(', '));
}

Sync troubleshooting

Enable verbose sync logging to diagnose issues:

// PouchDB sync debugging
PouchDB.debug.enable('pouchdb:http');

// Custom sync logging
sync.on('change', info => {
  console.log('Sync change:', JSON.stringify(info, null, 2));
});
sync.on('error', err => {
  console.error('Sync error:', {
    name: err.name,
    message: err.message,
    status: err.status,
    docId: err.docId
  });
});

See also