Application Security
Application security encompasses the controls, configurations, and architectural patterns that protect software from unauthorised access, data exposure, and malicious exploitation. This discipline addresses the security of applications as deployed and operated, distinct from the secure development practices that occur during software creation. For mission-driven organisations running portfolios of SaaS platforms, custom-built tools, and legacy systems, application security determines whether sensitive beneficiary data, financial records, and operational information remain protected throughout the software lifecycle.
The security of an application depends on layers of defence working together. Authentication verifies identity. Authorisation enforces permissions. Data protection secures information at rest and in transit. Session management maintains secure state across interactions. Input validation prevents injection attacks. Each layer addresses distinct threat vectors, and weakness in any single layer creates exploitable vulnerabilities regardless of the strength of other controls.
- Authentication
- The process of verifying that a user, service, or system is who or what it claims to be. Authentication answers “who are you?” through credentials, tokens, or cryptographic proof.
- Authorisation
- The process of determining whether an authenticated entity has permission to perform a requested action or access a requested resource. Authorisation answers “what can you do?”
- Session
- A stateful interaction between a user and an application, maintained across multiple requests through tokens or identifiers. Sessions persist authentication state so users need not re-authenticate for every action.
- Input validation
- Verification that data received by an application conforms to expected format, type, length, and content before processing. Validation prevents malformed or malicious input from causing unintended behaviour.
- Security control
- A safeguard or countermeasure designed to protect confidentiality, integrity, or availability of information and systems. Controls may be technical, administrative, or physical.
Security architecture model
Application security operates across multiple layers, each providing defence against specific attack categories. Understanding this layered model clarifies where controls apply and how they interact.
+------------------------------------------------------------------+
|                  APPLICATION SECURITY LAYERS                     |
+------------------------------------------------------------------+
|                                                                  |
|  +------------------------------------------------------------+  |
|  |                      PERIMETER LAYER                       |  |
|  |  WAF | Rate limiting | DDoS protection | TLS termination   |  |
|  +-----------------------------+------------------------------+  |
|                                |                                 |
|  +-----------------------------v------------------------------+  |
|  |                   AUTHENTICATION LAYER                     |  |
|  |  Identity verification | MFA | Token issuance | SSO        |  |
|  +-----------------------------+------------------------------+  |
|                                |                                 |
|  +-----------------------------v------------------------------+  |
|  |                    AUTHORISATION LAYER                     |  |
|  |  Permission checks | Role enforcement | Resource policies  |  |
|  +-----------------------------+------------------------------+  |
|                                |                                 |
|  +-----------------------------v------------------------------+  |
|  |                       SESSION LAYER                        |  |
|  |  Session management | Token validation | State handling    |  |
|  +-----------------------------+------------------------------+  |
|                                |                                 |
|  +-----------------------------v------------------------------+  |
|  |                        INPUT LAYER                         |  |
|  |  Validation | Sanitisation | Encoding | Parameterisation   |  |
|  +-----------------------------+------------------------------+  |
|                                |                                 |
|  +-----------------------------v------------------------------+  |
|  |                        DATA LAYER                          |  |
|  |  Encryption at rest | Field-level protection | Masking     |  |
|  +------------------------------------------------------------+  |
|                                                                  |
+------------------------------------------------------------------+

Figure 1: Application security layers from perimeter to data storage
The perimeter layer intercepts requests before they reach application logic. Web application firewalls inspect HTTP traffic for attack patterns, rate limiters prevent abuse and denial-of-service, and TLS termination ensures encrypted transport. These controls operate independently of application code and protect against network-level and protocol-level attacks.
The authentication layer verifies identity claims. When a user presents credentials or a token, this layer determines whether the claimed identity is valid. Modern applications delegate authentication to identity providers rather than implementing credential storage directly, reducing the attack surface and enabling centralised security controls like multi-factor authentication.
The authorisation layer enforces access policy after identity is established. A user authenticated as “jane.smith@example.org” may have permission to view programme data but not financial records. Authorisation checks occur at every access attempt, comparing the authenticated identity against resource-specific policies.
The session layer maintains authenticated state across requests. HTTP is stateless, so applications issue session tokens or cookies that subsequent requests present to avoid re-authentication. Session security determines how long authentication persists, under what conditions sessions terminate, and how stolen tokens are detected and invalidated.
The input layer validates and sanitises all data entering the application. User-supplied content, API payloads, file uploads, and query parameters all pass through validation before processing. This layer prevents injection attacks, where malicious input manipulates application behaviour.
The data layer protects information at rest. Encryption transforms stored data into unreadable ciphertext. Field-level protection applies different safeguards to sensitive columns. Masking displays partial data to users who need visibility without full access.
Authentication patterns
Authentication mechanisms vary in strength, complexity, and user experience. The appropriate pattern depends on the sensitivity of protected resources, the technical capabilities of users, and the operational context including field deployment constraints.
Password-based authentication remains common despite well-documented weaknesses. Users submit username and password combinations, which the application or identity provider verifies against stored hashes. Password security depends on hash algorithm strength (bcrypt, Argon2, or scrypt with appropriate cost factors), password complexity requirements, and protections against credential stuffing attacks. Organisations should enforce a minimum length of 12 characters without arbitrary complexity rules that encourage predictable patterns.
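The hash-and-verify pattern described above can be sketched with Python's standard library. This sketch uses scrypt because it ships with `hashlib`; Argon2 is generally preferred but requires a third-party package. Cost parameters here (n=2**14, r=8, p=1) are illustrative and should be tuned to current guidance.

```python
import hashlib
import hmac
import secrets

# Illustrative scrypt parameters; tune cost factors for your hardware.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

def hash_password(password: str) -> tuple:
    """Hash a password with scrypt and a fresh random 16-byte salt."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, stored)
```

The salt ensures identical passwords produce different hashes, and `hmac.compare_digest` avoids timing side channels during verification.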
Multi-factor authentication combines something the user knows (password) with something they have (device, token) or something they are (biometric). The second factor substantially increases security because attackers must compromise multiple independent elements. Time-based one-time passwords (TOTP) generated by authenticator apps provide a practical second factor requiring no additional hardware. Hardware security keys using FIDO2/WebAuthn provide stronger protection against phishing because cryptographic verification binds the authentication to the legitimate site.
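The TOTP computation is small enough to sketch with the standard library alone, following RFC 6238 (HMAC-SHA1 with 30-second time steps and dynamic truncation). This is an illustration of the algorithm, not a production verifier, which would also need constant-time comparison and replay protection.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)
```

The authenticator app and the server run the same computation from a shared secret; codes agree as long as their clocks are within one time step.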
+------------------------------------------------------------------+
|                   AUTHENTICATION FLOW WITH MFA                   |
+------------------------------------------------------------------+

 User                     Application              Identity Provider
   |                          |                          |
   |----(1) Access request--->|                          |
   |                          |                          |
   |                          |---(2) Redirect---------->|
   |                          |                          |
   |<---(3) Login page-----------------------------------|
   |                          |                          |
   |----(4) Username + password------------------------->|
   |                          |                          |
   |                          |     [Verify credentials] |
   |                          |                          |
   |<---(5) MFA challenge--------------------------------|
   |                          |                          |
   |----(6) TOTP code----------------------------------->|
   |                          |                          |
   |                          |            [Verify TOTP] |
   |                          |                          |
   |                          |<--(7) ID token + claims--|
   |                          |                          |
   |<---(8) Session cookie----|                          |
   |                          |                          |

Figure 2: Authentication flow with identity provider and multi-factor authentication
Token-based authentication uses cryptographically signed tokens rather than session identifiers stored server-side. JSON Web Tokens (JWT) contain encoded claims about the authenticated user, signed by the identity provider. Applications verify the signature and extract claims without database lookups. This pattern scales well and supports stateless architectures, but requires careful attention to token validation, expiration, and revocation.
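The verification steps for an HS256 JWT can be sketched with the standard library: check the signature over the header and payload, then check the `exp` claim. The signing helper exists only so the sketch is self-contained; production systems should use a vetted JWT library, which also validates the `alg` header and other claims.

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def _unb64url(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def sign_jwt_hs256(claims: dict, key: bytes) -> str:
    """Issue an HS256 JWT (illustrative only)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = header + "." + payload
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64url(sig)

def verify_jwt_hs256(token: str, key: bytes) -> dict:
    """Verify signature and expiry, then return the decoded claims."""
    header, payload, sig = token.split(".")
    expected = hmac.new(key, (header + "." + payload).encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _unb64url(sig)):
        raise ValueError("invalid signature")
    claims = json.loads(_unb64url(payload))
    if "exp" in claims and claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

Note that verification needs no database lookup, which is the scaling benefit the text describes; the corresponding cost is that revoking an individual token before expiry requires extra machinery such as a denylist.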
Certificate-based authentication uses X.509 certificates to prove identity. The client presents a certificate during TLS handshake, and the server verifies the certificate chain and extracts identity from the certificate subject. This approach provides strong non-repudiation and works well for service-to-service authentication, but requires public key infrastructure for certificate issuance and lifecycle management.
Delegated authentication through protocols like OAuth 2.0 and OpenID Connect allows applications to accept identity assertions from trusted providers. Users authenticate to Google, Microsoft, or an organisational identity provider, which issues tokens the application accepts. This pattern eliminates password management from individual applications and enables centralised security controls, but creates dependency on external providers and requires careful configuration of trust relationships.
For organisations operating in field contexts with intermittent connectivity, authentication patterns must accommodate offline scenarios. Applications may cache authentication state locally, allowing continued access when network connectivity is unavailable. This creates a security tradeoff: longer cache validity improves usability but extends the window during which compromised devices can access data. A 4-hour offline authentication cache balances field operational needs against security exposure for most use cases, with shorter durations (1 hour) appropriate for highly sensitive applications.
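The offline cache tradeoff reduces to a simple validity check against the time of last online authentication. A minimal sketch, assuming the cached timestamp is stored securely on the device; the 4-hour and 1-hour values come from the text above.

```python
from datetime import datetime, timedelta, timezone

OFFLINE_CACHE_TTL = timedelta(hours=4)      # default for field use
SENSITIVE_CACHE_TTL = timedelta(hours=1)    # highly sensitive applications

def cached_auth_valid(authenticated_at, now=None, ttl=OFFLINE_CACHE_TTL):
    """Accept a locally cached authentication only within the offline TTL."""
    now = now or datetime.now(timezone.utc)
    return now - authenticated_at <= ttl
```

Once the TTL lapses, the application should require online re-authentication before granting further access to cached data.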
Authorisation models
Authorisation determines what authenticated users can access and modify. The authorisation model defines how permissions are structured, assigned, and evaluated.
Role-based access control (RBAC) assigns permissions to roles, then assigns roles to users. A “Programme Manager” role grants access to programme data, budgets, and reports. Users receive role assignments based on their job function. RBAC simplifies permission management because administrators modify role definitions rather than individual user permissions, and role assignments map naturally to organisational structure.
+------------------------------------------------------------------+
|                 ROLE-BASED ACCESS CONTROL MODEL                  |
+------------------------------------------------------------------+
|                                                                  |
|  +-------------+      +-------------+      +------------------+  |
|  |   USERS     |      |   ROLES     |      |   PERMISSIONS    |  |
|  +-------------+      +-------------+      +------------------+  |
|                                                                  |
|  +-------------+      +-------------+      +------------------+  |
|  | jane.smith  +----->| Programme   +----->| Read programmes  |  |
|  |             |      | Manager     |      | Edit programmes  |  |
|  +-------------+      +-------------+      | View budgets     |  |
|                                            | Submit reports   |  |
|                                            +------------------+  |
|                                                                  |
|  +-------------+      +-------------+      +------------------+  |
|  | john.doe    +----->| Field       +----->| Read programmes  |  |
|  |             |      | Officer     |      | Create cases     |  |
|  +-------------+      +-------------+      | Edit own cases   |  |
|                                            +------------------+  |
|                                                                  |
|  +-------------+      +-------------+      +------------------+  |
|  | admin       +----->| System      +----->| All permissions  |  |
|  |             |      | Admin       |      |                  |  |
|  +-------------+      +-------------+      +------------------+  |
|                                                                  |
+------------------------------------------------------------------+

Figure 3: Role-based access control structure showing user-role-permission relationships
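The user-role-permission structure above maps directly onto two lookup tables and a default-deny check. This sketch uses illustrative role and permission names drawn from the figure; a real system would load these from a policy store.

```python
# Illustrative role and permission names taken from Figure 3.
ROLE_PERMISSIONS = {
    "Programme Manager": {"read_programmes", "edit_programmes",
                          "view_budgets", "submit_reports"},
    "Field Officer": {"read_programmes", "create_cases", "edit_own_cases"},
    "System Admin": {"*"},                      # wildcard: all permissions
}
USER_ROLES = {
    "jane.smith": {"Programme Manager"},
    "john.doe": {"Field Officer"},
    "admin": {"System Admin"},
}

def has_permission(user: str, permission: str) -> bool:
    """Deny by default; grant only via a permission in an assigned role."""
    for role in USER_ROLES.get(user, set()):
        perms = ROLE_PERMISSIONS.get(role, set())
        if "*" in perms or permission in perms:
            return True
    return False
```

Administration then means editing `ROLE_PERMISSIONS` and `USER_ROLES`, never scattering permission logic through application code.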
Attribute-based access control (ABAC) evaluates permissions based on attributes of the user, resource, action, and context. Rather than static role assignments, ABAC policies express rules like “users in the Kenya office can access Kenya programme data during business hours.” ABAC provides fine-grained control and handles complex authorisation requirements, but policy complexity can make security auditing difficult.
Relationship-based access control evaluates permissions based on the relationship between the user and the resource. A case worker can access cases they are assigned to; a supervisor can access cases of their direct reports. This model maps naturally to hierarchical and ownership-based access patterns common in case management and programme systems.
Effective authorisation implementation requires checking permissions at every access point, not just at the user interface. If a web form hides a button based on role, the corresponding API endpoint must independently verify authorisation. Attackers bypass client-side controls trivially; server-side enforcement provides actual security.
Permission granularity affects both security and usability. Overly coarse permissions (all-or-nothing access to a module) force administrators to grant excessive access. Overly fine permissions (separate controls for every field) create administrative burden and user confusion. A practical approach defines permissions at the functional level: “can create cases,” “can view case details,” “can close cases,” “can access all cases in programme.”
Authorisation decisions must be logged for audit purposes. Every access check, whether granted or denied, should produce an audit record including the user, requested resource, action, decision, and timestamp. These logs support security monitoring, incident investigation, and compliance demonstration.
Session management
Sessions maintain authentication state between HTTP requests. Session security controls determine vulnerability to session hijacking, fixation, and replay attacks.
Session identifiers must be cryptographically random, of sufficient length (128 bits minimum), and transmitted only over encrypted connections. Predictable or short session identifiers allow attackers to guess valid sessions. Session identifiers stored in cookies should include the Secure flag (transmit only over HTTPS), HttpOnly flag (inaccessible to JavaScript), and SameSite attribute (prevent cross-site request forgery).
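These requirements can be demonstrated with the standard library: `secrets` supplies cryptographic randomness well beyond the 128-bit minimum, and `http.cookies` sets the three cookie attributes. The cookie name `session` and the `Lax` SameSite policy are illustrative choices.

```python
import secrets
from http.cookies import SimpleCookie

def issue_session_cookie() -> str:
    """Build a hardened Set-Cookie value with a 256-bit random session ID."""
    token = secrets.token_urlsafe(32)           # 32 bytes = 256 bits of randomness
    cookie = SimpleCookie()
    cookie["session"] = token
    cookie["session"]["secure"] = True          # transmit only over HTTPS
    cookie["session"]["httponly"] = True        # inaccessible to JavaScript
    cookie["session"]["samesite"] = "Lax"       # CSRF mitigation
    return cookie.output(header="").strip()
```

Web frameworks usually expose the same attributes through their own session configuration; the point is that all three must be set, whichever layer sets them.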
Session lifecycle encompasses creation, maintenance, and termination. Sessions should be created only after successful authentication, never before. The session identifier issued after authentication must differ from any identifier present during the login process, preventing session fixation attacks where attackers inject a known session identifier before authentication.
Session duration balances security against user experience. Short sessions require frequent re-authentication, interrupting work. Long sessions extend exposure if credentials are compromised or devices lost. A reasonable baseline: 8-hour maximum session duration with 30-minute idle timeout. Sensitive operations (financial transactions, administrative functions) should require re-authentication regardless of session state.
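The 8-hour absolute limit and 30-minute idle timeout combine into one check: a session stays valid only while both constraints hold. A minimal sketch:

```python
from datetime import datetime, timedelta, timezone

MAX_SESSION_AGE = timedelta(hours=8)     # absolute ceiling from creation
IDLE_TIMEOUT = timedelta(minutes=30)     # sliding window from last activity

def session_active(created, last_seen, now=None):
    """A session survives only under both the absolute and idle limits."""
    now = now or datetime.now(timezone.utc)
    return (now - created) <= MAX_SESSION_AGE and (now - last_seen) <= IDLE_TIMEOUT
```

Sensitive operations should ignore this result and demand fresh re-authentication regardless.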
Session termination must invalidate the session server-side, not merely delete the client cookie. When a user logs out, the application should mark the session as terminated in its session store. This prevents replay of captured session tokens. Forced termination capabilities allow administrators to invalidate all sessions for a user when account compromise is suspected.
Applications with concurrent session limits prevent credential sharing and provide visibility into account use. If a user authenticates on a new device, the application can either terminate existing sessions or notify the user of concurrent access. For protection systems handling sensitive beneficiary data, single concurrent session enforcement prevents informal credential sharing that bypasses individual accountability.
Data protection within applications
Data protection addresses confidentiality and integrity of information as applications process and store it. Protection mechanisms vary by data sensitivity, with stronger controls applied to highly sensitive categories.
Encryption at rest protects stored data from direct database access or storage media theft. Database-level encryption encrypts entire database files or tablespaces. Column-level encryption encrypts specific fields containing sensitive data. Application-level encryption encrypts data before sending to the database, providing protection even from database administrators.
For a case management system storing protection data, encryption strategy might apply different mechanisms by sensitivity:
| Data Category | Encryption Approach | Key Management |
|---|---|---|
| Case identifiers | Database-level (transparent) | Database-managed |
| Contact details | Column-level | Application-managed |
| Case notes | Application-level | Per-case keys |
| Attachments | File-level with envelope | Document-specific keys |
Transport encryption protects data in transit between clients and servers, and between application components. TLS 1.2 or 1.3 should be mandatory for all connections; older protocols contain known vulnerabilities. Certificate validation must be enforced; applications that accept invalid certificates are vulnerable to interception. Internal service-to-service communication should also use TLS, protecting against threats within the network perimeter.
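In Python, these transport requirements translate into a few lines of `ssl` configuration. `create_default_context` already enforces certificate and hostname validation; the sketch restates those settings so the policy is explicit, and raises the floor to TLS 1.2.

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Client-side TLS policy: TLS 1.2+, certificate and hostname checks on."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1 and SSL
    ctx.check_hostname = True                      # default, restated for clarity
    ctx.verify_mode = ssl.CERT_REQUIRED            # default, restated for clarity
    return ctx
```

The same context should be used for internal service-to-service calls, not only for connections crossing the network perimeter.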
Field-level protection applies different handling to specific data elements. Tokenisation replaces sensitive values with non-sensitive substitutes; the original value is stored separately in a token vault. Masking displays partial values (showing last four digits of a phone number). Redaction removes sensitive content entirely from certain views or exports. These techniques allow applications to handle records containing sensitive fields without exposing that sensitivity to all users and functions.
Data minimisation reduces protection burden by limiting what data applications collect and retain. Applications should request only data necessary for their function, avoid copying sensitive data between systems, and purge data when retention requirements expire. Every data element collected creates protection obligation; minimisation reduces the attack surface.
Input validation and output encoding
Input validation prevents applications from processing malformed or malicious data. All data entering an application from external sources is untrusted and must be validated before use.
Validation strategies define what input is acceptable:
Allowlist validation accepts only known-good input patterns. A phone number field might accept only digits, spaces, and a small set of punctuation characters. An email field validates against the RFC 5322 format. Allowlist validation is preferred because it explicitly defines acceptable input rather than attempting to identify all possible attacks.
Denylist validation rejects known-bad patterns. This approach attempts to filter attack signatures like SQL injection strings or script tags. Denylist validation is weaker because attackers continuously develop new techniques that bypass known-bad patterns.
Validation scope must address type, length, format, and business rules. Type validation confirms data matches expected types (integer, string, date). Length validation prevents buffer overflows and denial-of-service through oversized input. Format validation verifies structural patterns (email format, date format). Business rule validation checks logical constraints (end date after start date, quantity within valid range).
Validation must occur server-side regardless of client-side validation. Client-side validation improves user experience by providing immediate feedback, but attackers bypass client-side controls by manipulating requests directly. Server-side validation provides the authoritative security check.
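A server-side validator combining the strategies above might look like this sketch, which checks type, format (an allowlist regex), and two business rules. The field names, phone pattern, and quantity bounds are illustrative assumptions, not requirements from the text.

```python
import re
from datetime import date

# Allowlist: digits plus a small set of punctuation (illustrative pattern)
PHONE_RE = re.compile(r"^[0-9 ()+.-]{7,20}$")

def validate_case_form(phone, start, end, quantity):
    """Return a list of validation errors; an empty list means the input passed."""
    errors = []
    if not isinstance(phone, str) or not PHONE_RE.fullmatch(phone):
        errors.append("phone: digits and limited punctuation only")
    if end < start:                               # business rule: end after start
        errors.append("dates: end date must not precede start date")
    if not 1 <= quantity <= 10_000:               # business rule: bounded quantity
        errors.append("quantity: out of valid range")
    return errors
```

Client-side checks may mirror these rules for usability, but this server-side function is the authoritative gate.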
Output encoding prevents injection when applications render data in different contexts. Data safe in one context becomes dangerous in another. A beneficiary name containing <script> is harmless in a database but executes as JavaScript if rendered unencoded in HTML.
Context-specific encoding transforms data for safe inclusion in the target context:
| Output Context | Encoding Required | Example |
|---|---|---|
| HTML body | HTML entity encoding | < becomes &lt; |
| HTML attribute | Attribute encoding | " becomes &quot; |
| JavaScript | JavaScript encoding | ' becomes \x27 |
| URL parameter | URL encoding | space becomes %20 |
| SQL | Parameterised queries | Values passed separately from SQL |
| LDAP | LDAP encoding | Special characters escaped |
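The first and fourth rows of the table can be exercised directly with Python's standard library; `html.escape` handles the HTML body context and `urllib.parse.quote` the URL parameter context. Other contexts in the table need their own encoders.

```python
import html
import urllib.parse

def encode_for_html(value: str) -> str:
    """HTML body context: entity-encode <, >, &, and quote characters."""
    return html.escape(value, quote=True)

def encode_for_url(value: str) -> str:
    """URL parameter context: percent-encode everything outside the unreserved set."""
    return urllib.parse.quote(value, safe="")
```

Applied to the beneficiary-name example from the text, `encode_for_html('<script>')` yields `&lt;script&gt;`, which renders as literal text rather than executing.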
Parameterised queries prevent SQL injection by separating SQL structure from data values. Rather than concatenating user input into SQL strings, parameterised queries pass values as typed parameters. The database engine processes the SQL structure and parameter values independently, preventing injected SQL from executing.
Vulnerable pattern (never use):
    query = "SELECT * FROM users WHERE name = '" + userName + "'"

Secure pattern (always use):

    query = "SELECT * FROM users WHERE name = ?"
    parameters = [userName]

Third-party component security
Modern applications incorporate third-party components including libraries, frameworks, and services. Each component inherits the security posture of its maintainers and introduces potential vulnerabilities.
Dependency management tracks third-party components and their versions. Applications should maintain explicit dependency manifests (package.json, requirements.txt, pom.xml) rather than ad-hoc inclusion. Manifests enable inventory of components, identification of vulnerabilities, and controlled updates.
Vulnerability monitoring identifies known security issues in dependencies. Databases like the National Vulnerability Database (NVD) and platform-specific sources (npm audit, GitHub Dependabot) track disclosed vulnerabilities. Regular scanning compares application dependencies against these databases, identifying components requiring updates.
A typical web application might include 200-500 direct and transitive dependencies. Security teams cannot review all code manually. Automated scanning provides scalable vulnerability identification. Configure scanning to run in continuous integration pipelines, blocking deployment when high-severity vulnerabilities are detected.
Update discipline keeps components current without destabilising applications. Security updates should deploy within defined timeframes based on severity:
| Severity | Update Timeline | Example Vulnerabilities |
|---|---|---|
| Critical | Within 48 hours | Remote code execution, authentication bypass |
| High | Within 7 days | Privilege escalation, data exposure |
| Medium | Within 30 days | Cross-site scripting, information disclosure |
| Low | Next release cycle | Minor issues, hardening improvements |
Supply chain security addresses the integrity of components from source to deployment. Attackers increasingly target build systems and package repositories to inject malicious code into legitimate packages. Mitigations include verifying package signatures, using lockfiles to pin exact versions, and reviewing changes before updating dependencies.
Security assessment approaches
Security assessment evaluates the effectiveness of application security controls through testing, review, and analysis. Different assessment types provide different perspectives on security posture.
Configuration review verifies that security settings match policy requirements. This assessment examines authentication configuration, session parameters, encryption settings, access control rules, and logging configuration. Configuration review identifies gaps between security policy and actual implementation without requiring code access or active testing.
Configuration review checklist for web applications:
- TLS configuration (protocol versions, cipher suites, certificate validity)
- Authentication settings (password policy, MFA requirements, lockout thresholds)
- Session configuration (timeout values, cookie attributes, concurrent session handling)
- Access control rules (role definitions, permission assignments, default-deny verification)
- Logging configuration (authentication events, authorisation decisions, security exceptions)
- Error handling (generic messages to users, detailed logging internally)
Vulnerability scanning uses automated tools to identify known vulnerability patterns. Scanners probe applications for common weaknesses: SQL injection points, cross-site scripting opportunities, insecure configurations, and missing security headers. Scanning provides broad coverage efficiently but generates false positives and misses logic-specific vulnerabilities.
Web application scanners to evaluate include OWASP ZAP (open source), Burp Suite (commercial with free edition), and Nikto (open source). Run scans against non-production environments to avoid disrupting operations or triggering security alerts.
Code review examines application source code for security weaknesses. Manual review by security-trained developers identifies vulnerabilities that automated tools miss, particularly authorisation logic errors, business logic flaws, and cryptographic implementation mistakes. Code review requires source access and security expertise, limiting its applicability to custom-developed applications.
Penetration testing simulates real-world attacks to identify exploitable vulnerabilities. Testers attempt to compromise application security using techniques attackers would employ. Penetration testing validates that theoretical vulnerabilities are actually exploitable and demonstrates impact. Testing should occur in isolated environments with appropriate authorisation; even well-intentioned testing against production systems risks disruption.
For organisations with limited security resources, prioritise assessment types based on application risk and capability:
+------------------------------------------------------------------+
|             ASSESSMENT APPROACH BY RISK AND CAPACITY             |
+------------------------------------------------------------------+
|                                                                  |
|                        APPLICATION RISK                          |
|                      Low         Medium       High               |
|                       |            |            |                |
|  +------------------+-----------+------------+-----------------+ |
|  |                  |           |            |                 | |
|  | Limited capacity | Config    | Config +   | Config +        | |
|  |                  | review    | scanning   | scanning +      | |
|  |                  |           |            | vendor review   | |
|  +------------------+-----------+------------+-----------------+ |
|  |                  |           |            |                 | |
|  | Moderate         | Config +  | Config +   | All assessment  | |
|  | capacity         | scanning  | scanning + | types including | |
|  |                  |           | code review| penetration     | |
|  +------------------+-----------+------------+-----------------+ |
|  |                  |           |            |                 | |
|  | Strong capacity  | All       | All + freq | All + freq +    | |
|  |                  | assessment| scanning   | red team        | |
|  +------------------+-----------+------------+-----------------+ |
|                                                                  |
+------------------------------------------------------------------+

Figure 4: Assessment approach selection based on application risk and organisational capacity
Vendor security evaluation
Procured applications, whether SaaS platforms or on-premises commercial software, require security evaluation before deployment. Organisations cannot directly assess vendor code but can evaluate security practices, certifications, and configurations.
Security questionnaires gather information about vendor security controls. Standardised questionnaires like CAIQ (Consensus Assessments Initiative Questionnaire) or SIG (Standardized Information Gathering) provide consistent evaluation criteria. Key areas to assess:
- Data encryption (at rest, in transit, key management)
- Authentication and access control capabilities
- Security development practices
- Vulnerability management and patching
- Incident response procedures
- Data residency and jurisdictional considerations
- Subprocessor use and security requirements
- Business continuity and disaster recovery
- Compliance certifications (ISO 27001, SOC 2)
Compliance certifications provide independent verification of security practices. SOC 2 Type II reports detail controls and testing results over a period (typically 12 months). ISO 27001 certification indicates implementation of an information security management system. These certifications do not guarantee security but indicate structured security programmes and external accountability.
Data processing agreements formalise security obligations contractually. For applications processing personal data, GDPR and similar regulations require documented agreements specifying security measures, breach notification obligations, and data handling restrictions. Review agreements for adequate security commitments before contract signature.
Configuration review evaluates security settings available within the vendor platform. Many security breaches occur not from vendor vulnerabilities but from misconfigured customer instances. Review default configurations against security requirements; disable unnecessary features, enable security controls, and restrict administrative access.
Evaluation depth should scale with risk. A note-taking application used by a few staff warrants basic questionnaire review. A case management system processing protection data warrants detailed evaluation including SOC 2 review, configuration assessment, and legal review of data processing terms.
SaaS security considerations
Software-as-a-service applications present distinct security considerations because organisations do not control infrastructure, cannot inspect code, and share platforms with other tenants.
The shared responsibility model delineates security obligations between vendor and customer. The vendor secures infrastructure, platform, and application code. The customer secures configuration, access management, and data. Misunderstanding this boundary leads to security gaps where neither party addresses a control.
+--------------------------------------------------------------------+
|                     SAAS SHARED RESPONSIBILITY                     |
+--------------------------------------------------------------------+
|                                                                    |
|  VENDOR RESPONSIBILITY                                             |
|  +--------------------------------------------------------------+  |
|  | Physical security | Network | Compute | Storage | Platform   |  |
|  +--------------------------------------------------------------+  |
|  | Application code | Patching | Availability | Base security   |  |
|  +--------------------------------------------------------------+  |
|                                                                    |
|  SHARED                                                            |
|  +--------------------------------------------------------------+  |
|  | Data classification | Encryption config | Security monitoring|  |
|  +--------------------------------------------------------------+  |
|                                                                    |
|  CUSTOMER RESPONSIBILITY                                           |
|  +--------------------------------------------------------------+  |
|  | User access | Configuration | Data handling | Integration    |  |
|  +--------------------------------------------------------------+  |
|  | Training | Acceptable use | Incident response (own data)     |  |
|  +--------------------------------------------------------------+  |
|                                                                    |
+--------------------------------------------------------------------+

Figure 5: SaaS security responsibility model showing vendor, shared, and customer domains
Data residency determines where SaaS providers store and process data. For organisations operating under GDPR, storing personal data outside the EEA requires additional safeguards. For organisations handling protection data, US-headquartered providers create CLOUD Act exposure regardless of data storage location. Evaluate provider data residency options and select regions appropriate for data sensitivity and regulatory requirements.
API security protects programmatic access to SaaS platforms. APIs often provide broader access than user interfaces, and API credentials persist unlike interactive sessions. Implement API access controls including: dedicated service accounts for integrations (not personal accounts), minimal necessary permissions, credential rotation schedules (90 days maximum), and monitoring for anomalous API activity.
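The 90-day rotation ceiling is easy to enforce with a periodic check over an inventory of credential issue dates. A minimal sketch, assuming the inventory maps service-account names to issuance timestamps:

```python
from datetime import datetime, timedelta, timezone

ROTATION_MAX_AGE = timedelta(days=90)    # ceiling from the text above

def credentials_due_for_rotation(issued, now=None):
    """Return service-account credentials older than the rotation ceiling."""
    now = now or datetime.now(timezone.utc)
    return sorted(name for name, ts in issued.items()
                  if now - ts > ROTATION_MAX_AGE)
```

Run this from a scheduled job and alert on any non-empty result, alongside monitoring for anomalous API activity.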
Data portability ensures the ability to extract data if the organisation changes providers or the vendor discontinues service. Evaluate export capabilities during procurement: what formats are available, what data can be exported, how frequently exports can run, and what happens to data after contract termination. Vendors that limit data export create lock-in and operational risk.
Monitoring and logging requirements
Security monitoring detects attacks, policy violations, and anomalous behaviour. Effective monitoring requires appropriate log generation, collection, retention, and analysis.
Authentication logging records all authentication attempts with sufficient detail for security analysis and incident investigation. Required elements: timestamp, user identifier, source IP address, authentication method, success/failure result, and failure reason if applicable. Authentication logs enable detection of brute force attacks, credential stuffing, and account compromise.
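The required elements translate directly into a structured log record. A sketch of one possible JSON log format containing each field listed above (the field names and example values are illustrative, not a standard):

```python
import json
from datetime import datetime, timezone

def auth_log_entry(user_id, source_ip, method, success,
                   failure_reason=None, timestamp=None):
    """Build a structured authentication log record with the required fields."""
    entry = {
        "timestamp": (timestamp or datetime.now(timezone.utc)).isoformat(),
        "user_id": user_id,
        "source_ip": source_ip,
        "auth_method": method,
        "result": "success" if success else "failure",
    }
    if not success:
        entry["failure_reason"] = failure_reason or "unspecified"
    return json.dumps(entry)

record = auth_log_entry("amina.k", "203.0.113.10", "password+totp",
                        success=False, failure_reason="invalid_totp",
                        timestamp=datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc))
```

Structured records like this can be parsed directly by a SIEM, making the brute-force and credential-stuffing detections described above straightforward queries.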
Authorisation logging records access control decisions, particularly denials. When users attempt actions beyond their permissions, logs should capture the attempt for investigation. Patterns of authorisation failures may indicate reconnaissance, privilege escalation attempts, or misconfigured permissions.
Administrative action logging creates audit trails for privileged operations. User creation, permission changes, configuration modifications, and data exports should all generate logs. Administrative logs support accountability, change tracking, and forensic investigation.
Security event logging captures application-specific security events: input validation failures, session anomalies, rate limit triggers, and security exceptions. These events indicate potential attacks or application issues requiring investigation.
Log retention must balance storage costs against investigation and compliance needs. Minimum retention periods:
| Log Category | Minimum Retention | Rationale |
|---|---|---|
| Authentication | 12 months | Account compromise investigation |
| Authorisation | 12 months | Access pattern analysis |
| Administrative | 24 months | Audit and compliance |
| Security events | 6 months | Active monitoring and response |
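The retention table can drive an automated purge check. A minimal sketch, approximating the table's month-based minimums in days, that decides whether a record has aged past its category's floor:

```python
from datetime import datetime, timedelta, timezone

# Minimum retention from the table above, expressed in days
RETENTION = {
    "authentication": timedelta(days=365),   # 12 months
    "authorisation": timedelta(days=365),    # 12 months
    "administrative": timedelta(days=730),   # 24 months
    "security_event": timedelta(days=180),   # 6 months
}

def may_purge(category, record_time, now):
    """True only once a record is older than its category's minimum retention."""
    return now - record_time > RETENTION[category]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
```

Note these are minimums: compliance obligations or open investigations may require holding records longer.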
Log protection prevents tampering that could hide attack evidence. Write logs to systems where application compromises cannot modify or delete records. Centralised log management platforms (SIEM) provide tamper-evident storage and separate logs from source applications.
Alerting transforms logs into actionable notifications. Configure alerts for high-confidence security indicators: administrative authentication from unexpected locations, bulk data export, multiple failed authentication attempts, or authorisation failures for sensitive resources. Alert volume must remain manageable; excessive alerts cause fatigue and ignored warnings.
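One of the indicators above, multiple failed authentication attempts, can be detected with a sliding-window count over the log stream. A sketch assuming a threshold of five failures in fifteen minutes (both values are assumptions to tune per environment):

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

FAILURE_THRESHOLD = 5           # assumed: five failures trigger an alert
WINDOW = timedelta(minutes=15)  # assumed sliding window

def users_to_alert(failure_events):
    """Return users with FAILURE_THRESHOLD or more failures inside any WINDOW.

    `failure_events` is an iterable of (timestamp, user_id) tuples taken
    from authentication logs.
    """
    by_user = defaultdict(list)
    for ts, user in failure_events:
        by_user[user].append(ts)
    alerted = set()
    for user, times in by_user.items():
        times.sort()
        for start in times:
            in_window = sum(1 for t in times if start <= t < start + WINDOW)
            if in_window >= FAILURE_THRESHOLD:
                alerted.add(user)
                break
    return alerted
```

Thresholds too low produce the alert fatigue the text warns against; too high and slow credential-stuffing attacks slip through.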
Implementation considerations
Application security implementation varies by organisational context, resource availability, and risk profile. The following guidance addresses different operational situations.
For organisations with limited IT capacity
Focus security efforts on highest-impact controls for the applications processing sensitive data. A pragmatic baseline applicable without dedicated security staff:
Authentication hardening provides substantial security improvement with manageable effort. Enable MFA for all users on systems processing sensitive data. Use identity provider MFA rather than application-specific implementations to centralise security controls. Configure password policies requiring 12+ characters without complexity rules that encourage predictable patterns.
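The length-over-complexity policy above can be sketched in a few lines: a minimum-length check plus a denylist of known-common passwords, with no composition rules (the denylist entries here are illustrative; real deployments use large breach-derived lists):

```python
MIN_LENGTH = 12
COMMON_PASSWORDS = {"password1234", "letmein12345"}  # illustrative denylist

def password_acceptable(password):
    """Length-based policy: 12+ characters and not a known-common password.

    Deliberately no composition rules (uppercase/digit/symbol), since those
    push users toward predictable patterns like 'Password1!'.
    """
    return len(password) >= MIN_LENGTH and password.lower() not in COMMON_PASSWORDS
```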
Session timeout configuration reduces exposure from unattended devices. Set 30-minute idle timeouts for applications with sensitive data. Ensure logout terminates the session server-side rather than merely deleting the client-side cookie, which leaves the session token valid if captured.
Access review, conducted quarterly, identifies inappropriate access accumulation. Review user lists and role assignments for each sensitive application. Remove access for departed staff, adjust permissions for role changes, and question unexpected administrative access.
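The quarterly review reduces to two comparisons: application users against the current staff roster, and admin-role holders against the approved admin list. A minimal sketch (function and field names are illustrative):

```python
def access_review(app_users, current_staff, expected_admins):
    """Flag departed-staff accounts and unexpected administrative access.

    `app_users` maps username -> role; `current_staff` is the HR roster;
    `expected_admins` are users approved for the admin role.
    """
    departed = [u for u in app_users if u not in current_staff]
    unexpected_admins = [u for u, role in app_users.items()
                         if role == "admin" and u not in expected_admins]
    return departed, unexpected_admins
```

Even a spreadsheet-driven version of this comparison catches the most common access-review findings: accounts for departed staff and quietly accumulated admin rights.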
Vendor security review before procurement prevents security debt. For new SaaS applications handling sensitive data, request SOC 2 reports or equivalent evidence of security practices. Review data processing terms for adequate protection commitments.
For organisations with established IT functions
Organisations with dedicated IT teams can implement more comprehensive security controls:
Centralised identity management through a single identity provider enables consistent authentication policy across applications. Configure SSO integration for all compatible applications, enforcing MFA and session policies centrally. Applications not supporting SSO require individual attention to authentication configuration.
Security configuration baselines document required settings for each application category. Develop standard configurations for web applications (TLS settings, session parameters, logging requirements) and verify compliance through periodic review. Baselines accelerate secure deployment of new applications.
Vulnerability management processes track and remediate security issues systematically. Implement dependency scanning in build pipelines for custom applications. Maintain vulnerability tracking for commercial applications, correlating vendor notifications with deployed versions. Define response timeframes by severity and track remediation completion.
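Defining response timeframes by severity and tracking completion can be sketched as a deadline check over a findings list. The SLA values below are assumptions for illustration, not a standard:

```python
from datetime import date, timedelta

# Assumed remediation timeframes by severity (days from report to fix)
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def overdue_vulnerabilities(findings, today):
    """Return IDs of findings whose remediation deadline has passed.

    Each finding is a dict with 'id', 'severity', and 'reported' (a date).
    """
    return [f["id"] for f in findings
            if today > f["reported"] + timedelta(days=SLA_DAYS[f["severity"]])]
```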
Security monitoring through centralised log collection enables threat detection and investigation. Forward authentication and security logs from applications to a SIEM or log management platform. Develop detection rules for common attack patterns. Establish incident response procedures for security alerts.
For field-deployed applications
Applications operating in field contexts with intermittent connectivity, shared devices, and physical security risks require adapted security approaches:
Offline authentication caching allows continued operation during connectivity outages but extends exposure from compromised devices. Configure cache duration based on application sensitivity: 4 hours for general applications, 1 hour for sensitive data, immediate re-authentication for protection case management.
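The tiered cache durations above map naturally to a lookup table consulted before granting offline access. A minimal sketch using the three tiers named in the text:

```python
from datetime import datetime, timedelta, timezone

# Offline credential cache durations from the guidance above
CACHE_TTL = {
    "general": timedelta(hours=4),
    "sensitive": timedelta(hours=1),
    "protection": timedelta(0),   # immediate re-authentication required
}

def requires_reauth(sensitivity, last_online_auth, now):
    """True when the cached offline credential has expired for this tier."""
    return now - last_online_auth >= CACHE_TTL[sensitivity]
```

The zero-duration entry for protection case management means the check always fails offline, forcing online re-authentication as the text requires.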
Shared device configurations must prevent data leakage between users. Enable per-session data clearing. Disable password saving in browsers. Configure aggressive session timeouts (15-minute idle). Consider application-level PIN or pattern authentication in addition to platform authentication.
Remote wipe capability protects data on lost or stolen devices. Ensure mobile device management covers all devices running sensitive applications. Document and test wipe procedures. Communicate procedures to field staff so they report device loss promptly.
Legacy application considerations
Older applications may lack modern security controls, requiring compensating measures:
Network segmentation isolates legacy applications from other systems, limiting attack propagation. Place legacy applications in dedicated network segments with firewall rules restricting access to necessary connections only.
Web application firewalls provide input validation for applications with inadequate built-in protection. WAF rules can block common attack patterns before they reach vulnerable applications.
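The pattern-blocking idea can be illustrated with a handful of signatures checked against raw request values. This is a toy sketch only; production WAF rulesets contain thousands of tuned rules, and these three patterns are illustrative examples of common probe classes:

```python
import re

# Illustrative signatures for common attack probes (not a real ruleset)
BLOCK_PATTERNS = [
    re.compile(r"(?i)<script\b"),           # reflected XSS probe
    re.compile(r"(?i)\bunion\s+select\b"),  # SQL injection probe
    re.compile(r"\.\./"),                   # path traversal
]

def should_block(request_value):
    """Return True if any blocked pattern appears in the raw request value."""
    return any(p.search(request_value) for p in BLOCK_PATTERNS)
```

Placed in front of a legacy application, even crude filtering like this stops automated scanners from reaching known-vulnerable input handlers, though it is no substitute for fixing or replacing the application.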
Enhanced monitoring compensates for weak application logging. Network-level monitoring captures traffic patterns even when applications log insufficiently. Anomaly detection identifies unusual access patterns.
Replacement planning acknowledges that compensating controls have limits. Document legacy application risks, track security debt, and prioritise replacement in application portfolio planning.