Security Metrics
Security metrics are quantitative measurements that track control effectiveness, risk posture, and compliance status across an organisation’s security programme. This reference provides indicator definitions, calculation formulas, data sources, and benchmark thresholds for metrics applicable to mission-driven organisations operating with varying security maturity levels.
Metric Category Taxonomy
Security metrics divide into three primary categories based on what they measure and who consumes them. Operational metrics track day-to-day security activities and support technical decision-making. Compliance metrics demonstrate adherence to regulatory requirements and contractual obligations. Risk metrics quantify exposure and inform strategic resource allocation.
+------------------------------------------------------------------+
|                    SECURITY METRICS TAXONOMY                     |
+------------------------------------------------------------------+
|                                                                  |
|   +------------------------+                                     |
|   | OPERATIONAL            |  Technical teams                    |
|   | METRICS                |  Daily/weekly review                |
|   |                        |                                     |
|   | - Detection            |                                     |
|   | - Response             |                                     |
|   | - Vulnerability        |                                     |
|   | - Access control       |                                     |
|   | - Availability         |                                     |
|   +------------------------+                                     |
|              |                                                   |
|              v                                                   |
|   +------------------------+                                     |
|   | COMPLIANCE             |  Governance, auditors               |
|   | METRICS                |  Monthly/quarterly review           |
|   |                        |                                     |
|   | - Policy adherence     |                                     |
|   | - Training completion  |                                     |
|   | - Audit findings       |                                     |
|   | - Certification        |                                     |
|   +------------------------+                                     |
|              |                                                   |
|              v                                                   |
|   +------------------------+                                     |
|   | RISK                   |  Leadership, board                  |
|   | METRICS                |  Quarterly/annual review            |
|   |                        |                                     |
|   | - Exposure             |                                     |
|   | - Control coverage     |                                     |
|   | - Residual risk        |                                     |
|   | - Trend indicators     |                                     |
|   +------------------------+                                     |
|                                                                  |
+------------------------------------------------------------------+

Figure 1: Security metric categories by audience and review frequency
The three categories interconnect through aggregation. Operational metrics feed into compliance metrics when aggregated over reporting periods. Compliance metrics contribute to risk metrics when mapped against control frameworks. A single underlying data point, such as an unpatched critical vulnerability, appears differently across categories: as a remediation target in operational metrics, as a policy deviation in compliance metrics, and as quantified exposure in risk metrics.
Operational Metrics
Operational metrics measure the execution of security activities. These metrics support capacity planning, process improvement, and technical decision-making. Data sources include security tools (SIEM, vulnerability scanners, EDR), identity platforms, and ticketing systems.
Detection Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Mean Time to Detect (MTTD) | Average elapsed time from threat occurrence to detection | Sum of (detection_timestamp - occurrence_timestamp) / incident_count | SIEM, EDR | < 24 hours |
| Detection Rate | Percentage of threats detected by internal controls versus external notification | (internally_detected / total_incidents) × 100 | Incident records | > 80% |
| Alert Volume | Total security alerts generated per period | Count of alerts per day/week | SIEM | Trend baseline |
| Alert-to-Incident Ratio | Proportion of alerts escalated to confirmed incidents | (confirmed_incidents / total_alerts) × 100 | SIEM, ticketing | 5-15% |
| False Positive Rate | Percentage of alerts determined to be non-malicious | (false_positives / total_alerts) × 100 | SIEM, analyst records | < 70% |
| Coverage Ratio | Percentage of assets with active monitoring | (monitored_assets / total_assets) × 100 | Asset inventory, SIEM | > 95% |
MTTD calculation example: An organisation experiences 12 security incidents in a quarter. Detection timestamps minus occurrence timestamps yield: 2h, 18h, 4h, 72h, 1h, 8h, 36h, 3h, 12h, 6h, 24h, 48h. MTTD = (2+18+4+72+1+8+36+3+12+6+24+48) / 12 = 19.5 hours.
The occurrence timestamp requires estimation for incidents without precise intrusion time. Use the earliest evidence of compromise from forensic analysis. Where forensic data is unavailable, use the midpoint between the last known-good state and detection.
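The MTTD calculation and the midpoint fallback above can be sketched as follows (a minimal illustration with hypothetical function names; a production pipeline would pull timestamps from SIEM or ticketing exports):

```python
from datetime import datetime, timedelta

def estimate_occurrence(last_known_good: datetime, detected: datetime) -> datetime:
    # Fallback when forensic evidence is unavailable: the midpoint
    # between the last known-good state and detection.
    return last_known_good + (detected - last_known_good) / 2

def mttd_hours(incidents: list[tuple[datetime, datetime]]) -> float:
    # Each tuple is (occurrence_timestamp, detection_timestamp).
    total_seconds = sum((det - occ).total_seconds() for occ, det in incidents)
    return total_seconds / len(incidents) / 3600

# Reproduce the worked example: twelve incidents, MTTD = 19.5 hours.
start = datetime(2024, 1, 1)
durations = [2, 18, 4, 72, 1, 8, 36, 3, 12, 6, 24, 48]
incidents = [(start, start + timedelta(hours=h)) for h in durations]
```

Here `mttd_hours(incidents)` returns 19.5, matching the worked example.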
Response Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Mean Time to Respond (MTTR) | Average elapsed time from detection to containment | Sum of (containment_timestamp - detection_timestamp) / incident_count | Incident records | < 4 hours (critical) |
| Mean Time to Recover | Average elapsed time from containment to normal operations | Sum of (recovery_timestamp - containment_timestamp) / incident_count | Incident records | < 24 hours |
| Incident Closure Rate | Percentage of incidents closed within SLA | (closed_within_SLA / total_incidents) × 100 | Ticketing system | > 90% |
| Escalation Rate | Percentage of incidents escalated beyond initial response team | (escalated_incidents / total_incidents) × 100 | Incident records | < 20% |
| Repeat Incident Rate | Percentage of incidents with same root cause as previous incident | (repeat_incidents / total_incidents) × 100 | Incident records | < 10% |
Response metrics require clear definitions of containment and recovery. Containment means the threat can no longer spread or cause additional damage, even if affected systems remain offline. Recovery means affected services operate normally with no ongoing restrictions.
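With those definitions fixed, MTTR and SLA closure rate reduce to simple aggregations over incident records (a sketch with assumed field names; real data would come from the ticketing system):

```python
from datetime import datetime, timedelta

def mttr_hours(incidents: list[dict]) -> float:
    # Mean Time to Respond: detection to containment, in hours.
    total = sum((i["contained"] - i["detected"]).total_seconds() for i in incidents)
    return total / len(incidents) / 3600

def closure_rate(incidents: list[dict], sla_hours: float) -> float:
    # Percentage of incidents contained within the SLA window.
    within = sum(
        1 for i in incidents
        if (i["contained"] - i["detected"]).total_seconds() / 3600 <= sla_hours
    )
    return within / len(incidents) * 100
```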
Vulnerability Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Vulnerability Density | Vulnerabilities per asset | total_vulnerabilities / total_assets | Vulnerability scanner | < 5 per asset |
| Critical Vulnerability Count | Number of CVSS 9.0+ vulnerabilities | Count where CVSS >= 9.0 | Vulnerability scanner | 0 (target) |
| Mean Time to Remediate (MTTR-V) | Average days from discovery to remediation | Sum of (remediation_date - discovery_date) / vuln_count | Vulnerability scanner | < 15 days (critical) |
| Patch Compliance Rate | Percentage of systems with current patches | (patched_systems / total_systems) × 100 | Patch management | > 95% |
| Vulnerability Age | Average days vulnerabilities remain open | Sum of days_open / open_vuln_count | Vulnerability scanner | < 30 days |
| Scan Coverage | Percentage of assets scanned in period | (scanned_assets / total_assets) × 100 | Vulnerability scanner | 100% |
Vulnerability metrics should incorporate Exploit Prediction Scoring System (EPSS) alongside CVSS for prioritisation. EPSS provides probability of exploitation within 30 days, enabling risk-based remediation sequencing. A CVSS 7.5 vulnerability with 0.95 EPSS score warrants faster remediation than a CVSS 9.0 vulnerability with 0.02 EPSS score.
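The prioritisation described above can be expressed as a sort that ranks EPSS ahead of CVSS (a simplified sketch with placeholder identifiers; mature programmes often blend the two scores rather than ordering them lexicographically):

```python
def remediation_priority(vulns: list[dict]) -> list[dict]:
    # Sort by exploitation probability first, severity second, descending.
    return sorted(vulns, key=lambda v: (v["epss"], v["cvss"]), reverse=True)

# The example from the text: a high-EPSS CVSS 7.5 finding outranks
# a low-EPSS CVSS 9.0 finding. "CVE-A" and "CVE-B" are placeholders.
queue = remediation_priority([
    {"id": "CVE-A", "cvss": 9.0, "epss": 0.02},
    {"id": "CVE-B", "cvss": 7.5, "epss": 0.95},
])
```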
MTTR-V remediation targets by severity:
| Severity | CVSS Range | Target MTTR-V | Maximum MTTR-V |
|---|---|---|---|
| Critical | 9.0 - 10.0 | 7 days | 15 days |
| High | 7.0 - 8.9 | 15 days | 30 days |
| Medium | 4.0 - 6.9 | 30 days | 90 days |
| Low | 0.1 - 3.9 | 90 days | 180 days |
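The severity bands and maximum windows in the table translate directly into an overdue check (a sketch; the SLA dictionary mirrors the values above):

```python
from datetime import date

# (target_days, maximum_days) from the MTTR-V table above.
REMEDIATION_SLA = {
    "critical": (7, 15),
    "high": (15, 30),
    "medium": (30, 90),
    "low": (90, 180),
}

def severity_band(cvss: float) -> str:
    # Map a CVSS base score onto the severity bands in the table.
    if cvss >= 9.0:
        return "critical"
    if cvss >= 7.0:
        return "high"
    if cvss >= 4.0:
        return "medium"
    return "low"

def overdue(cvss: float, discovered: date, today: date) -> bool:
    # True once an open vulnerability exceeds its maximum MTTR-V window.
    _, maximum_days = REMEDIATION_SLA[severity_band(cvss)]
    return (today - discovered).days > maximum_days
```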
Access Control Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Privileged Account Ratio | Privileged accounts as percentage of total | (privileged_accounts / total_accounts) × 100 | Identity platform | < 5% |
| Orphaned Account Count | Active accounts without valid owner | Count where owner_status = inactive | Identity platform, HR system | 0 |
| Dormant Account Count | Accounts without login in 90 days | Count where (current_date - last_login) > 90 days | Identity platform | < 2% of accounts |
| MFA Adoption Rate | Accounts with MFA enabled | (mfa_enabled_accounts / total_accounts) × 100 | Identity platform | 100% |
| Failed Authentication Rate | Failed logins as percentage of total | (failed_logins / total_logins) × 100 | Identity platform | < 5% |
| Access Review Completion | Reviews completed by deadline | (completed_reviews / required_reviews) × 100 | Access governance | 100% |
| Access Request Cycle Time | Average days from request to provisioning | Sum of (provision_date - request_date) / request_count | Ticketing system | < 2 days |
The privileged account ratio benchmark of 5% assumes a standard organisational structure. Organisations with large technical teams or complex infrastructure legitimately have higher ratios. Calculate a baseline and track trend rather than comparing against external benchmarks.
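Both the privileged account ratio and the dormant account count are straightforward to derive from an identity-platform export (a sketch with assumed field names):

```python
from datetime import date

def privileged_ratio(accounts: list[dict]) -> float:
    # Privileged accounts as a percentage of all accounts.
    privileged = sum(1 for a in accounts if a["privileged"])
    return privileged / len(accounts) * 100

def dormant_accounts(accounts: list[dict], today: date,
                     threshold_days: int = 90) -> list[dict]:
    # Accounts with no login inside the threshold window (90 days default).
    return [a for a in accounts if (today - a["last_login"]).days > threshold_days]
```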
Availability Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Security Service Uptime | Percentage of time security services operational | (total_minutes - downtime_minutes) / total_minutes × 100 | Monitoring platform | > 99.9% |
| Backup Success Rate | Percentage of backups completed successfully | (successful_backups / attempted_backups) × 100 | Backup system | > 99% |
| Recovery Test Success | Percentage of recovery tests meeting RTO | (successful_tests / total_tests) × 100 | DR testing records | 100% |
Compliance Metrics
Compliance metrics demonstrate adherence to policies, regulations, and contractual requirements. These metrics support audit preparation, regulatory reporting, and governance oversight. Data sources include policy management systems, training platforms, audit findings, and certification bodies.
Policy Adherence Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Policy Exception Count | Active approved policy exceptions | Count of approved exceptions | GRC platform | Trend baseline |
| Exception Age | Average days exceptions remain open | Sum of days_open / exception_count | GRC platform | < 180 days |
| Policy Violation Rate | Detected violations per period | violations / reporting_period | DLP, access logs, audit | Trend baseline |
| Control Implementation Rate | Percentage of required controls implemented | (implemented_controls / required_controls) × 100 | GRC platform | > 95% |
Policy exceptions require scheduled review. Track exception renewal rate to identify controls that consistently require exceptions, indicating potential policy misalignment with operational reality.
Training Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Training Completion Rate | Percentage completing required training | (completed / required) × 100 | LMS | 100% |
| Training Currency | Percentage with training completed within validity period | (current / total) × 100 | LMS | > 95% |
| Phishing Simulation Click Rate | Percentage clicking simulated phishing | (clicked / recipients) × 100 | Phishing platform | < 5% |
| Phishing Report Rate | Percentage reporting simulated phishing | (reported / recipients) × 100 | Phishing platform | > 70% |
| Time to Complete Training | Average days from assignment to completion | Sum of (completion_date - assignment_date) / count | LMS | < 14 days |
Phishing metrics require careful interpretation. Click rates vary by simulation difficulty, department, and timing. Track trend over time with consistent simulation difficulty rather than comparing against external benchmarks. A 3% click rate on a sophisticated simulation indicates better awareness than a 2% click rate on an obvious simulation.
Audit Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Open Audit Findings | Count of unresolved findings | Count where status = open | Audit tracking | Trend baseline |
| Finding Closure Rate | Findings closed within target date | (closed_on_time / total_findings) × 100 | Audit tracking | > 90% |
| Finding Age | Average days findings remain open | Sum of days_open / finding_count | Audit tracking | < 90 days |
| Repeat Findings | Findings on same issue as prior audit | Count where repeat = true | Audit tracking | 0 |
| Critical Finding Count | High-severity findings open | Count where severity = critical | Audit tracking | 0 |
Audit finding severity typically follows a four-level scale: critical (immediate risk requiring executive escalation), high (significant risk requiring remediation within 30 days), medium (moderate risk with 90-day remediation), and low (minor risk with 180-day remediation).
Certification Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Certification Status | Current state of each certification | Enumeration | Certification register | All current |
| Certification Coverage | Percentage of operations under certification | (certified_operations / total_operations) × 100 | Scope documentation | Per requirement |
| Days to Recertification | Days until certification expires | expiry_date - current_date | Certification register | > 90 days |
| Nonconformity Count | Open nonconformities from certification audits | Count where status = open | Certification body reports | 0 major |
Risk Metrics
Risk metrics quantify security exposure for strategic decision-making. These metrics support resource allocation, risk acceptance decisions, and board reporting. Data sources include risk registers, control assessments, and aggregated operational data.
Exposure Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Internet-Facing Asset Count | Assets directly accessible from internet | Count where internet_facing = true | Asset inventory | Minimised |
| External Attack Surface | Exposed services across internet-facing assets | Count of exposed ports/services | Attack surface scanner | Trend baseline |
| Third-Party Connection Count | Active connections to external parties | Count of integrations | Integration inventory | Documented |
| Data Exposure Score | Composite score of sensitive data accessibility | Weighted sum by classification | DLP, access analysis | Trend baseline |
| Shadow IT Count | Unsanctioned applications in use | Count of detected applications | CASB, network analysis | Minimised |
The external attack surface metric requires active scanning to enumerate exposed services. Tools such as Shodan, Censys, or dedicated attack surface management platforms provide this data. Track the ratio of intended versus unintended exposure.
Control Coverage Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Framework Coverage | Percentage of framework controls implemented | (implemented / required) × 100 | GRC platform | Per framework |
| Control Effectiveness | Percentage of controls operating effectively | (effective / implemented) × 100 | Control testing | > 90% |
| Compensating Control Ratio | Percentage of controls using compensating measures | (compensating / total) × 100 | Control inventory | < 10% |
| Control Test Currency | Controls tested within required period | (current_tests / total_controls) × 100 | Testing records | 100% |
Framework coverage by common frameworks:
| Framework | Typical Control Count | Minimum Coverage Target |
|---|---|---|
| CIS Controls v8 (IG1) | 56 safeguards | 100% |
| CIS Controls v8 (IG2) | 130 safeguards | 90% |
| ISO 27001:2022 | 93 controls | 95% (certification) |
| NIST CSF 2.0 | 106 subcategories | 80% |
Implementation Group 1 (IG1) of CIS Controls represents essential cyber hygiene appropriate for organisations with limited security resources. IG2 adds controls for organisations with moderate resources and data sensitivity.
Residual Risk Metrics
| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Risk Score | Composite risk rating | likelihood × impact (quantified) | Risk register | Below tolerance |
| Risk Count by Level | Distribution across risk levels | Count per level | Risk register | Minimise high/critical |
| Risk Treatment Progress | Percentage of treatment plans on track | (on_track / total_plans) × 100 | Risk register | > 90% |
| Accepted Risk Value | Aggregate value of accepted risks | Sum of accepted risk values | Risk register | Below acceptance threshold |
| Risk Trend | Change in aggregate risk over period | current_score - previous_score | Risk register | Decreasing or stable |
Residual risk quantification requires consistent methodology. A 5×5 risk matrix with likelihood (1-5) and impact (1-5) produces scores from 1-25. Define thresholds: critical (20-25), high (15-19), medium (8-14), low (1-7). For quantitative risk analysis, express impact in monetary terms and likelihood as annual probability.
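A minimal implementation of the 5×5 bucketing described above:

```python
def risk_level(likelihood: int, impact: int) -> str:
    # Bucket a 5x5 matrix score (1-25) into the thresholds defined above:
    # critical 20-25, high 15-19, medium 8-14, low 1-7.
    score = likelihood * impact
    if score >= 20:
        return "critical"
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"
```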
Quantitative risk calculation example: A ransomware scenario has 15% annual probability and estimated £400,000 impact (recovery costs, downtime, reputation). Annualised Loss Expectancy (ALE) = 0.15 × £400,000 = £60,000. Controls costing less than £60,000 annually that reduce either probability or impact are cost-effective.
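The ALE arithmetic and the cost-effectiveness test generalise to any scenario (a sketch; the figures below are the worked example's, and the helper names are illustrative):

```python
def annualised_loss_expectancy(annual_probability: float, impact: float) -> float:
    # ALE = annual probability of occurrence x single-event impact.
    return annual_probability * impact

def control_cost_effective(annual_control_cost: float,
                           ale_before: float, ale_after: float) -> bool:
    # A control pays for itself when it costs less than the risk it removes.
    return annual_control_cost < ale_before - ale_after

# Worked example: 15% annual probability, GBP 400,000 impact -> ALE of GBP 60,000.
ale = annualised_loss_expectancy(0.15, 400_000)
```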
Measurement Methodology
Metric validity depends on consistent measurement methodology. Define data sources, collection frequency, calculation procedures, and quality controls for each metric.
Data Collection Approaches
Automated collection retrieves data directly from source systems via APIs, log exports, or database queries. Automated collection ensures consistency and timeliness but requires integration effort and ongoing maintenance. SIEM platforms, vulnerability scanners, and identity providers typically support automated data export.
Manual collection involves human data entry from observations, interviews, or document review. Manual collection suits metrics without automated sources, such as control effectiveness assessments or qualitative risk ratings. Manual collection introduces subjectivity and delays; standardise collection procedures and train data collectors.
Derived metrics calculate from other metrics rather than raw data. Mean Time to Detect derives from incident records that themselves come from SIEM and ticketing integration. Document derivation chains to enable root cause analysis of metric anomalies.
Collection Frequency
| Metric Category | Minimum Collection Frequency | Reporting Frequency |
|---|---|---|
| Detection and response | Continuous | Weekly |
| Vulnerability | Weekly scan | Weekly |
| Access control | Daily | Monthly |
| Compliance | Per event | Monthly |
| Training | Per completion | Monthly |
| Risk | Per assessment | Quarterly |
Higher collection frequency enables trend analysis and anomaly detection but increases storage and processing requirements. Balance granularity against operational capacity. Organisations with limited resources should prioritise weekly vulnerability scans and monthly compliance rollups over continuous collection.
Data Quality Controls
Metric accuracy requires systematic quality controls:
Source validation confirms data sources provide complete and accurate information. Cross-reference counts between systems: does the asset inventory match the vulnerability scanner’s asset count? Discrepancies indicate coverage gaps or inventory inaccuracies.
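Cross-referencing two systems' asset lists is a pair of set differences (a sketch assuming each system exports stable asset identifiers; the sample IDs are hypothetical):

```python
def coverage_gaps(inventory_ids: set[str],
                  scanner_ids: set[str]) -> dict[str, set[str]]:
    # "unscanned": inventory assets the scanner never saw.
    # "unknown": scanner-discovered assets missing from the inventory.
    return {
        "unscanned": inventory_ids - scanner_ids,
        "unknown": scanner_ids - inventory_ids,
    }
```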
Calculation verification ensures formulas produce expected results. Test calculations with known values. A detection rate calculation should yield 75% when 3 of 4 incidents were internally detected.
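Encoding such checks as assertions makes the verification repeatable whenever the pipeline changes (a minimal sketch):

```python
def detection_rate(internally_detected: int, total_incidents: int) -> float:
    # (internally_detected / total_incidents) x 100, per the table above.
    return internally_detected / total_incidents * 100

# Known-value test from the text: 3 of 4 incidents internally detected.
assert detection_rate(3, 4) == 75.0
```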
Trend analysis identifies anomalies requiring investigation. A sudden 50% drop in alert volume suggests monitoring failure rather than improved security. Establish thresholds for anomaly alerts on key metrics.
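A threshold check of this kind is a one-liner against a rolling baseline (a sketch; the 50% default reflects the example above):

```python
def monitoring_failure_suspected(current_volume: float, baseline_volume: float,
                                 drop_threshold: float = 0.5) -> bool:
    # Flag a sudden collapse in alert volume: a large drop more often
    # signals a broken log source than improved security.
    if baseline_volume <= 0:
        return False
    return (baseline_volume - current_volume) / baseline_volume >= drop_threshold
```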
Definition consistency maintains stable metric definitions over time. Changing the definition of “incident” mid-year invalidates trend comparisons. Document definitions and version changes.
Reporting Formats
Security metrics serve different audiences requiring different reporting formats. Technical teams need detailed operational data. Leadership needs summarised risk posture. Auditors need compliance evidence.
Operational Dashboard
Operational dashboards provide real-time or near-real-time visibility for security operations teams. Design for at-a-glance status assessment with drill-down capability.
+-------------------------------------------------------------------------+
|                      SECURITY OPERATIONS DASHBOARD                      |
+-------------------------------------------------------------------------+
|                        |                        |                       |
|  DETECTION             |  RESPONSE              |  VULNERABILITIES      |
|  +------------------+  |  +------------------+  |  +-----------------+  |
|  | MTTD: 4.2h       |  |  | MTTR: 2.1h       |  |  | Critical: 3     |  |
|  | [====>      ]    |  |  | [======>    ]    |  |  | [!!]            |  |
|  | Target: 24h      |  |  | Target: 4h       |  |  | Target: 0       |  |
|  +------------------+  |  +------------------+  |  +-----------------+  |
|                        |                        |                       |
|  Alerts: 847/day       |  Open: 12              |  High: 47             |
|  Incidents: 3          |  P1: 1                 |  Overdue: 8           |
|  Detection: 82%        |  Escalated: 2          |  Coverage: 98%        |
|                        |                        |                       |
+------------------------+------------------------+-----------------------+
|  ALERT TREND (7 days)               |  INCIDENT STATUS                  |
|                                     |                                   |
|  1200 |        *                    |  +-----------------------------+  |
|  1000 |    *       *                |  | New           | ###    (3)  |  |
|   800 |  *   *   *   *              |  | Investigating | ####   (4)  |  |
|   600 |                *            |  | Contained     | ##     (2)  |  |
|   400 |                             |  | Resolved      | ###    (3)  |  |
|       +---+---+---+---+---+---+--   |  +-----------------------------+  |
|         M   T   W   T   F   S   S   |                                   |
|                                     |                                   |
+-------------------------------------+-----------------------------------+
|  TOP ALERTS (24h)                   |  RECENT INCIDENTS                 |
|                                     |                                   |
|  1. Brute force (127)               |  INC-2024-127: Phishing           |
|  2. Malware detected (43)           |  INC-2024-126: Malware            |
|  3. Policy violation (38)           |  INC-2024-125: Unauthorised       |
|  4. Anomalous login (29)            |     access attempt                |
|                                     |                                   |
+-------------------------------------------------------------------------+

Figure 2: Operational dashboard layout showing key metrics with visual status indicators
Dashboard design principles:
Place the most critical metrics in the upper-left quadrant where attention naturally focuses. Use colour coding consistently: green for within target, amber for approaching threshold, red for exceeded threshold. Provide context with targets, trends, and comparisons rather than raw numbers alone. Limit dashboard to 12-15 metrics to avoid cognitive overload.
Executive Summary
Executive summaries distil security posture for leadership audiences who need strategic insight without operational detail. Produce monthly or quarterly, depending on organisational cadence.
Structure executive summaries around:
Risk posture with aggregate risk score, trend direction, and count by severity level. Express in terms leadership understands: “3 high-severity risks require treatment decisions” rather than “mean risk score increased 0.4 points.”
Key incidents describing significant security events, their impact, and response effectiveness. Focus on business impact and lessons learned rather than technical details.
Compliance status showing certification currency, audit finding status, and regulatory obligations met or at risk. Flag upcoming deadlines and resource requirements.
Resource utilisation indicating whether security capacity matches demand. Highlight understaffing, tool gaps, or budget constraints affecting security posture.
Compliance Evidence Package
Compliance evidence packages support audits and regulatory examinations. Compile metric data with supporting documentation demonstrating control operation.
For each metric in scope:
Provide the metric definition, data source, and calculation methodology. Include raw data extracts from source systems with timestamps. Show trend data across the audit period. Document any anomalies and their explanations. Include screenshots or system reports as corroborating evidence.
Auditors require evidence of consistent measurement, not just point-in-time values. A 95% patch compliance rate means more when supported by weekly measurements over 12 months than a single end-of-period snapshot.
Reporting Cadence
| Report Type | Frequency | Audience | Content Focus |
|---|---|---|---|
| Operational dashboard | Real-time | Security team | Current status, active issues |
| Weekly summary | Weekly | Security management | Trends, incidents, priorities |
| Monthly report | Monthly | IT leadership | Posture summary, compliance status |
| Quarterly review | Quarterly | Executive leadership | Risk trends, strategic issues |
| Annual assessment | Annually | Board, governance | Year-over-year comparison, programme maturity |
Align reporting cadence with organisational governance cycles. If the board meets quarterly, produce board-level metrics quarterly. If project steering committees meet monthly, align compliance project updates to that cycle.
Benchmark Sources
External benchmarks contextualise internal metrics against peer organisations. Use benchmarks cautiously: organisational differences in size, sector, risk profile, and measurement methodology limit comparability.
Industry Benchmark Sources
| Source | Coverage | Access | Update Frequency |
|---|---|---|---|
| Verizon DBIR | Incident patterns, breach statistics | Public | Annual |
| IBM Cost of a Data Breach | Breach costs, response metrics | Public summary | Annual |
| SANS Security Awareness | Phishing, training metrics | Public summary | Annual |
| Ponemon Institute | Various security metrics | Sponsored research | Varies |
| CIS Benchmarks | Configuration compliance | Public | Per technology |
| FIRST EPSS | Vulnerability exploitation | Public | Daily |
The Verizon Data Breach Investigations Report provides sector-specific incident patterns useful for threat prioritisation. The IBM Cost of a Data Breach report offers financial impact benchmarks, though figures skew toward larger organisations with formal breach response.
Sector-Specific Sources
Mission-driven organisations benefit from sector-specific benchmarking where available:
CiviCERT shares threat intelligence and incident data among civil society organisations. Participation provides peer comparison on threats targeting the humanitarian and human rights sectors.
NetHope conducts periodic surveys of IT practices among international NGOs, including security metrics. Members access benchmark data for peer comparison.
BOND and similar national infrastructure bodies occasionally publish technology benchmarks for their membership, including security posture indicators.
Benchmark Interpretation
Apply benchmarks as directional indicators rather than absolute targets. A 4% phishing click rate appears poor against a 2% industry benchmark, but investigation may reveal the organisation runs more sophisticated simulations, operates in higher-risk contexts, or has different user populations.
Track internal trends as the primary performance indicator. Improvement from 8% to 4% click rate demonstrates programme effectiveness regardless of external benchmarks. Use external benchmarks to identify potential gaps warranting investigation, not to set targets.
Normalise benchmarks for organisational context. Per-employee metrics suit organisations of similar size. Per-asset metrics suit similar infrastructure complexity. Revenue-normalised metrics suit similar financial scale. A 50-person organisation cannot meaningfully compare absolute vulnerability counts against a 5,000-person organisation.
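Normalisation reduces to comparing per-unit rates (a sketch; the units on both sides, whether employees, assets, or revenue, must match):

```python
def normalised_ratio(metric_a: float, size_a: float,
                     metric_b: float, size_b: float) -> float:
    # Ratio of per-unit rates between two organisations; a value near 1.0
    # indicates comparable posture despite different absolute counts.
    return (metric_a / size_a) / (metric_b / size_b)
```

For instance, a 50-person organisation with 100 vulnerabilities and a 5,000-person organisation with 10,000 vulnerabilities both carry 2 vulnerabilities per person, so the ratio is 1.0 despite the 100-fold difference in absolute counts.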
Metric Selection for Resource-Constrained Organisations
Organisations with limited security resources should focus on high-value metrics that drive action rather than comprehensive measurement programmes.
Essential Metrics (Minimum Viable Measurement)
| Metric | Rationale | Typical Source |
|---|---|---|
| MFA adoption rate | Highest-impact control visibility | Identity provider |
| Critical vulnerability count | Risk exposure indicator | Vulnerability scanner or manual |
| Phishing click rate | Human risk indicator | Simulation platform or manual |
| Backup success rate | Recovery capability | Backup system |
| Patch currency | Basic hygiene indicator | Patch management or manual |
These five metrics provide meaningful security visibility with minimal collection overhead. An organisation with a single IT person can track these monthly using built-in reporting from existing tools.
Scaling Measurement Capability
As security maturity increases, expand measurement in this sequence:
Stage 1 (single IT person): Essential metrics above, collected monthly, reviewed informally.
Stage 2 (small IT team): Add detection metrics (alert volume, incident count), access control metrics (privileged account ratio, dormant accounts), and compliance metrics (training completion, policy exceptions). Monthly reporting to management.
Stage 3 (dedicated security function): Full operational metrics with automated collection. Weekly operational reviews, monthly management reporting, quarterly executive summaries.
Stage 4 (mature security programme): Risk metrics, control effectiveness measurement, continuous monitoring dashboards, board-level risk reporting.
Avoid metric proliferation that consumes resources without driving improvement. Each metric should answer a specific question that influences decisions or demonstrates compliance. Remove metrics that no one reviews or acts upon.