Security Metrics

Security metrics are quantitative measurements that track control effectiveness, risk posture, and compliance status across an organisation’s security programme. This reference provides indicator definitions, calculation formulas, data sources, and benchmark thresholds for metrics applicable to mission-driven organisations operating with varying security maturity levels.

Metric Category Taxonomy

Security metrics divide into three primary categories based on what they measure and who consumes them. Operational metrics track day-to-day security activities and support technical decision-making. Compliance metrics demonstrate adherence to regulatory requirements and contractual obligations. Risk metrics quantify exposure and inform strategic resource allocation.

+------------------------------------------------------------------+
|                    SECURITY METRICS TAXONOMY                     |
+------------------------------------------------------------------+
|                                                                  |
|   +------------------------+                                     |
|   | OPERATIONAL            |   Technical teams                   |
|   | METRICS                |   Daily/weekly review               |
|   |                        |                                     |
|   | - Detection            |                                     |
|   | - Response             |                                     |
|   | - Vulnerability        |                                     |
|   | - Access control       |                                     |
|   | - Availability         |                                     |
|   +------------------------+                                     |
|               |                                                  |
|               v                                                  |
|   +------------------------+                                     |
|   | COMPLIANCE             |   Governance, auditors              |
|   | METRICS                |   Monthly/quarterly review          |
|   |                        |                                     |
|   | - Policy adherence     |                                     |
|   | - Training completion  |                                     |
|   | - Audit findings       |                                     |
|   | - Certification        |                                     |
|   +------------------------+                                     |
|               |                                                  |
|               v                                                  |
|   +------------------------+                                     |
|   | RISK                   |   Leadership, board                 |
|   | METRICS                |   Quarterly/annual review           |
|   |                        |                                     |
|   | - Exposure             |                                     |
|   | - Control coverage     |                                     |
|   | - Residual risk        |                                     |
|   | - Trend indicators     |                                     |
|   +------------------------+                                     |
|                                                                  |
+------------------------------------------------------------------+

Figure 1: Security metric categories by audience and review frequency

The three categories interconnect through aggregation. Operational metrics feed into compliance metrics when aggregated over reporting periods. Compliance metrics contribute to risk metrics when mapped against control frameworks. A single underlying data point, such as an unpatched critical vulnerability, appears differently across categories: as a remediation target in operational metrics, as a policy deviation in compliance metrics, and as quantified exposure in risk metrics.

Operational Metrics

Operational metrics measure the execution of security activities. These metrics support capacity planning, process improvement, and technical decision-making. Data sources include security tools (SIEM, vulnerability scanners, EDR), identity platforms, and ticketing systems.

Detection Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Mean Time to Detect (MTTD) | Average elapsed time from threat occurrence to detection | Sum of (detection_timestamp - occurrence_timestamp) / incident_count | SIEM, EDR | < 24 hours |
| Detection Rate | Percentage of threats detected by internal controls versus external notification | (internally_detected / total_incidents) × 100 | Incident records | > 80% |
| Alert Volume | Total security alerts generated per period | Count of alerts per day/week | SIEM | Trend baseline |
| Alert-to-Incident Ratio | Proportion of alerts escalated to confirmed incidents | (confirmed_incidents / total_alerts) × 100 | SIEM, ticketing | 5-15% |
| False Positive Rate | Percentage of alerts determined to be non-malicious | (false_positives / total_alerts) × 100 | SIEM, analyst records | < 70% |
| Coverage Ratio | Percentage of assets with active monitoring | (monitored_assets / total_assets) × 100 | Asset inventory, SIEM | > 95% |

MTTD calculation example: An organisation experiences 12 security incidents in a quarter. Detection timestamps minus occurrence timestamps yield: 2h, 18h, 4h, 72h, 1h, 8h, 36h, 3h, 12h, 6h, 24h, 48h. MTTD = (2+18+4+72+1+8+36+3+12+6+24+48) / 12 = 19.5 hours.

The occurrence timestamp requires estimation for incidents without precise intrusion time. Use the earliest evidence of compromise from forensic analysis. Where forensic data is unavailable, use the midpoint between the last known-good state and detection.
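The MTTD formula reduces to a few lines of code. A minimal sketch in Python, assuming incident records expose occurrence and detection timestamps (the field names here are illustrative, not from any specific tool):

```python
from datetime import datetime, timedelta

def mttd_hours(incidents):
    """Mean Time to Detect: average hours from occurrence to detection."""
    total = sum(
        (i["detected"] - i["occurred"]).total_seconds() / 3600
        for i in incidents
    )
    return total / len(incidents)

# Reconstruct the worked example: twelve incidents with the stated delays.
base = datetime(2024, 1, 1)
delays_h = [2, 18, 4, 72, 1, 8, 36, 3, 12, 6, 24, 48]
incidents = [{"occurred": base, "detected": base + timedelta(hours=h)}
             for h in delays_h]
print(round(mttd_hours(incidents), 1))  # 19.5
```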

Response Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Mean Time to Respond (MTTR) | Average elapsed time from detection to containment | Sum of (containment_timestamp - detection_timestamp) / incident_count | Incident records | < 4 hours (critical) |
| Mean Time to Recover | Average elapsed time from containment to normal operations | Sum of (recovery_timestamp - containment_timestamp) / incident_count | Incident records | < 24 hours |
| Incident Closure Rate | Percentage of incidents closed within SLA | (closed_within_SLA / total_incidents) × 100 | Ticketing system | > 90% |
| Escalation Rate | Percentage of incidents escalated beyond initial response team | (escalated_incidents / total_incidents) × 100 | Incident records | < 20% |
| Repeat Incident Rate | Percentage of incidents with same root cause as previous incident | (repeat_incidents / total_incidents) × 100 | Incident records | < 10% |

Response metrics require clear definitions of containment and recovery. Containment means the threat can no longer spread or cause additional damage, even if affected systems remain offline. Recovery means affected services operate normally with no ongoing restrictions.

Vulnerability Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Vulnerability Density | Vulnerabilities per asset | total_vulnerabilities / total_assets | Vulnerability scanner | < 5 per asset |
| Critical Vulnerability Count | Number of CVSS 9.0+ vulnerabilities | Count where CVSS >= 9.0 | Vulnerability scanner | 0 (target) |
| Mean Time to Remediate (MTTR-V) | Average days from discovery to remediation | Sum of (remediation_date - discovery_date) / vuln_count | Vulnerability scanner | < 15 days (critical) |
| Patch Compliance Rate | Percentage of systems with current patches | (patched_systems / total_systems) × 100 | Patch management | > 95% |
| Vulnerability Age | Average days vulnerabilities remain open | Sum of days_open / open_vuln_count | Vulnerability scanner | < 30 days |
| Scan Coverage | Percentage of assets scanned in period | (scanned_assets / total_assets) × 100 | Vulnerability scanner | 100% |

Vulnerability metrics should incorporate Exploit Prediction Scoring System (EPSS) alongside CVSS for prioritisation. EPSS provides probability of exploitation within 30 days, enabling risk-based remediation sequencing. A CVSS 7.5 vulnerability with 0.95 EPSS score warrants faster remediation than a CVSS 9.0 vulnerability with 0.02 EPSS score.
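One simple way to operationalise this is to order the remediation queue by EPSS first, with CVSS as the tie-breaker. This is a sketch of one defensible prioritisation rule, not the only one; some teams combine the two scores multiplicatively instead:

```python
def remediation_priority(vulns):
    """Sort by exploitation likelihood (EPSS), then severity (CVSS)."""
    return sorted(vulns, key=lambda v: (v["epss"], v["cvss"]), reverse=True)

# The example from the text: a high-EPSS CVSS 7.5 finding outranks a
# low-EPSS CVSS 9.0 finding. Identifiers are illustrative placeholders.
vulns = [
    {"id": "CVE-A", "cvss": 9.0, "epss": 0.02},
    {"id": "CVE-B", "cvss": 7.5, "epss": 0.95},
]
print([v["id"] for v in remediation_priority(vulns)])  # ['CVE-B', 'CVE-A']
```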

MTTR-V calculation by severity:

| Severity | CVSS Range | Target MTTR-V | Maximum MTTR-V |
|---|---|---|---|
| Critical | 9.0 - 10.0 | 7 days | 15 days |
| High | 7.0 - 8.9 | 15 days | 30 days |
| Medium | 4.0 - 6.9 | 30 days | 90 days |
| Low | 0.1 - 3.9 | 90 days | 180 days |
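The severity windows above can be encoded as a lookup so open vulnerabilities are classified against their remediation targets automatically. A minimal sketch; the status labels and field values are illustrative:

```python
from datetime import date

# Target and maximum MTTR-V windows in days, per the severity table.
MTTR_V_DAYS = {
    "critical": (7, 15),
    "high": (15, 30),
    "medium": (30, 90),
    "low": (90, 180),
}

def severity_from_cvss(score):
    """Map a CVSS base score to the four-level severity scale."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

def sla_status(cvss, discovered, today):
    """Classify an open vulnerability against its remediation window."""
    target, maximum = MTTR_V_DAYS[severity_from_cvss(cvss)]
    age = (today - discovered).days
    if age <= target:
        return "on track"
    if age <= maximum:
        return "at risk"
    return "overdue"

# A critical finding open for 19 days has passed its 15-day maximum.
print(sla_status(9.8, date(2024, 3, 1), date(2024, 3, 20)))  # overdue
```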

Access Control Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Privileged Account Ratio | Privileged accounts as percentage of total | (privileged_accounts / total_accounts) × 100 | Identity platform | < 5% |
| Orphaned Account Count | Active accounts without valid owner | Count where owner_status = inactive | Identity platform, HR system | 0 |
| Dormant Account Count | Accounts without login in 90 days | Count where last_login > 90 days | Identity platform | < 2% of accounts |
| MFA Adoption Rate | Accounts with MFA enabled | (mfa_enabled_accounts / total_accounts) × 100 | Identity platform | 100% |
| Failed Authentication Rate | Failed logins as percentage of total | (failed_logins / total_logins) × 100 | Identity platform | < 5% |
| Access Review Completion | Reviews completed by deadline | (completed_reviews / required_reviews) × 100 | Access governance | 100% |
| Access Request Cycle Time | Average days from request to provisioning | Sum of (provision_date - request_date) / request_count | Ticketing system | < 2 days |

The privileged account ratio benchmark of 5% assumes a standard organisational structure. Organisations with large technical teams or complex infrastructure may legitimately run higher ratios. Establish an internal baseline and track the trend rather than comparing against external benchmarks.
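Several access metrics reduce to simple filters over an identity export. A sketch of the dormant account count, assuming each record carries a last_login date (None for an account never used); the usernames and dates are made up:

```python
from datetime import date

def dormant_accounts(accounts, today, threshold_days=90):
    """Accounts with no login inside the threshold window, or never used."""
    return [
        a for a in accounts
        if a["last_login"] is None
        or (today - a["last_login"]).days > threshold_days
    ]

accounts = [
    {"user": "alice",      "last_login": date(2024, 5, 30)},
    {"user": "bob",        "last_login": date(2024, 1, 10)},
    {"user": "svc-report", "last_login": None},
]
dormant = dormant_accounts(accounts, today=date(2024, 6, 1))
print([a["user"] for a in dormant])  # ['bob', 'svc-report']
```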

Availability Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Security Service Uptime | Percentage of time security services operational | (total_minutes - downtime_minutes) / total_minutes × 100 | Monitoring platform | > 99.9% |
| Backup Success Rate | Percentage of backups completed successfully | (successful_backups / attempted_backups) × 100 | Backup system | > 99% |
| Recovery Test Success | Percentage of recovery tests meeting RTO | (successful_tests / total_tests) × 100 | DR testing records | 100% |

Compliance Metrics

Compliance metrics demonstrate adherence to policies, regulations, and contractual requirements. These metrics support audit preparation, regulatory reporting, and governance oversight. Data sources include policy management systems, training platforms, audit findings, and certification bodies.

Policy Adherence Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Policy Exception Count | Active approved policy exceptions | Count of approved exceptions | GRC platform | Trend baseline |
| Exception Age | Average days exceptions remain open | Sum of days_open / exception_count | GRC platform | < 180 days |
| Policy Violation Rate | Detected violations per period | violations / reporting_period | DLP, access logs, audit | Trend baseline |
| Control Implementation Rate | Percentage of required controls implemented | (implemented_controls / required_controls) × 100 | GRC platform | > 95% |

Policy exceptions require scheduled review. Track exception renewal rate to identify controls that consistently require exceptions, indicating potential policy misalignment with operational reality.

Training Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Training Completion Rate | Percentage completing required training | (completed / required) × 100 | LMS | 100% |
| Training Currency | Percentage with training completed within validity period | (current / total) × 100 | LMS | > 95% |
| Phishing Simulation Click Rate | Percentage clicking simulated phishing | (clicked / recipients) × 100 | Phishing platform | < 5% |
| Phishing Report Rate | Percentage reporting simulated phishing | (reported / recipients) × 100 | Phishing platform | > 70% |
| Time to Complete Training | Average days from assignment to completion | Sum of (completion_date - assignment_date) / count | LMS | < 14 days |

Phishing metrics require careful interpretation. Click rates vary by simulation difficulty, department, and timing. Track trend over time with consistent simulation difficulty rather than comparing against external benchmarks. A 3% click rate on a sophisticated simulation indicates better awareness than a 2% click rate on an obvious simulation.

Audit Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Open Audit Findings | Count of unresolved findings | Count where status = open | Audit tracking | Trend baseline |
| Finding Closure Rate | Findings closed within target date | (closed_on_time / total_findings) × 100 | Audit tracking | > 90% |
| Finding Age | Average days findings remain open | Sum of days_open / finding_count | Audit tracking | < 90 days |
| Repeat Findings | Findings on same issue as prior audit | Count where repeat = true | Audit tracking | 0 |
| Critical Finding Count | High-severity findings open | Count where severity = critical | Audit tracking | 0 |

Audit finding severity typically follows a four-level scale: critical (immediate risk requiring executive escalation), high (significant risk requiring remediation within 30 days), medium (moderate risk with 90-day remediation), and low (minor risk with 180-day remediation).

Certification Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Certification Status | Current state of each certification | Enumeration | Certification register | All current |
| Certification Coverage | Percentage of operations under certification | (certified_operations / total_operations) × 100 | Scope documentation | Per requirement |
| Days to Recertification | Days until certification expires | expiry_date - current_date | Certification register | > 90 days |
| Nonconformity Count | Open nonconformities from certification audits | Count where status = open | Certification body reports | 0 major |

Risk Metrics

Risk metrics quantify security exposure for strategic decision-making. These metrics support resource allocation, risk acceptance decisions, and board reporting. Data sources include risk registers, control assessments, and aggregated operational data.

Exposure Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Internet-Facing Asset Count | Assets directly accessible from internet | Count where internet_facing = true | Asset inventory | Minimised |
| External Attack Surface | Exposed services across internet-facing assets | Count of exposed ports/services | Attack surface scanner | Trend baseline |
| Third-Party Connection Count | Active connections to external parties | Count of integrations | Integration inventory | Documented |
| Data Exposure Score | Composite score of sensitive data accessibility | Weighted sum by classification | DLP, access analysis | Trend baseline |
| Shadow IT Count | Unsanctioned applications in use | Count of detected applications | CASB, network analysis | Minimised |

The external attack surface metric requires active scanning to enumerate exposed services. Tools such as Shodan, Censys, or dedicated attack surface management platforms provide this data. Track the ratio of intended versus unintended exposure.

Control Coverage Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Framework Coverage | Percentage of framework controls implemented | (implemented / required) × 100 | GRC platform | Per framework |
| Control Effectiveness | Percentage of controls operating effectively | (effective / implemented) × 100 | Control testing | > 90% |
| Compensating Control Ratio | Percentage of controls using compensating measures | (compensating / total) × 100 | Control inventory | < 10% |
| Control Test Currency | Controls tested within required period | (current_tests / total_controls) × 100 | Testing records | 100% |

Framework coverage by common frameworks:

| Framework | Typical Control Count | Minimum Coverage Target |
|---|---|---|
| CIS Controls v8 (IG1) | 56 safeguards | 100% |
| CIS Controls v8 (IG2) | 130 safeguards | 90% |
| ISO 27001:2022 | 93 controls | 95% (certification) |
| NIST CSF 2.0 | 106 subcategories | 80% |

Implementation Group 1 (IG1) of CIS Controls represents essential cyber hygiene appropriate for organisations with limited security resources. IG2 adds controls for organisations with moderate resources and data sensitivity.

Residual Risk Metrics

| Metric | Definition | Formula | Data Source | Benchmark |
|---|---|---|---|---|
| Risk Score | Composite risk rating | likelihood × impact (quantified) | Risk register | Below tolerance |
| Risk Count by Level | Distribution across risk levels | Count per level | Risk register | Minimise high/critical |
| Risk Treatment Progress | Percentage of treatment plans on track | (on_track / total_plans) × 100 | Risk register | > 90% |
| Accepted Risk Value | Aggregate value of accepted risks | Sum of accepted risk values | Risk register | Below acceptance threshold |
| Risk Trend | Change in aggregate risk over period | current_score - previous_score | Risk register | Decreasing or stable |

Residual risk quantification requires consistent methodology. A 5×5 risk matrix with likelihood (1-5) and impact (1-5) produces scores from 1-25. Define thresholds: critical (20-25), high (15-19), medium (8-14), low (1-7). For quantitative risk analysis, express impact in monetary terms and likelihood as annual probability.
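The 5×5 thresholds translate directly into code. A minimal sketch of the matrix banding described above:

```python
def risk_level(likelihood, impact):
    """Band a 5x5 matrix score using the stated thresholds:
    critical (20-25), high (15-19), medium (8-14), low (1-7)."""
    score = likelihood * impact
    if score >= 20:
        return "critical"
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

print(risk_level(4, 5))  # critical (score 20)
print(risk_level(3, 3))  # medium (score 9)
```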

Quantitative risk calculation example: A ransomware scenario has 15% annual probability and estimated £400,000 impact (recovery costs, downtime, reputation). Annualised Loss Expectancy (ALE) = 0.15 × £400,000 = £60,000. Controls costing less than £60,000 annually that reduce either probability or impact are cost-effective.
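The same worked example as code, including the cost-effectiveness test; the function names are illustrative:

```python
def annualised_loss_expectancy(annual_probability, impact):
    """ALE = annual probability of occurrence x single-event impact."""
    return annual_probability * impact

def control_is_cost_effective(annual_control_cost, ale_reduction):
    """A control pays for itself when it removes more ALE than it costs."""
    return ale_reduction > annual_control_cost

# Ransomware scenario from the text: 15% annual probability, GBP 400,000 impact.
ale = annualised_loss_expectancy(0.15, 400_000)
print(int(ale))  # 60000
print(control_is_cost_effective(45_000, ale))  # True
```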

Measurement Methodology

Metric validity depends on consistent measurement methodology. Define data sources, collection frequency, calculation procedures, and quality controls for each metric.

Data Collection Approaches

Automated collection retrieves data directly from source systems via APIs, log exports, or database queries. Automated collection ensures consistency and timeliness but requires integration effort and ongoing maintenance. SIEM platforms, vulnerability scanners, and identity providers typically support automated data export.

Manual collection involves human data entry from observations, interviews, or document review. Manual collection suits metrics without automated sources, such as control effectiveness assessments or qualitative risk ratings. Manual collection introduces subjectivity and delays; standardise collection procedures and train data collectors.

Derived metrics calculate from other metrics rather than raw data. Mean Time to Detect derives from incident records that themselves come from SIEM and ticketing integration. Document derivation chains to enable root cause analysis of metric anomalies.

Collection Frequency

| Metric Category | Minimum Collection Frequency | Reporting Frequency |
|---|---|---|
| Detection and response | Continuous | Weekly |
| Vulnerability | Weekly scan | Weekly |
| Access control | Daily | Monthly |
| Compliance | Per event | Monthly |
| Training | Per completion | Monthly |
| Risk | Per assessment | Quarterly |

Higher collection frequency enables trend analysis and anomaly detection but increases storage and processing requirements. Balance granularity against operational capacity. Organisations with limited resources should prioritise weekly vulnerability scans and monthly compliance rollups over continuous collection.

Data Quality Controls

Metric accuracy requires systematic quality controls:

Source validation confirms data sources provide complete and accurate information. Cross-reference counts between systems: does the asset inventory match the vulnerability scanner’s asset count? Discrepancies indicate coverage gaps or inventory inaccuracies.

Calculation verification ensures formulas produce expected results. Test calculations with known values. A detection rate calculation should yield 75% when 3 of 4 incidents were internally detected.
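Calculation verification is cheap to automate: express each formula as a function and assert against known values. A sketch for the detection rate check just mentioned:

```python
def detection_rate(internally_detected, total_incidents):
    """Percentage of incidents detected by internal controls."""
    if total_incidents == 0:
        raise ValueError("no incidents in reporting period")
    return internally_detected / total_incidents * 100

# Known-value check: 3 of 4 internally detected must yield 75%.
assert detection_rate(3, 4) == 75.0
```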

Trend analysis identifies anomalies requiring investigation. A sudden 50% drop in alert volume suggests monitoring failure rather than improved security. Establish thresholds for anomaly alerts on key metrics.
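A simple z-score check against recent history is often sufficient for metric-level anomaly alerts, and it flags a sudden drop just as readily as a spike. A sketch with illustrative daily alert counts; the threshold of 3 standard deviations is a common default, not a prescribed value:

```python
from statistics import mean, stdev

def volume_anomaly(history, current, z_threshold=3.0):
    """Flag a value that deviates sharply from recent history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

history = [820, 790, 860, 840, 810, 835, 845]  # daily alert counts
print(volume_anomaly(history, 400))  # True  - investigate monitoring
print(volume_anomaly(history, 830))  # False - within normal variation
```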

Definition consistency maintains stable metric definitions over time. Changing the definition of “incident” mid-year invalidates trend comparisons. Document definitions and version changes.

Reporting Formats

Security metrics serve different audiences requiring different reporting formats. Technical teams need detailed operational data. Leadership needs summarised risk posture. Auditors need compliance evidence.

Operational Dashboard

Operational dashboards provide real-time or near-real-time visibility for security operations teams. Design for at-a-glance status assessment with drill-down capability.

+-------------------------------------------------------------------------+
|                      SECURITY OPERATIONS DASHBOARD                      |
+-------------------------------------------------------------------------+
|                                                                         |
| DETECTION              | RESPONSE               | VULNERABILITIES       |
| +------------------+   | +------------------+   | +-----------------+   |
| | MTTD: 4.2h       |   | | MTTR: 2.1h       |   | | Critical: 3     |   |
| | [====>        ]  |   | | [======>      ]  |   | | [!!]            |   |
| | Target: 24h      |   | | Target: 4h       |   | | Target: 0       |   |
| +------------------+   | +------------------+   | +-----------------+   |
|                        |                        |                       |
| Alerts: 847/day        | Open: 12               | High: 47              |
| Incidents: 3           | P1: 1                  | Overdue: 8            |
| Detection: 82%         | Escalated: 2           | Coverage: 98%         |
|                        |                        |                       |
+------------------------+------------------------+-----------------------+
| ALERT TREND (7 days)                | INCIDENT STATUS                   |
|                                     |                                   |
| 1200 |         *                    | +-----------------------------+   |
| 1000 |     *           *            | | New           | ###  (3)    |   |
|  800 | *           *       *        | | Investigating | #### (4)    |   |
|  600 |                         *    | | Contained     | ##   (2)    |   |
|  400 |                              | | Resolved      | ###  (3)    |   |
|      +---+---+---+---+---+---+---+  | +-----------------------------+   |
|        M   T   W   T   F   S   S    |                                   |
|                                     |                                   |
+-------------------------------------+-----------------------------------+
| TOP ALERTS (24h)                    | RECENT INCIDENTS                  |
|                                     |                                   |
| 1. Brute force (127)                | INC-2024-127: Phishing            |
| 2. Malware detected (43)            | INC-2024-126: Malware             |
| 3. Policy violation (38)            | INC-2024-125: Unauthorised        |
| 4. Anomalous login (29)             |   access attempt                  |
|                                     |                                   |
+-------------------------------------------------------------------------+

Figure 2: Operational dashboard layout showing key metrics with visual status indicators

Dashboard design principles:

Place the most critical metrics in the upper-left quadrant where attention naturally focuses. Use colour coding consistently: green for within target, amber for approaching threshold, red for exceeded threshold. Provide context with targets, trends, and comparisons rather than raw numbers alone. Limit the dashboard to 12-15 metrics to avoid cognitive overload.
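Consistent colour coding is easiest to enforce when the status band is computed rather than hand-assigned. A sketch of one possible green/amber/red rule, where amber begins at 80% of the threshold; the warn_ratio value is an arbitrary choice, not a standard:

```python
def rag_status(value, target, warn_ratio=0.8, lower_is_better=True):
    """Green within target, amber approaching it, red beyond it."""
    if lower_is_better:
        if value <= target * warn_ratio:
            return "green"
        return "amber" if value <= target else "red"
    if value >= target:
        return "green"
    return "amber" if value >= target * warn_ratio else "red"

print(rag_status(4.2, 24))                        # green: MTTD vs 24h target
print(rag_status(3, 0))                           # red: critical vulns vs 0
print(rag_status(98, 95, lower_is_better=False))  # green: coverage vs 95%
```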

Executive Summary

Executive summaries distil security posture for leadership audiences who need strategic insight without operational detail. Produce monthly or quarterly, depending on organisational cadence.

Structure executive summaries around:

Risk posture with aggregate risk score, trend direction, and count by severity level. Express in terms leadership understands: “3 high-severity risks require treatment decisions” rather than “mean risk score increased 0.4 points.”

Key incidents describing significant security events, their impact, and response effectiveness. Focus on business impact and lessons learned rather than technical details.

Compliance status showing certification currency, audit finding status, and regulatory obligations met or at risk. Flag upcoming deadlines and resource requirements.

Resource utilisation indicating whether security capacity matches demand. Highlight understaffing, tool gaps, or budget constraints affecting security posture.

Compliance Evidence Package

Compliance evidence packages support audits and regulatory examinations. Compile metric data with supporting documentation demonstrating control operation.

For each metric in scope:

Provide the metric definition, data source, and calculation methodology. Include raw data extracts from source systems with timestamps. Show trend data across the audit period. Document any anomalies and their explanations. Include screenshots or system reports as corroborating evidence.

Auditors require evidence of consistent measurement, not just point-in-time values. A 95% patch compliance rate carries more weight when supported by weekly measurements over 12 months than by a single end-of-period snapshot.

Reporting Cadence

| Report Type | Frequency | Audience | Content Focus |
|---|---|---|---|
| Operational dashboard | Real-time | Security team | Current status, active issues |
| Weekly summary | Weekly | Security management | Trends, incidents, priorities |
| Monthly report | Monthly | IT leadership | Posture summary, compliance status |
| Quarterly review | Quarterly | Executive leadership | Risk trends, strategic issues |
| Annual assessment | Annually | Board, governance | Year-over-year comparison, programme maturity |

Align reporting cadence with organisational governance cycles. If the board meets quarterly, produce board-level metrics quarterly. If project steering committees meet monthly, align compliance project updates to that cycle.

Benchmark Sources

External benchmarks contextualise internal metrics against peer organisations. Use benchmarks cautiously: organisational differences in size, sector, risk profile, and measurement methodology limit comparability.

Industry Benchmark Sources

| Source | Coverage | Access | Update Frequency |
|---|---|---|---|
| Verizon DBIR | Incident patterns, breach statistics | Public | Annual |
| IBM Cost of a Data Breach | Breach costs, response metrics | Public summary | Annual |
| SANS Security Awareness | Phishing, training metrics | Public summary | Annual |
| Ponemon Institute | Various security metrics | Sponsored research | Varies |
| CIS Benchmarks | Configuration compliance | Public | Per technology |
| FIRST EPSS | Vulnerability exploitation | Public | Daily |

The Verizon Data Breach Investigations Report provides sector-specific incident patterns useful for threat prioritisation. The IBM Cost of a Data Breach report offers financial impact benchmarks, though figures skew toward larger organisations with formal breach response.

Sector-Specific Sources

Mission-driven organisations benefit from sector-specific benchmarking where available:

CiviCERT shares threat intelligence and incident data among civil society organisations. Participation provides peer comparison on threats targeting the humanitarian and human rights sectors.

NetHope conducts periodic surveys of IT practices among international NGOs, including security metrics. Members access benchmark data for peer comparison.

BOND and similar national infrastructure bodies occasionally publish technology benchmarks for their membership, including security posture indicators.

Benchmark Interpretation

Apply benchmarks as directional indicators rather than absolute targets. A 4% phishing click rate appears poor against a 2% industry benchmark, but investigation may reveal the organisation runs more sophisticated simulations, operates in higher-risk contexts, or has different user populations.

Track internal trends as the primary performance indicator. Improvement from 8% to 4% click rate demonstrates programme effectiveness regardless of external benchmarks. Use external benchmarks to identify potential gaps warranting investigation, not to set targets.

Normalise benchmarks for organisational context. Per-employee metrics suit organisations of similar size. Per-asset metrics suit similar infrastructure complexity. Revenue-normalised metrics suit similar financial scale. A 50-person organisation cannot meaningfully compare absolute vulnerability counts against a 5,000-person organisation.
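The normalisation itself is a one-line calculation; applying it consistently is what makes the comparison meaningful. A sketch with made-up counts:

```python
def per_employee(metric_value, headcount):
    """Normalise an absolute count for cross-organisation comparison."""
    return metric_value / headcount

# Absolute counts mislead: the larger organisation looks worse until
# the figures are normalised. Numbers are illustrative.
print(per_employee(40, 50))        # 0.8 open vulnerabilities per person
print(per_employee(2_000, 5_000))  # 0.4 - the larger org fares better
```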

Metric Selection for Resource-Constrained Organisations

Organisations with limited security resources should focus on high-value metrics that drive action rather than comprehensive measurement programmes.

Essential Metrics (Minimum Viable Measurement)

| Metric | Rationale | Typical Source |
|---|---|---|
| MFA adoption rate | Highest-impact control visibility | Identity provider |
| Critical vulnerability count | Risk exposure indicator | Vulnerability scanner or manual |
| Phishing click rate | Human risk indicator | Simulation platform or manual |
| Backup success rate | Recovery capability | Backup system |
| Patch currency | Basic hygiene indicator | Patch management or manual |

These five metrics provide meaningful security visibility with minimal collection overhead. An organisation with a single IT person can track these monthly using built-in reporting from existing tools.

Scaling Measurement Capability

As security maturity increases, expand measurement in this sequence:

Stage 1 (single IT person): Essential metrics above, collected monthly, reviewed informally.

Stage 2 (small IT team): Add detection metrics (alert volume, incident count), access control metrics (privileged account ratio, dormant accounts), and compliance metrics (training completion, policy exceptions). Monthly reporting to management.

Stage 3 (dedicated security function): Full operational metrics with automated collection. Weekly operational reviews, monthly management reporting, quarterly executive summaries.

Stage 4 (mature security programme): Risk metrics, control effectiveness measurement, continuous monitoring dashboards, board-level risk reporting.

Avoid metric proliferation that consumes resources without driving improvement. Each metric should answer a specific question that influences decisions or demonstrates compliance. Remove metrics that no one reviews or acts upon.
