Threat Intelligence

Threat intelligence transforms raw data about adversaries, their methods, and their targets into actionable knowledge that improves defensive decisions. The discipline encompasses identifying relevant threat actors, understanding their capabilities and intentions, tracking their infrastructure and tooling, and integrating this understanding into detection and response capabilities. For mission-driven organisations, threat intelligence addresses a specific gap: generic security guidance fails to account for the distinct threat landscape facing charities, humanitarian organisations, and advocacy groups.

Threat Intelligence
Processed, contextualised information about threats that enables informed decisions. Distinguished from raw threat data by analysis that establishes relevance, reliability, and actionability.
Indicator of Compromise (IOC)
Observable artefact that indicates potential malicious activity. Examples include IP addresses, domain names, file hashes, email addresses, and behavioural patterns. IOCs represent the technical evidence of threat actor activity.
Tactics, Techniques, and Procedures (TTPs)
Behavioural patterns describing how threat actors operate. Tactics describe high-level goals, techniques describe methods to achieve those goals, and procedures describe specific implementations. TTPs are more durable than IOCs because adversaries change infrastructure more readily than behaviour.
Intelligence Requirement
A specific question that intelligence collection and analysis should answer. Requirements drive collection priorities and ensure intelligence production aligns with organisational needs rather than available data.
Threat Actor
An entity with capability and intent to cause harm. Actors range from opportunistic criminals to nation-state groups, each with distinct motivations, resources, and targeting patterns.
Campaign
A coordinated series of activities by a threat actor toward a specific objective. Campaigns have defined timeframes, targets, and methods, distinguishing them from ongoing opportunistic activity.

Intelligence Lifecycle

The intelligence lifecycle provides a systematic approach to converting raw data into actionable intelligence. Each phase builds on the previous, creating a feedback loop where intelligence consumption informs future collection priorities.

+------------------------------------------------------------------+
| INTELLIGENCE LIFECYCLE |
+------------------------------------------------------------------+
| |
| +-------------+ +-------------+ +-------------+ |
| | | | | | | |
| | DIRECTION +---->+ COLLECTION +---->+ PROCESSING | |
| | | | | | | |
| +------+------+ +-------------+ +------+------+ |
| ^ | |
| | v |
| | +------+------+ |
| | | | |
| | | ANALYSIS | |
| | | | |
| | +------+------+ |
| | | |
| | v |
| +------+------+ +------+------+ |
| | | | | |
| | FEEDBACK |<------------------------+ DISSEMINATE | |
| | | | | |
| +-------------+ +-------------+ |
| |
+------------------------------------------------------------------+

Figure 1: Intelligence lifecycle phases showing the continuous feedback loop

Direction establishes intelligence requirements based on organisational needs. Requirements take the form of specific questions: “Which threat actors target organisations in our sector?” or “What infrastructure do credential phishing campaigns against humanitarian organisations use?” Effective requirements are answerable through collection, specific enough to guide analysis, and tied to decisions the organisation needs to make. Poorly formed requirements such as “What threats exist?” produce unfocused collection and generic analysis.

Requirements derive from several sources. Security leadership identifies strategic questions about the threat landscape. Security operations teams identify tactical needs for detection improvement. Incident responders identify specific questions arising from active investigations. Programme staff in high-risk contexts identify operational questions about threats to their activities. Each source produces different requirement types with different time horizons and specificity levels.

Collection gathers raw data from sources that might answer intelligence requirements. Collection sources span technical feeds providing IOCs, human intelligence from sector networks, open source research on threat actors, and commercial intelligence services. Effective collection balances breadth against focus, gathering sufficient data to answer requirements without overwhelming processing capacity. Collection planning maps sources to requirements, identifying gaps where no source addresses a requirement and redundancies where multiple sources provide the same information.

Processing converts raw collected data into formats suitable for analysis. Processing includes normalising data formats, deduplicating entries, enriching indicators with additional context, validating source reliability, and storing data for retrieval. A domain name collected from a phishing report requires DNS resolution to identify IP addresses, WHOIS lookup to identify registration patterns, and correlation with existing intelligence to identify known threat actor infrastructure. Processing transforms isolated data points into connected, queryable information.
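The normalisation and deduplication steps above can be sketched in a few lines. This is an illustrative fragment, not a production pipeline: the `Indicator` record, the feed names, and the domain values are all hypothetical, and real processing would add the DNS, WHOIS, and correlation enrichment described in the text.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """Normalised representation of a raw collected indicator."""
    value: str                     # e.g. a domain name
    ioc_type: str                  # "domain", "ip", "hash", ...
    sources: set = field(default_factory=set)

def normalise_domain(raw: str) -> str:
    """Lower-case, strip scheme/path and trailing dots so duplicates merge."""
    d = raw.strip().lower()
    d = d.removeprefix("http://").removeprefix("https://")
    return d.split("/")[0].rstrip(".")

def process(raw_entries):
    """Deduplicate raw (value, source) pairs into Indicator records,
    tracking every source that reported each indicator."""
    seen: dict[str, Indicator] = {}
    for value, source in raw_entries:
        key = normalise_domain(value)
        ind = seen.setdefault(key, Indicator(value=key, ioc_type="domain"))
        ind.sources.add(source)
    return list(seen.values())

# Hypothetical raw collection: the same domain reported in two formats.
feeds = [
    ("HTTPS://evil.example.com/login", "phishing-report"),
    ("evil.example.com", "sector-feed"),
    ("other.example.net", "osint"),
]
indicators = process(feeds)
```

Merging the two reports of the same domain is what turns isolated data points into queryable information: an indicator confirmed by multiple independent sources warrants more confidence than a single feed entry.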

Analysis examines processed data to produce intelligence that answers requirements. Analysis involves identifying patterns across data points, attributing activity to threat actors, assessing actor capabilities and intentions, evaluating the reliability and relevance of information, and drawing conclusions that inform decisions. Analysis distinguishes intelligence from data aggregation. Raw IOC feeds provide data; analysis determines which IOCs represent threats to a specific organisation and what defensive actions those threats warrant.

Dissemination delivers intelligence to consumers in formats appropriate to their needs and clearances. Strategic intelligence for leadership requires executive summaries emphasising business impact. Tactical intelligence for security operations requires technical detail enabling detection rule creation. Operational intelligence for field staff requires actionable guidance without technical jargon. Dissemination timing matters: intelligence about an active campaign targeting the organisation requires immediate distribution; analysis of long-term threat trends supports quarterly reporting.

Feedback from intelligence consumers informs future direction. Feedback identifies which intelligence proved useful, which requirements remain unanswered, and what new questions arose from previous intelligence. Without feedback, the cycle produces intelligence that may not address actual needs. Formalised feedback mechanisms ensure consumer input reaches analysts and shapes collection priorities.

Intelligence Sources

Intelligence sources vary in accessibility, cost, reliability, and the type of intelligence they provide. Effective programmes combine multiple source types to address different requirements.

Open Source Intelligence

Open source intelligence (OSINT) derives from publicly available information. OSINT sources include security researcher blogs and publications, vendor threat reports, social media monitoring, paste sites where actors leak data, code repositories containing malicious tools, and dark web forums accessible without special access. OSINT provides substantial value at minimal cost but requires analytical effort to separate signal from noise.

Security vendor reports represent a primary OSINT source. Vendors including Mandiant, CrowdStrike, Recorded Future, and Cisco Talos publish detailed analyses of threat actors and campaigns. These reports provide TTP information, IOCs, and attribution analysis. Vendor reports favour threats affecting their customer base, which skews toward large enterprises in wealthy countries. Threats targeting mission-driven organisations receive less vendor attention, creating gaps that sector-specific intelligence sharing must address.

Security researcher publications on platforms such as Twitter/X, Mastodon, and personal blogs often surface emerging threats before vendor reports. Researchers share IOCs from investigations, document new malware families, and attribute campaigns to threat actors. Following researchers who focus on relevant threat types provides early warning of emerging threats. The informal nature of researcher publications requires careful evaluation of reliability and potential for misinformation.

Paste sites and code repositories contain data from breaches, leaked credentials, and malware source code. Monitoring these sources identifies when organisational data appears in breaches and tracks threat actor tooling evolution. Automated monitoring services reduce the manual effort of checking multiple sites.

Commercial Intelligence Services

Commercial threat intelligence services provide curated intelligence feeds, analyst support, and platform capabilities. Services range from IOC feeds costing hundreds of dollars annually to comprehensive platforms with dedicated analyst support costing over $100,000 annually. Commercial services reduce internal analytical burden but introduce vendor dependency and recurring costs.

IOC feeds provide machine-readable indicators for integration with security tools. Feed quality varies substantially. High-quality feeds provide contextualised indicators with confidence scores, actor attribution, and relevance tagging. Low-quality feeds provide bulk indicators without context, generating false positives and analyst fatigue. Evaluating feed quality requires comparing feed indicators against known threats and measuring detection effectiveness.

Intelligence platforms combine feeds with analytical tools, enabling correlation across sources, tracking of threat actors, and custom alerting. Platforms from Recorded Future, Mandiant Advantage, and similar providers offer varying capabilities and price points. Platform value depends on having sufficient analytical capacity to use platform features; purchasing an advanced platform without dedicated analysts produces expensive shelfware.

Analyst services provide human intelligence support. Services range from access to analyst queries to dedicated analysts supporting the organisation. Analyst services suit organisations lacking internal intelligence expertise, providing expert interpretation of threat data relevant to the organisation’s context.

Sector Information Sharing

Sector-specific information sharing provides intelligence about threats targeting similar organisations. Information Sharing and Analysis Centres (ISACs) and sector coordination bodies facilitate sharing among members. For mission-driven organisations, relevant sharing communities include the NGO-ISAC, CiviCERT, and regional humanitarian coordination mechanisms.

The NGO-ISAC provides threat intelligence specific to non-governmental organisations, including indicators from incidents affecting members, analysis of campaigns targeting the sector, and coordination during active threats. Membership provides access to intelligence unavailable through commercial sources because it derives from incidents affecting similar organisations.

CiviCERT coordinates security information among civil society organisations, providing incident response support and threat intelligence. CiviCERT focuses on threats from state actors targeting human rights organisations, journalists, and activists, addressing threats that commercial intelligence services underserve.

Humanitarian coordination mechanisms including the Inter-Agency Standing Committee and cluster coordination systems occasionally address digital threats. These mechanisms provide limited technical intelligence but offer context about threats affecting humanitarian operations in specific regions.

Sector sharing effectiveness depends on member participation. Sharing is reciprocal: organisations receiving intelligence should contribute intelligence from their own incidents. Reluctance to share incident information limits the collective intelligence available to all members.

+------------------------------------------------------------------+
| INTELLIGENCE SOURCE EVALUATION |
+------------------------------------------------------------------+
| |
| Source Reliability Cost Coverage |
| ---------------------------------------------------------------- |
| |
| Vendor threat reports High Free General |
| Security researchers Variable Free Emerging |
| Commercial IOC feeds Variable $$$ Broad |
| Intelligence platforms High $$$$ Comprehensive |
| NGO-ISAC High $ Sector |
| CiviCERT High Free Civil society |
| Paste site monitoring Low Free/$ Breach data |
| Dark web monitoring Variable $$ Criminal |
| |
| Reliability scale: How trustworthy is the intelligence? |
| Cost scale: Free, $ (<$1k), $$ (<$10k), $$$ (<$50k), $$$$ (>$50k)|
| Coverage: What threat types does the source address? |
| |
+------------------------------------------------------------------+

Figure 2: Intelligence source evaluation comparing reliability, cost, and coverage

Internal Intelligence

Internal intelligence derives from the organisation’s own security telemetry. Logs from security tools, incident investigations, and user reports contain intelligence about threats actually targeting the organisation. Internal intelligence has the highest relevance but requires analytical capacity to extract insights from operational data.

Security tool logs capture attempted attacks, blocked malware, and suspicious activity. Firewall logs reveal scanning and exploitation attempts. Email security logs reveal phishing campaigns. Endpoint detection logs reveal malware targeting endpoints. Aggregating and analysing these logs identifies patterns indicating targeted campaigns versus opportunistic attacks.
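Separating targeted campaigns from opportunistic noise often starts with simple frequency analysis over aggregated logs. The sketch below assumes hypothetical email-security log entries and an arbitrary threshold; real analysis would weigh recency, targeted roles, and sender infrastructure as well.

```python
from collections import Counter

# Hypothetical email-security log: (sender_domain, targeted_role) pairs.
log = [
    ("payro11-update.example", "finance"),
    ("payro11-update.example", "finance"),
    ("payro11-update.example", "director"),
    ("random-spam.example", "comms"),
]

def campaign_candidates(entries, threshold=3):
    """Flag sender domains seen repeatedly -- a crude signal that a
    coordinated campaign, rather than opportunistic spam, is under way."""
    counts = Counter(domain for domain, _ in entries)
    return [d for d, n in counts.items() if n >= threshold]

suspects = campaign_candidates(log)
```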

Incident investigations produce detailed intelligence about successful attacks. Post-incident analysis identifies attacker TTPs, infrastructure, and objectives. Documenting incident intelligence enables detection of similar attacks in the future and contributes to sector sharing.

User reports of suspicious emails, calls, and contacts provide intelligence about social engineering targeting the organisation. Users encounter attacks that technical controls miss. Encouraging and analysing user reports reveals threat actor targeting patterns and pretexts.

Sector-Specific Threats

Mission-driven organisations face a distinct threat landscape shaped by their work, data holdings, operating contexts, and public profiles. Understanding sector-specific threats enables prioritised defence against the most relevant actors.

Nation-State Targeting

Nation-state actors target mission-driven organisations for several reasons. Organisations working on human rights documentation, election monitoring, or governance reform threaten authoritarian regimes and attract state-sponsored targeting. Organisations operating in conflict zones possess information about parties to conflicts. Organisations providing services to refugees and displaced populations hold data that state actors seek for immigration enforcement or persecution of returnees.

Nation-state actors possess sophisticated capabilities including custom malware, zero-day exploits, and extensive infrastructure. They conduct long-term operations against targets, maintaining persistent access to compromised networks for months or years. Attribution to nation-state actors remains challenging; campaigns may operate through contractors or criminal proxies that obscure state involvement.

Organisations should assess nation-state threat relevance based on their activities. An organisation documenting human rights abuses by a specific government faces direct targeting risk from that government’s intelligence services. An organisation providing humanitarian services without political dimensions faces lower nation-state risk but may still encounter state interest in beneficiary data.

Documented nation-state campaigns against civil society organisations include APT28 (Russia) targeting Ukrainian civil society, APT42 (Iran) targeting human rights organisations, and various Chinese groups targeting Tibetan and Uyghur organisations. Tracking these known actors and their TTPs enables detection of their activity.

Hacktivism and Ideological Actors

Hacktivist groups target organisations based on ideological opposition to their mission or activities. Targets include organisations perceived as supporting unpopular governments, religious groups, or political positions. Hacktivist attacks typically aim for disruption and public embarrassment rather than persistent access.

Hacktivist capabilities vary widely. Some groups possess sophisticated technical skills; others rely on commodity tools and publicly available exploits. Common hacktivist tactics include website defacement, distributed denial of service (DDoS) attacks, and data theft followed by public leaks.

DDoS attacks represent a frequent hacktivist tactic. Attacks flood organisational web infrastructure with traffic, causing service outages. DDoS services available for hire reduce the technical barrier to launching attacks, enabling ideologically motivated individuals to disrupt targets. Organisations should assess DDoS risk based on public profile and controversial activities.

Data leaks aim to embarrass organisations through selective or manipulated disclosure of internal communications. Leaked data may be authentic, selectively edited to mislead, or fabricated entirely. Organisations handling sensitive data should consider how leaked information might be weaponised.

Criminal Actors

Criminal threat actors target organisations for financial gain through ransomware, business email compromise, payment fraud, and data theft for sale. Criminal targeting is largely opportunistic, based on perceived ability to pay ransoms and security posture rather than organisational mission.

Ransomware groups encrypt organisational data and demand payment for decryption. Ransomware attacks against mission-driven organisations have increased, with attackers perceiving that service disruption pressures organisations to pay. Public bodies and humanitarian organisations, including the Scottish Environment Protection Agency, have suffered ransomware attacks causing substantial operational disruption.

Business email compromise (BEC) involves attackers impersonating executives or partners to redirect payments. BEC targeting mission-driven organisations often exploits grant payment processes or vendor payments. Attackers research organisational structure and payment processes before launching targeted social engineering.

Criminal actors typically invest less in individual targets than nation-state actors but compensate through volume. Automated scanning identifies vulnerable systems; successful initial access may be sold to other criminal groups for exploitation. The criminal ecosystem enables specialisation: initial access brokers, ransomware operators, and money laundering services operate as distinct businesses.

Insider Threats

Insider threats arise from individuals with legitimate access who misuse that access. Insiders may act from financial motivation, ideological disagreement, coercion, or negligence. Insider threats are particularly relevant for organisations handling sensitive beneficiary data or operating in high-risk contexts.

Malicious insiders deliberately misuse access. A staff member ideologically opposed to the organisation’s mission might leak sensitive information. A staff member under financial pressure might sell access credentials to external actors. A staff member coerced by a government might provide information about beneficiaries.

Negligent insiders cause harm through carelessness rather than intent. Sending sensitive documents to wrong recipients, falling for phishing attacks, or misconfiguring systems can enable external compromise. Negligent insider incidents often cause the same harm as malicious actions despite different motivations.

Detecting insider threats requires monitoring user behaviour for anomalies while respecting privacy and avoiding surveillance culture. The balance between security monitoring and organisational culture requires careful consideration, particularly for organisations advocating against surveillance.

Indicator of Compromise Management

Indicators of compromise provide technical evidence enabling detection of threat actor activity. Effective IOC management involves collecting indicators from intelligence sources, evaluating their quality and relevance, integrating them into detection systems, and retiring them when no longer useful.

Indicator Types

IOCs span multiple technical domains, each with different detection mechanisms and decay rates.

Network indicators identify threat actor infrastructure. IP addresses indicate servers used for command and control, malware distribution, or data exfiltration. Domain names indicate attacker-controlled sites. URLs indicate specific malicious resources. Network indicators decay quickly as attackers rotate infrastructure; an IP address may be useful for days to weeks before the attacker abandons it.

File indicators identify malicious files. Cryptographic hashes (MD5, SHA-1, SHA-256) uniquely identify specific files. File names and paths indicate common malware installation locations. File indicators decay at varying rates; a hash identifies a specific malware sample permanently, but attackers trivially modify files to change hashes.
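The fragility of hash-based indicators is easy to demonstrate with the standard library: the three hash types commonly shared as file IOCs all change completely when a single byte of the file changes.

```python
import hashlib

def file_hashes(data: bytes) -> dict:
    """Compute the hash types commonly shared as file IOCs.
    MD5 and SHA-1 still appear in older reports; SHA-256 is preferred."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

sample = b"example payload"
hashes = file_hashes(sample)

# A one-byte change produces entirely different hashes -- which is why
# attackers trivially evade hash-based detection by repacking malware.
modified = file_hashes(b"example payload.")
```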

Email indicators identify phishing campaigns. Sender addresses, subject lines, and attachment names enable detection of ongoing campaigns. Email header analysis reveals sending infrastructure. Email indicators support both blocking and user education.

Behavioural indicators describe attacker actions rather than specific artefacts. Command sequences, file access patterns, and network traffic characteristics indicate malicious activity regardless of specific infrastructure. Behavioural indicators resist attacker adaptation better than artefact-based indicators because changing behaviour is harder than changing infrastructure.

+------------------------------------------------------------------+
| IOC INTEGRATION ARCHITECTURE |
+------------------------------------------------------------------+
| |
| +------------------+ |
| | Intelligence | |
| | Sources | |
| | | |
| | - Commercial | |
| | - OSINT | +------------------+ |
| | - Sector sharing +---->| | |
| | - Internal | | Threat Intel | |
| +------------------+ | Platform (TIP) | |
| | | |
| | - Normalisation | |
| | - Deduplication | |
| | - Enrichment | |
| | - Scoring | |
| +--------+---------+ |
| | |
| +-----------------------+-----------------------+ |
| | | | |
| v v v |
| +---------+--------+ +---------+--------+ +---------+------+ |
| | | | | | | |
| | SIEM | | Firewall/Proxy | | Email Security | |
| | | | | | | |
| | Detection rules | | Block lists | | Block lists | |
| | Correlation | | Alert rules | | Quarantine | |
| | | | | | | |
| +------------------+ +------------------+ +-----------------+ |
| | | | |
| +-----------------------+-----------------------+ |
| | |
| v |
| +--------+---------+ |
| | | |
| | Security Ops | |
| | (Alert triage) | |
| | | |
| +------------------+ |
| |
+------------------------------------------------------------------+

Figure 3: IOC integration flow from sources through platform to detection systems

Indicator Evaluation

Not all indicators warrant integration into detection systems. Evaluation considers relevance, reliability, and actionability.

Relevance assesses whether the indicator addresses threats facing the organisation. An indicator for malware targeting industrial control systems has no relevance for an organisation without such systems. Relevance filtering prevents alert fatigue from indicators that cannot indicate actual threats to the environment.

Reliability assesses confidence that the indicator accurately identifies malicious activity. Indicators from incident investigations have high reliability. Indicators from automated feeds without validation have lower reliability. Low-reliability indicators generate false positives, consuming analyst time investigating legitimate activity.

Actionability assesses whether detection of the indicator enables useful response. An indicator is actionable if detection triggers a defined response: blocking, alerting, investigation, or containment. Indicators without defined response actions produce alerts without outcomes.

Indicator scoring quantifies evaluation dimensions. Scoring systems assign numeric values for relevance, reliability, and actionability, producing composite scores that guide integration priorities. High-scoring indicators receive immediate integration; low-scoring indicators may be stored for correlation without active detection.
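A weighted composite score of this kind might look like the sketch below. The weights and triage thresholds are illustrative assumptions, not a standard; an organisation would tune them against its own detection history.

```python
def composite_score(relevance: float, reliability: float,
                    actionability: float,
                    weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted composite of the three evaluation dimensions (each 0-1).
    Weights are illustrative; tune them to local priorities."""
    scores = (relevance, reliability, actionability)
    return round(sum(w * s for w, s in zip(weights, scores)), 2)

def triage(score: float) -> str:
    """Map the composite score to an integration decision."""
    if score >= 0.7:
        return "integrate"   # push to detection systems immediately
    if score >= 0.4:
        return "store"       # retain for correlation without active detection
    return "discard"

# An indicator from an internal incident versus one from an unvetted bulk feed.
incident_ioc = composite_score(relevance=0.9, reliability=0.9, actionability=0.8)
bulk_feed_ioc = composite_score(relevance=0.3, reliability=0.4, actionability=0.3)
```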

Integration and Automation

Integrating indicators into detection systems requires technical mechanisms and operational processes. Technical integration pushes indicators to detection tools; operational processes ensure alerts receive appropriate response.

Threat intelligence platforms (TIPs) aggregate indicators from multiple sources, normalise formats, and distribute to detection systems. TIP options range from open source platforms such as MISP and OpenCTI to commercial platforms such as ThreatConnect and Anomali. Platform selection depends on integration requirements with existing security tools and analytical capability.

STIX (Structured Threat Information Expression) and TAXII (Trusted Automated Exchange of Intelligence Information) provide standardised formats and transport protocols for threat intelligence sharing. STIX defines how to represent threat information including indicators, threat actors, and campaigns. TAXII defines how to transport STIX content between systems. Tools supporting STIX/TAXII enable interoperability across intelligence sources and detection systems.
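A minimal STIX 2.1 Indicator can be built with the standard library alone, as sketched below; the OASIS python-stix2 library provides validated equivalents. The domain value and name are hypothetical, and a real object would typically carry additional properties such as `indicator_types` and labels.

```python
import json
import uuid
from datetime import datetime, timezone

def stix_indicator(pattern: str, name: str) -> dict:
    """Build a minimal STIX 2.1 Indicator object as a plain dict."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,          # STIX patterning language
        "pattern_type": "stix",
        "valid_from": now,
    }

ioc = stix_indicator(
    pattern="[domain-name:value = 'evil.example.com']",
    name="Phishing domain from sector sharing",
)
serialised = json.dumps(ioc, indent=2)
```

Because the object is standard STIX, any TAXII server or TIP that speaks STIX 2.1 can ingest it without source-specific parsing, which is the interoperability benefit the standards exist to provide.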

Automation reduces manual effort in indicator processing. Automated workflows can ingest feeds, score indicators, and push to detection systems without analyst intervention for routine indicators. Automation frees analyst time for evaluating complex indicators and producing analytical products.

Indicator Lifecycle

Indicators have lifecycles from creation through retirement. New indicators require rapid integration to detect ongoing campaigns. Aging indicators require review to determine continued relevance. Retired indicators require removal to prevent detection system bloat.

Decay models estimate how long indicators remain useful. Network infrastructure indicators typically decay within days to weeks. File hash indicators may remain useful for months if malware remains in use. Behavioural indicators may remain useful for years if they describe fundamental attacker techniques.

Periodic review identifies indicators for retirement. Review criteria include time since last detection, source assessment updates, and campaign status. Indicators that have not triggered detections and are associated with concluded campaigns warrant retirement. Automated decay based on indicator age provides baseline cleanup; manual review addresses indicators requiring judgment.
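The automated baseline cleanup described above can be expressed as a simple rule combining type-specific maximum ages with detection history. The lifetimes below are illustrative assumptions reflecting the decay rates discussed earlier; actual values should come from local review.

```python
from datetime import date, timedelta

# Illustrative maximum useful lifetimes per indicator type, in days.
MAX_AGE = {"ip": 14, "domain": 30, "hash": 180, "behavioural": 3650}

def should_retire(ioc_type: str, first_seen: date,
                  last_detection, today: date) -> bool:
    """Retire an indicator when it is past its type's maximum age AND
    it has never fired (or its last detection is equally stale)."""
    max_age = timedelta(days=MAX_AGE[ioc_type])
    expired = today - first_seen > max_age
    stale = last_detection is None or today - last_detection > max_age
    return expired and stale

today = date(2024, 6, 1)
# An IP collected five months ago that never triggered a detection.
old_ip = should_retire("ip", date(2024, 1, 1), None, today)
# A year-old hash that fired recently -- the malware is still in use.
active_hash = should_retire("hash", date(2023, 1, 1), date(2024, 5, 20), today)
```

Indicators the rule cannot decide confidently (for example, behavioural indicators tied to ongoing campaigns) are exactly those the text assigns to manual review.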

Threat Actor Profiling

Threat actor profiling develops structured understanding of adversaries that enables anticipating their actions and prioritising defences. Profiles describe actor identity, motivation, capability, and targeting patterns.

+-----------------------------------------------------------------------------------------------+
| THREAT ACTOR TAXONOMY |
+-----------------------------------------------------------------------------------------------+
| |
| THREAT ACTORS |
| | |
| +-------------+-------------+-------------+-------------+ |
| | | | | | |
| v v v v v |
| |
| +-------------+-------------+-------------+-------------+-------------+ |
| | | | | | | |
| CATEGORY | Nation | Criminal | Hacktivist | Insider | Terrorist | |
| | State | | | | | |
| | | | | | | |
| +-------------+-------------+-------------+-------------+-------------+ |
| | | | | | |
| v v v v v |
| +-------------+-------------+-------------+-------------+-------------+ |
| | | | | | | |
| | Espionage | Ransomware | Ideological | Malicious | Disruption | |
| | | | | | | |
| MOTIVATION | Disruption | BEC | Attention | Negligent | Fear | |
| | | | | | | |
| | Destruction | Fraud | | | | |
| | | | | | | |
| | | Data theft | | | | |
| | | | | | | |
| +-------------+-------------+-------------+-------------+-------------+ |
| |
+-----------------------------------------------------------------------------------------------+

Figure 4: Threat actor taxonomy showing categories and primary motivations

Profile Components

Actor identification establishes naming conventions and tracks aliases. The security community uses multiple naming schemes for the same actors; one group may be known as APT28 (Mandiant), Fancy Bear (CrowdStrike), and Sofacy (Kaspersky). Tracking aliases enables correlating intelligence across sources using different names.

Motivation explains why the actor conducts operations. Motivations include espionage (obtaining information), disruption (impairing operations), destruction (causing permanent damage), financial gain, and ideology. Understanding motivation enables predicting likely actions; an actor motivated by espionage will prioritise persistent access while an actor motivated by disruption will prioritise visible impact.

Capability describes what the actor can do. Capability assessment considers technical sophistication of tools, operational security practices, resources available for sustained operations, and demonstrated ability to achieve objectives. High-capability actors develop custom malware and exploit zero-day vulnerabilities; lower-capability actors rely on commodity tools and known vulnerabilities.

Targeting patterns describe who the actor attacks and how they select targets. Patterns include sectors, geographies, organisation types, and individual roles. An actor consistently targeting finance staff with spear-phishing demonstrates a targeting pattern useful for prioritising defences.

Infrastructure describes technical resources the actor uses. Infrastructure includes command and control servers, malware distribution sites, phishing domains, and email sending infrastructure. Tracking actor infrastructure enables blocking known resources and identifying new infrastructure through pattern matching.

TTPs describe how the actor operates. TTP documentation uses frameworks such as MITRE ATT&CK to categorise techniques in standardised vocabulary. ATT&CK describes techniques across the attack lifecycle from initial access through impact, enabling systematic comparison of actor behaviours and mapping defences to techniques.

Prioritisation

Not all threat actors warrant equal defensive attention. Prioritisation focuses resources on actors most likely to target the organisation and capable of causing significant harm.

Targeting likelihood assesses probability the actor will attack the organisation. Factors include documented targeting of similar organisations, geographic overlap between actor interests and organisational operations, and organisational activities that might attract actor attention. An organisation documenting human rights abuses in a specific country faces high targeting likelihood from that country’s intelligence services.

Impact potential assesses harm the actor could cause if successful. Factors include actor capability, organisational vulnerability to actor techniques, and consequences of successful compromise. A capable actor targeting sensitive beneficiary data presents higher impact potential than a low-capability actor targeting the public website.

Combining likelihood and impact produces prioritised actor lists. High-priority actors warrant specific defensive measures, detection rules for known TTPs, and monitoring for infrastructure changes. Lower-priority actors receive baseline defences without specific tailoring.
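Combining the two dimensions can be as simple as a scored matrix. The 1-5 scales, threshold bands, and actor examples below are illustrative assumptions, not a standard methodology.

```python
def priority(likelihood: int, impact: int) -> str:
    """Combine 1-5 likelihood and impact scores into a defence tier.
    The threshold bands are illustrative, not a standard."""
    score = likelihood * impact
    if score >= 15:
        return "high"     # tailored detections, infrastructure monitoring
    if score >= 6:
        return "medium"   # baseline defences plus TTP-based detection rules
    return "low"          # baseline defences only

# Hypothetical actor assessments: (likelihood, impact).
actors = {
    "state actor targeting our sector": (5, 5),
    "opportunistic ransomware group": (3, 4),
    "low-skill hacktivist": (2, 2),
}
tiers = {name: priority(l, i) for name, (l, i) in actors.items()}
```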

Information Sharing

The value of threat intelligence multiplies through sharing. Individual organisations possess limited visibility into the threat landscape; shared intelligence provides collective visibility exceeding what any organisation achieves alone.

Sharing Mechanisms

Bilateral sharing exchanges intelligence between two organisations with trust relationships. Bilateral sharing suits sensitive intelligence inappropriate for broader distribution. Partners might include peer organisations in similar operating contexts, implementing partners, and trusted commercial relationships.

Community sharing distributes intelligence among member organisations through Information Sharing and Analysis Centres (ISACs) and coordination bodies. Community sharing provides broader visibility than bilateral relationships while maintaining trust through membership vetting. Effective communities establish sharing protocols defining what members share, how they share, and how recipients may use shared intelligence.

Public sharing publishes intelligence without access restrictions. Public sharing suits intelligence valuable to broad audiences without sensitivity concerns. Blog posts, conference presentations, and public reports represent public sharing mechanisms.

Traffic Light Protocol (TLP) standardises sharing restrictions using colour codes. TLP:RED restricts to specific recipients only. TLP:AMBER permits sharing within recipient organisations and with clients needing the information. TLP:GREEN permits sharing within the recipient community. TLP:CLEAR (formerly TLP:WHITE) permits unrestricted sharing. Applying TLP markings ensures recipients understand permitted uses.
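
TLP rules are mechanical enough to enforce in tooling. The channel names and channel-to-label mapping below are hypothetical (and TLP:AMBER+STRICT is omitted for brevity); the sketch shows the core check that a report may only go to a channel whose audience is at least as restricted as the label requires:

```python
# TLP 2.0 labels ordered from most to least restrictive
# (AMBER+STRICT omitted for brevity).
TLP_ORDER = ["RED", "AMBER", "GREEN", "CLEAR"]

# Hypothetical mapping from a sharing channel to the least restrictive
# label that channel is allowed to carry.
CHANNEL_MAX = {
    "named-recipients": "RED",
    "internal":         "AMBER",
    "community-list":   "GREEN",
    "public-blog":      "CLEAR",
}

def may_share(tlp: str, channel: str) -> bool:
    """Permit sharing only if the report's label is no more restrictive
    than the channel allows."""
    return TLP_ORDER.index(tlp) >= TLP_ORDER.index(CHANNEL_MAX[channel])

may_share("GREEN", "community-list")  # True
may_share("AMBER", "public-blog")     # False
```

Embedding a check like this in sharing workflows catches mislabelled distribution before it happens, rather than relying on recipients to notice a marking after the fact.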

Sharing Barriers

Reluctance to share stems from several concerns. Fear of reputational damage discourages disclosing incidents. Legal uncertainty about liability for sharing creates caution. Lack of confidence in reciprocity limits willingness to share when others might not contribute. Resource constraints limit time available for preparing shareable intelligence.

Addressing barriers requires organisational commitment and community norms. Leadership endorsement of sharing as strategic priority overcomes reputational concerns. Legal guidance on liability protection under information sharing frameworks addresses legal uncertainty. Community expectations of reciprocity, enforced through membership requirements, encourage contribution. Streamlined sharing mechanisms reduce resource requirements.

Receiving and Contributing

Receiving shared intelligence requires mechanisms to ingest, evaluate, and act on external intelligence. Receiving processes should integrate shared intelligence with internal intelligence workflows rather than treating it as separate. Evaluation should assess reliability and relevance using the same criteria applied to other sources.

Contributing intelligence requires preparing organisational intelligence for external consumption. Preparation involves sanitising information to remove sensitive details while preserving analytical value, applying appropriate TLP markings, formatting for recipient systems, and routing through approved sharing channels. Contributions demonstrate commitment to community reciprocity and build relationships that improve intelligence received.
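
Sanitisation can be partially automated. The internal naming conventions below (`.corp.example` hosts, `@ourcharity.example` addresses) are invented for illustration, and the two rules are nowhere near a complete sanitisation policy; the point is that internal details are redacted while the shareable indicator (the attacker's address) survives:

```python
import re

# Hypothetical internal naming conventions; redaction rules are
# illustrative, not a complete sanitisation policy.
INTERNAL_HOST = re.compile(r"\b[\w-]+\.corp\.example\b")
INTERNAL_ADDR = re.compile(r"\b[\w.+-]+@ourcharity\.example\b")

def sanitise(report: str) -> str:
    """Strip internal hostnames and staff addresses while keeping
    the analytical content (including attacker indicators) intact."""
    report = INTERNAL_HOST.sub("[internal host]", report)
    return INTERNAL_ADDR.sub("[redacted address]", report)

sanitise("Phish from actor@evil.test to grants@ourcharity.example hit finance-01.corp.example")
# -> 'Phish from actor@evil.test to [redacted address] hit [internal host]'
```

Automated redaction should be followed by human review: regular expressions miss context-dependent sensitivity, such as programme names that identify beneficiaries.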

Implementation Considerations

Threat intelligence programmes scale from minimal capability achievable by a single security person to comprehensive programmes with dedicated analysts. Implementation should match organisational resources and threat exposure.

Minimal Capability

Organisations with limited security resources can establish basic threat intelligence through low-cost activities. Subscribing to sector mailing lists such as NGO-ISAC provides relevant intelligence without cost. Following security researchers tracking relevant threat actors provides early warning of emerging threats. Reviewing vendor threat reports provides strategic context. These activities require hours per week rather than dedicated staff.

Basic IOC integration uses built-in capabilities of existing security tools. Email security services include threat intelligence; ensuring these features are enabled provides detection without additional tools. Firewall and endpoint protection products include threat feeds; reviewing and enabling these feeds extends coverage. Cloud services such as Microsoft 365 include threat intelligence features; configuring these features leverages included capabilities.
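
Even without a dedicated platform, IOC matching reduces to a membership check. The indicator domains and log format below are hypothetical; the sketch shows the minimal version of what enabled threat feeds do inside email gateways and firewalls:

```python
# Minimal IOC matching over proxy-style log lines; the indicator set
# and log format are hypothetical.
IOC_DOMAINS = {"bad-domain.example", "phish-login.example"}

def hits(log_lines: list) -> list:
    """Return log lines that mention a known-bad domain."""
    return [line for line in log_lines
            if any(ioc in line for ioc in IOC_DOMAINS)]

logs = [
    "GET https://phish-login.example/login",
    "GET https://news.example.org/article",
]
hits(logs)  # -> only the phish-login line
```

Production tools add what this sketch lacks: normalisation (case, punycode, URL encoding), indicator expiry, and alerting, which is why enabling built-in feeds beats maintaining ad hoc scripts.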

Internal intelligence production begins with documenting incidents. Recording details of phishing attempts, blocked malware, and suspicious activity creates organisational intelligence. Sharing this intelligence with sector communities contributes to collective defence while building relationships that provide access to shared intelligence.
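
Consistent structure makes incident records usable as intelligence later. The fields below are illustrative rather than a standard schema; the sketch shows the minimum that makes records searchable and shareable, including a default TLP marking applied before review:

```python
from dataclasses import dataclass, field
from datetime import date

# A minimal internal incident record; field names are illustrative,
# not a standard schema.
@dataclass
class IncidentRecord:
    observed: date
    category: str                 # e.g. "phishing", "malware"
    summary: str
    indicators: list = field(default_factory=list)
    tlp: str = "AMBER"            # conservative default marking before review

record = IncidentRecord(
    observed=date(2024, 3, 5),
    category="phishing",
    summary="Credential lure impersonating a grant funder",
    indicators=["phish-login.example"],
)
```

Records captured in this shape can be aggregated ("how many phishing attempts targeted finance roles this quarter?") and, after sanitisation and marking review, shared with sector communities with little extra preparation.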

Developing Capability

Organisations with security teams can develop more structured programmes. Establishing formal intelligence requirements aligns collection with organisational needs. Designating intelligence responsibilities ensures someone owns the function even if not full-time. Implementing a threat intelligence platform (TIP), even a simple deployment of MISP, enables systematic indicator management.
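
Two of the chores a TIP automates are deduplication and expiry of aged indicators. The toy store below is not MISP's API; it is a hypothetical sketch of the bookkeeping such platforms perform, with illustrative field names and retention window:

```python
from datetime import datetime, timedelta

# A toy indicator store illustrating TIP bookkeeping: deduplication
# and expiry of aged IOCs. Not any real platform's API.
class IndicatorStore:
    def __init__(self, max_age_days: int = 90):
        self.max_age = timedelta(days=max_age_days)
        self._seen = {}  # indicator value -> last-seen timestamp

    def add(self, value: str, when: datetime) -> bool:
        """Record an indicator; return False if already known.
        The last-seen timestamp is refreshed either way."""
        is_new = value not in self._seen
        self._seen[value] = when
        return is_new

    def active(self, now: datetime) -> set:
        """Indicators still within the retention window."""
        return {v for v, t in self._seen.items() if now - t <= self.max_age}

store = IndicatorStore()
store.add("203.0.113.7", datetime(2024, 1, 1))
store.add("203.0.113.7", datetime(2024, 4, 1))   # duplicate, timestamp refreshed
store.active(datetime(2024, 5, 1))               # still within 90 days
```

Expiry matters because adversaries rotate infrastructure: indicators that never age out accumulate false positives as old IP addresses and domains are reassigned to legitimate use.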

Developing analytical capability moves beyond IOC consumption to producing organisational intelligence products. Products might include quarterly threat landscape assessments, briefings on specific threat actors, and advisories on emerging threats. Producing analysis requires understanding the threat landscape well enough to contextualise raw intelligence.

Expanding information sharing involves active participation in sharing communities. Contributing intelligence, attending community calls, and building bilateral relationships with peer organisations increases intelligence access and influence within communities.

Advanced Capability

Organisations with dedicated security functions and high threat exposure may justify substantial intelligence investment. Dedicated intelligence analysts focus full-time on collection, analysis, and production. Advanced analytical tools enable sophisticated correlation and visualisation. Commercial intelligence services supplement internal capability.

Attribution analysis attempts to identify specific threat actors behind attacks. Attribution enables targeted defence and potentially legal or diplomatic response. It requires substantial analytical expertise and access to intelligence that connects observed activity to known actors.

Predictive analysis attempts to anticipate threat actor actions before they occur. Prediction relies on understanding actor motivations, capabilities, and patterns well enough to project future behaviour. Predictive analysis informs proactive defence measures and risk assessments.

Counter-intelligence considers how adversaries perceive the organisation and attempts to shape that perception. Counter-intelligence activities might include monitoring for reconnaissance against the organisation, understanding what information about the organisation is publicly available, and managing digital footprint to reduce targeting information.

Field Context Considerations

Intelligence programmes supporting field operations face additional considerations. Field offices operate in threat environments distinct from headquarters, requiring localised threat assessment. Communications with field staff may traverse hostile networks, requiring secure channels for intelligence distribution. Field staff may encounter physical threats connected to digital threats, requiring integration between digital and physical security intelligence.

Intelligence relevant to field contexts includes threats to communications infrastructure, surveillance capabilities of local actors, border crossing risks, and threats to specific programmes. Producing field-relevant intelligence requires understanding operational context that headquarters-based analysts may lack; field staff input to intelligence requirements ensures relevance.

Distributing intelligence to field staff requires accessible formats. Technical IOCs have limited utility for non-technical staff; actionable guidance translates technical intelligence into behaviours. A bulletin advising staff to avoid specific software because of vulnerabilities provides more actionable guidance than technical details about those vulnerabilities.

See also