SIEM Implementation

A Security Information and Event Management (SIEM) platform aggregates log data from across your infrastructure, normalises events into a common schema, correlates activity patterns, and generates alerts when detection rules match. This task covers deploying a SIEM platform, integrating log sources, developing detection rules, and tuning alerts to achieve effective threat detection without overwhelming your team with false positives.

The procedure applies to self-hosted open source platforms (Wazuh, Graylog) and cloud-native commercial options (Microsoft Sentinel). Platform-specific steps appear in clearly marked variants. Budget 40 to 80 hours for initial deployment depending on infrastructure complexity and log source count.

Prerequisites

Requirement            Specification
---------------------  ------------------------------------------------------------
Infrastructure         Linux server with 8+ CPU cores, 32GB RAM, 500GB SSD for up
                       to 50 log sources; scale linearly for additional sources
Network access         Inbound connectivity from all log sources on syslog (UDP/TCP
                       514), Wazuh agent traffic (TCP 1514), Beats (TCP 5044), or
                       HTTPS (TCP 443)
Storage                1GB per day per 1000 events per second (EPS) baseline; plan
                       for 90 days online retention minimum
Identity provider      SAML 2.0 or OIDC integration for analyst authentication
Administrative access  Root or sudo on deployment server; administrative
                       credentials for log source systems
DNS                    Resolvable hostname for SIEM endpoint (e.g.,
                       siem.example.org)
TLS certificate        Valid certificate for SIEM web interface and log ingestion
                       endpoints
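As a rough check, the sizing baseline in the table (1GB per day per 1000 EPS, 90 days online) can be computed directly; the event rate below is an illustrative figure, not a measurement:

```shell
# Online storage estimate: 1 GB/day per 1000 EPS, times retention days
EPS=2500        # illustrative sustained event rate
DAYS=90         # minimum online retention from the table
awk -v eps="$EPS" -v days="$DAYS" \
  'BEGIN { printf "%.0f GB minimum online storage\n", (eps / 1000) * days }'
# Prints: 225 GB minimum online storage
```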

Verify server resources meet requirements:

Terminal window
# Check CPU cores (minimum 8)
nproc
# Expected: 8 or higher
# Check available RAM (minimum 32GB)
free -h | grep Mem
# Expected: Mem: 31Gi or higher
# Check available storage
df -h /var
# Expected: 500G+ available

Confirm network connectivity from a representative log source:

Terminal window
# Test syslog connectivity
nc -zv siem.example.org 514
# Expected: Connection to siem.example.org 514 port [tcp/syslog] succeeded!
# Test Beats connectivity
nc -zv siem.example.org 5044
# Expected: Connection to siem.example.org 5044 port [tcp/*] succeeded!

Procedure

Phase 1: Platform deployment

The deployment architecture positions the SIEM to receive logs from all network segments while maintaining security boundaries. A single-node deployment suits organisations processing under 5000 events per second. Multi-node clusters become necessary above this threshold or when high availability is required.
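The 5000 EPS threshold above can be encoded as a trivial sizing check; the sample rates passed in are illustrative:

```shell
# Sizing decision: single node below 5000 EPS, cluster at or above the threshold
recommend() {
  if [ "$1" -lt 5000 ]; then
    echo "single-node"
  else
    echo "multi-node cluster"
  fi
}
recommend 3200   # prints: single-node
recommend 8000   # prints: multi-node cluster
```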

+------------------------------------------------------------------+
|                       NETWORK ARCHITECTURE                       |
+------------------------------------------------------------------+
|                                                                  |
|  +-----------------------+      +-----------------------+        |
|  |   CORPORATE SEGMENT   |      |     FIELD SEGMENT     |        |
|  |                       |      |                       |        |
|  | +-------+  +-------+  |      | +-------+  +-------+  |        |
|  | |  AD   |  | Mail  |  |      | | Field |  | Field |  |        |
|  | |  DC   |  | Server|  |      | | Linux |  | Linux |  |        |
|  | +---+---+  +---+---+  |      | +---+---+  +---+---+  |        |
|  |     |          |      |      |     |          |      |        |
|  +-----|----------|------+      +-----|----------|------+        |
|        |          |                   |          |               |
|        +----+-----+                   +----+-----+               |
|             |                              |                     |
|             |       +---------------+      |                     |
|             +------>|   FIREWALL    |<-----+                     |
|                     |   (log fwd)   |                            |
|                     +-------+-------+                            |
|                             |                                    |
|                     +-------v-------+                            |
|                     |               |                            |
|                     |  SIEM SERVER  |                            |
|                     |               |                            |
|                     | - Wazuh/      |                            |
|                     |   Graylog     |                            |
|                     | - 8 cores     |                            |
|                     | - 32GB RAM    |                            |
|                     | - 500GB SSD   |                            |
|                     +---------------+                            |
|                                                                  |
+------------------------------------------------------------------+

Figure 1: SIEM deployment architecture with log flow from multiple network segments

Select your deployment variant and follow the corresponding steps.

Variant A: Wazuh deployment

Wazuh provides integrated SIEM and endpoint detection capabilities. The all-in-one installation deploys the Wazuh indexer (based on OpenSearch), Wazuh server, and Wazuh dashboard on a single node.

  1. Download and run the Wazuh installation assistant:

    Terminal window
    curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
    chmod +x wazuh-install.sh
    sudo ./wazuh-install.sh -a

    The installation takes 10 to 15 minutes. Upon completion, the script outputs administrative credentials including the username admin and a generated password. Record these credentials securely. You will change the password after initial login.

  2. Verify all services are running:

    Terminal window
    sudo systemctl status wazuh-manager
    sudo systemctl status wazuh-indexer
    sudo systemctl status wazuh-dashboard

    Each service should show active (running). If any service shows failed, check /var/log/wazuh-install.log for errors.

  3. Configure TLS certificates for production use. The installation generates self-signed certificates by default. Replace these with certificates from your certificate authority:

    Terminal window
    # Stop services
    sudo systemctl stop wazuh-dashboard
    # Replace certificates
    sudo cp /path/to/your/certificate.crt /etc/wazuh-dashboard/certs/wazuh-dashboard.crt
    sudo cp /path/to/your/private.key /etc/wazuh-dashboard/certs/wazuh-dashboard.key
    sudo cp /path/to/your/ca.crt /etc/wazuh-dashboard/certs/ca.crt
    # Set permissions
    sudo chown wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs/*
    sudo chmod 400 /etc/wazuh-dashboard/certs/*.key
    # Restart service
    sudo systemctl start wazuh-dashboard
  4. Configure memory allocation based on your server resources. Edit /etc/wazuh-indexer/jvm.options:

    # Set heap size to 50% of available RAM, maximum 31GB
    -Xms16g
    -Xmx16g

    For a 32GB server, allocate 16GB to the indexer. Restart the indexer after changes:

    Terminal window
    sudo systemctl restart wazuh-indexer
  5. Access the web interface at https://siem.example.org and change the default password immediately through Settings > Security > Internal users.
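The heap-sizing rule in step 4 (half of system RAM, capped at 31GB) can be sketched as a small helper; heap_for is a hypothetical function name used only for illustration:

```shell
# Indexer heap rule: half of system RAM in GB, capped at 31 GB
heap_for() {
  h=$(( $1 / 2 ))
  [ "$h" -gt 31 ] && h=31
  echo "-Xms${h}g -Xmx${h}g"
}
heap_for 32    # prints: -Xms16g -Xmx16g
heap_for 128   # prints: -Xms31g -Xmx31g
```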

Variant B: Graylog deployment

Graylog separates the web interface and processing engine from the search backend, providing flexibility in scaling components independently. This deployment uses MongoDB for configuration storage and OpenSearch for log indexing.

  1. Install prerequisites. Graylog requires Java 17, MongoDB 6.0, and OpenSearch 2.x:

    Terminal window
    # Install Java 17
    sudo apt update
    sudo apt install openjdk-17-jre-headless -y
    # Verify Java version
    java -version
    # Expected: openjdk version "17.0.x"
  2. Install and configure MongoDB:

    Terminal window
    # Add MongoDB repository
    curl -fsSL https://pgp.mongodb.com/server-6.0.asc | \
    sudo gpg -o /usr/share/keyrings/mongodb-server-6.0.gpg --dearmor
    echo "deb [ signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg ] \
    https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse" | \
    sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list
    sudo apt update
    sudo apt install mongodb-org -y
    # Start and enable MongoDB
    sudo systemctl enable --now mongod
  3. Install and configure OpenSearch:

    Terminal window
    # Add OpenSearch repository
    curl -fsSL https://artifacts.opensearch.org/publickeys/opensearch.pgp | \
    sudo gpg --dearmor -o /usr/share/keyrings/opensearch-keyring
    echo "deb [signed-by=/usr/share/keyrings/opensearch-keyring] \
    https://artifacts.opensearch.org/releases/bundle/opensearch/2.x/apt stable main" | \
    sudo tee /etc/apt/sources.list.d/opensearch-2.x.list
    sudo apt update
    sudo apt install opensearch -y
  4. Configure OpenSearch for Graylog. Edit /etc/opensearch/opensearch.yml:

    cluster.name: graylog
    node.name: siem-node-1
    path.data: /var/lib/opensearch
    path.logs: /var/log/opensearch
    network.host: 127.0.0.1
    discovery.type: single-node
    plugins.security.disabled: true

    Set JVM heap size in /etc/opensearch/jvm.options to 8GB, then start OpenSearch:

    Terminal window
    sudo systemctl enable --now opensearch
  5. Install Graylog:

    Terminal window
    wget https://packages.graylog2.org/repo/packages/graylog-5.2-repository_latest.deb
    sudo dpkg -i graylog-5.2-repository_latest.deb
    sudo apt update
    sudo apt install graylog-server -y
  6. Generate required secrets for Graylog configuration:

    Terminal window
    # Generate password secret (minimum 64 characters); install pwgen first if
    # needed: sudo apt install pwgen -y
    pwgen -N 1 -s 96
    # Example output: aB3dE5fG7hI9jK1lM3nO5pQ7rS9tU1vW3xY5zA7bC9dE1fG3hI5jK7lM9nO1pQ3rS5tU7vW9xY1z
    # Generate SHA-256 hash of your admin password
    echo -n "your-secure-password" | sha256sum | cut -d" " -f1
    # Example output: 5e884898da28047d9165146ae81b25c4b293e8a29d93ce4f9f3c9f8c8f8c8f8c
  7. Configure Graylog. Edit /etc/graylog/server/server.conf:

    # Paste your generated password_secret here
    password_secret = aB3dE5fG7hI9jK1lM3nO5pQ7rS9tU1vW3xY5zA7bC9dE1fG3hI5jK7lM9nO1pQ3rS5tU7vW9xY1z
    # Paste your SHA-256 hash here
    root_password_sha2 = 5e884898da28047d9165146ae81b25c4b293e8a29d93ce4f9f3c9f8c8f8c8f8c
    # Set timezone
    root_timezone = UTC
    # Configure HTTP interface
    http_bind_address = 0.0.0.0:9000
    http_external_uri = https://siem.example.org/
    # OpenSearch connection
    elasticsearch_hosts = http://127.0.0.1:9200
  8. Start Graylog and enable on boot:

    Terminal window
    sudo systemctl enable --now graylog-server

    Access the web interface at http://siem.example.org:9000, or at https://siem.example.org/ if you terminate TLS on a reverse proxy matching http_external_uri. Log in with username admin and the password you hashed in step 6.

Variant C: Microsoft Sentinel deployment

Microsoft Sentinel operates as a cloud-native SIEM within Azure. Deployment requires an Azure subscription with Log Analytics workspace.

  1. Create a Log Analytics workspace using Azure CLI:

    Terminal window
    # Set variables
    RESOURCE_GROUP="rg-security-prod"
    WORKSPACE_NAME="law-sentinel-prod"
    LOCATION="uksouth"
    # Create resource group if needed
    az group create --name $RESOURCE_GROUP --location $LOCATION
    # Create Log Analytics workspace
    az monitor log-analytics workspace create \
    --resource-group $RESOURCE_GROUP \
    --workspace-name $WORKSPACE_NAME \
    --location $LOCATION \
    --sku PerGB2018
  2. Enable Microsoft Sentinel on the workspace:

    Terminal window
    # Get workspace resource ID
    WORKSPACE_ID=$(az monitor log-analytics workspace show \
    --resource-group $RESOURCE_GROUP \
    --workspace-name $WORKSPACE_NAME \
    --query id -o tsv)
    # Enable Sentinel (requires the CLI extension: az extension add --name sentinel)
    az sentinel onboarding-state create \
    --resource-group $RESOURCE_GROUP \
    --workspace-name $WORKSPACE_NAME \
    --name "default"
  3. Configure data retention. The default retention is 90 days. For compliance requirements exceeding 90 days:

    Terminal window
    # Set retention to 365 days
    az monitor log-analytics workspace update \
    --resource-group $RESOURCE_GROUP \
    --workspace-name $WORKSPACE_NAME \
    --retention-time 365

    Retention beyond 90 days incurs additional storage costs. At current pricing, 365-day retention adds approximately £0.10 per GB per month.

  4. Configure commitment tiers if your expected ingestion exceeds 100GB per day. Commitment tiers provide significant cost savings compared to pay-as-you-go pricing at £2.30/GB. The 100GB tier costs approximately £196/day, the 200GB tier costs £368/day, and the 500GB tier costs £860/day. Apply your selected commitment tier:

    Terminal window
    az monitor log-analytics workspace update \
    --resource-group $RESOURCE_GROUP \
    --workspace-name $WORKSPACE_NAME \
    --sku CapacityReservation \
    --capacity-reservation-level 100
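As a sanity check on the figures quoted above (all approximate, taken from this guide; the daily and retained volumes are illustrative):

```shell
# Compare pay-as-you-go at £2.30/GB against the 100GB tier at £196/day, and
# estimate extended-retention cost at £0.10/GB/month
awk -v gb_per_day=120 -v retained_gb=500 'BEGIN {
  printf "PAYG: £%.2f/day vs 100GB tier: £196.00/day\n", gb_per_day * 2.30
  printf "Extended retention: £%.2f/month\n", retained_gb * 0.10
}'
# Prints: PAYG: £276.00/day vs 100GB tier: £196.00/day
#         Extended retention: £50.00/month
```

At 120GB per day the commitment tier is already cheaper, which is why the guide recommends it above 100GB daily ingestion.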

Phase 2: Log source integration

Log sources fall into three categories based on integration method: agent-based collection for endpoints and servers, syslog forwarding for network devices and appliances, and API-based collection for cloud services. Prioritise integration based on detection value and existing visibility gaps.

+------------------------------------------------------------------+
|                      LOG SOURCE INTEGRATION                      |
+------------------------------------------------------------------+
|                                                                  |
|    AGENT-BASED             SYSLOG               API-BASED        |
|   +-----------+         +-----------+         +-----------+      |
|   |  Windows  |         | Firewall  |         | Microsoft |      |
|   |  Servers  |         | (pfsense) |         |    365    |      |
|   +-----------+         +-----------+         +-----------+      |
|         |                     |                     |            |
|         | Wazuh Agent         | UDP/TCP 514         | Graph API  |
|         | or Beats            | (TLS preferred)     | (OAuth)    |
|         v                     v                     v            |
|   +-----------+         +-----------+         +-----------+      |
|   |   Linux   |         |  Network  |         |  Google   |      |
|   |  Servers  |         | Switches  |         | Workspace |      |
|   +-----------+         +-----------+         +-----------+      |
|         |                     |                     |            |
|         +----------+----------+--------------+------+            |
|                    |                         |                   |
|                    v                         v                   |
|             +------+------+           +------+------+            |
|             |    SIEM     |           |    SIEM     |            |
|             |   (Wazuh/   |           | (Sentinel)  |            |
|             |   Graylog)  |           |             |            |
|             +-------------+           +-------------+            |
|                                                                  |
+------------------------------------------------------------------+

Figure 2: Log source integration methods by source type

Integrate Windows endpoints

Windows event logs provide authentication events, process execution, PowerShell activity, and security alerts. Configure Windows Event Forwarding (WEF) for environments without agents, or deploy the Wazuh agent for comprehensive collection.

  1. For Wazuh deployments, download and install the agent on each Windows endpoint:

    Terminal window
    # Download agent (run as Administrator)
    Invoke-WebRequest -Uri https://packages.wazuh.com/4.x/windows/wazuh-agent-4.7.0-1.msi `
    -OutFile wazuh-agent.msi
    # Install with manager address
    msiexec.exe /i wazuh-agent.msi /q `
    WAZUH_MANAGER="siem.example.org" `
    WAZUH_REGISTRATION_SERVER="siem.example.org" `
    WAZUH_AGENT_GROUP="windows-servers"
    # Start service
    NET START WazuhSvc
  2. Verify the agent registered successfully. On the SIEM server:

    Terminal window
    sudo /var/ossec/bin/agent_control -l

    The output lists all registered agents with their status. A successfully registered agent appears as ID: 001, Name: WIN-SERVER-01, IP: 192.168.1.10, Status: Active.

  3. Configure additional Windows event channels for collection. Edit the agent configuration at C:\Program Files (x86)\ossec-agent\ossec.conf to add the Security channel, PowerShell Operational channel, and Sysmon Operational channel:

    <ossec_config>
    <localfile>
    <location>Security</location>
    <log_format>eventchannel</log_format>
    </localfile>
    <localfile>
    <location>Microsoft-Windows-PowerShell/Operational</location>
    <log_format>eventchannel</log_format>
    </localfile>
    <localfile>
    <location>Microsoft-Windows-Sysmon/Operational</location>
    <log_format>eventchannel</log_format>
    </localfile>
    </ossec_config>
  4. Restart the agent to apply configuration:

    Terminal window
    NET STOP WazuhSvc
    NET START WazuhSvc
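With a large fleet, the agent_control listing from step 2 can be filtered for active agents; the lines below are hard-coded stand-ins for real output, whose exact format may vary by version:

```shell
# Count agents reporting Active from agent_control-style output
# (sample lines for illustration; pipe real output in instead)
printf '%s\n' \
  "ID: 001, Name: WIN-SERVER-01, IP: 192.168.1.10, Active" \
  "ID: 002, Name: WIN-SERVER-02, IP: 192.168.1.11, Disconnected" |
  grep -c ", Active$"
# Prints: 1
```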

Integrate Linux servers

Linux systems generate authentication logs, system events, and application logs. The Wazuh agent collects from standard log paths and monitors file integrity.

  1. Add the Wazuh repository and install the agent:

    Terminal window
    # Add repository key
    curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | \
    sudo gpg --dearmor -o /usr/share/keyrings/wazuh.gpg
    # Add repository
    echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] \
    https://packages.wazuh.com/4.x/apt/ stable main" | \
    sudo tee /etc/apt/sources.list.d/wazuh.list
    # Install agent
    sudo apt update
    sudo WAZUH_MANAGER="siem.example.org" \
    WAZUH_AGENT_GROUP="linux-servers" \
    apt install wazuh-agent -y
    # Start and enable
    sudo systemctl enable --now wazuh-agent
  2. Verify agent connection:

    Terminal window
    sudo cat /var/ossec/logs/ossec.log | grep "Connected to"
    # Expected: Connected to the server (siem.example.org)
  3. Add application-specific log paths. Edit /var/ossec/etc/ossec.conf to include your application logs:

    <ossec_config>
    <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/nginx/access.log</location>
    </localfile>
    <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/nginx/error.log</location>
    </localfile>
    <localfile>
    <log_format>json</log_format>
    <location>/var/log/app/application.json</location>
    </localfile>
    </ossec_config>

Integrate network devices via syslog

Network devices, firewalls, and appliances support syslog forwarding. Configure devices to send logs to your SIEM’s syslog receiver.

  1. For Wazuh, enable the syslog receiver. Edit /var/ossec/etc/ossec.conf:

    <ossec_config>
    <remote>
    <connection>syslog</connection>
    <port>514</port>
    <protocol>tcp</protocol>
    <allowed-ips>192.168.0.0/16</allowed-ips>
    </remote>
    </ossec_config>

    Restart the manager:

    Terminal window
    sudo systemctl restart wazuh-manager
  2. For Graylog, create a Syslog TCP input through the web interface. Navigate to System > Inputs > Select Input > Syslog TCP > Launch new input. Configure the input with title “Network Syslog”, bind address 0.0.0.0, port 514, and enable “Store full message”.

  3. Configure your firewall to forward syslog. For pfSense, navigate to Status > System Logs > Settings. Under Remote Logging Options, enable Remote Logging, set the remote log server to siem.example.org:514, and select all relevant syslog content categories.

  4. Verify log reception. On Wazuh, archive logging must be enabled (<logall>yes</logall> in the manager's ossec.conf) for events to appear in archives.log:

    Terminal window
    sudo tail -f /var/ossec/logs/archives/archives.log | grep "pfsense"

    You should see firewall events within 60 seconds of configuration.

Integrate cloud services

Cloud services require API-based integration using service-specific connectors or log export configurations.

  1. For Microsoft 365 with Sentinel, enable the Microsoft 365 data connector:

    Terminal window
    # Enable Office 365 connector
    az sentinel data-connector connect \
    --resource-group $RESOURCE_GROUP \
    --workspace-name $WORKSPACE_NAME \
    --data-connector-id "Office365" \
    --kind "Office365" \
    --tenant-id "your-tenant-id" \
    --exchange true \
    --sharepoint true \
    --teams true
  2. For Microsoft 365 with Wazuh or Graylog, configure Office 365 Management Activity API. Create an Azure AD application with az ad app create --display-name "SIEM-O365-Integration" and note the Application (client) ID from the output. Grant the application the Office 365 Management APIs permissions for ActivityFeed.Read and ActivityFeed.ReadDlp through the Azure portal.

  3. For Google Workspace, export logs to Cloud Logging and configure a Pub/Sub export to your SIEM. In Google Cloud Console, navigate to Logging > Logs Router, create a sink with a Pub/Sub topic as the destination, and configure your SIEM to subscribe to that topic using appropriate credentials.

Phase 3: Detection rule development

Detection rules transform raw log data into actionable alerts. Effective rules balance sensitivity (detecting threats) against specificity (avoiding false positives). Start with high-fidelity rules targeting known attack patterns, then expand coverage based on your threat model.

+------------------------------------------------------------------+
|                    RULE DEVELOPMENT WORKFLOW                     |
+------------------------------------------------------------------+
|                                                                  |
|  +-------------+      +-------------+      +-------------+       |
|  | Threat      |      | Data        |      | Rule        |       |
|  | Model       +----->| Mapping     +----->| Logic       |       |
|  |             |      |             |      |             |       |
|  | What attacks|      | What logs   |      | What        |       |
|  | matter?     |      | contain     |      | conditions  |       |
|  |             |      | evidence?   |      | indicate    |       |
|  +-------------+      +-------------+      | attack?     |       |
|                                            +------+------+       |
|                                                   |              |
|                                                   v              |
|  +-------------+      +-------------+      +-------------+       |
|  | Deploy      |      | Tune        |<-----+ Test        |       |
|  | to          |<-----+ (reduce     |      | (validate   |       |
|  | Production  |      | FPs)        |      | detection)  |       |
|  +-------------+      +-------------+      +-------------+       |
|                                                                  |
+------------------------------------------------------------------+

Figure 3: Detection rule development workflow from threat model to deployment

Develop authentication anomaly rules

Authentication events provide high-value detection opportunities. These rules detect brute force attempts, credential stuffing, and suspicious login patterns.

  1. Create a brute force detection rule. For Wazuh, add to /var/ossec/etc/rules/local_rules.xml:

    <group name="authentication,brute_force">
    <!-- Trigger on 5 failed logins within 60 seconds -->
    <rule id="100001" level="10" frequency="5" timeframe="60">
    <if_matched_sid>5710</if_matched_sid>
    <same_source_ip />
    <description>Brute force attack detected from $(srcip)</description>
    <mitre>
    <id>T1110</id>
    </mitre>
    </rule>
    </group>

    This rule fires when the same source IP generates 5 failed authentication events (rule 5710) within 60 seconds.

  2. Create a rule for authentication from unusual locations. This requires GeoIP enrichment:

    <group name="authentication,geolocation">
    <rule id="100002" level="8">
    <if_sid>5715</if_sid>
    <geoip_not>GB,IE,NL,DE,FR</geoip_not>
    <description>Successful login from unexpected country: $(geoip.country_code)</description>
    <mitre>
    <id>T1078</id>
    </mitre>
    </rule>
    </group>

    Adjust the country list to match your organisation’s operating locations.

  3. Test rule triggers using the rule testing utility:

    Terminal window
    sudo /var/ossec/bin/wazuh-logtest

    Paste a sample log line such as Mar 15 10:23:45 server sshd[12345]: Failed password for admin from 203.0.113.50 port 54321 ssh2 and verify the rule matches. The expected output shows rule 100001 triggered after the threshold is met.
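The frequency and timeframe semantics of rule 100001 in step 1 can be sketched as a toy correlation over (timestamp, source IP) pairs; the window here is anchored at each IP's first event, a simplification of the real correlation engine, and the event times are made up:

```shell
# Flag any source IP with 5 failures inside a 60-second window
# (toy model of frequency="5" timeframe="60"; input is "epoch_seconds source_ip")
printf '%s\n' \
  "10 203.0.113.50" "20 203.0.113.50" "25 203.0.113.50" \
  "30 203.0.113.50" "40 203.0.113.50" "300 198.51.100.7" |
  awk '{
    count[$2]++
    if (count[$2] == 1) first[$2] = $1
    if (count[$2] >= 5 && $1 - first[$2] <= 60 && !fired[$2]) {
      print "brute force from", $2
      fired[$2] = 1
    }
  }'
# Prints: brute force from 203.0.113.50
```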

Develop privilege escalation rules

Privilege escalation detection identifies attempts to gain elevated access beyond initial compromise.

  1. Detect sudo usage by non-administrative users:

    <group name="privilege_escalation,sudo">
    <rule id="100010" level="7">
    <if_sid>5401</if_sid>
    <user_not>admin|serviceaccount|ansible</user_not>
    <description>Sudo executed by non-administrative user: $(dstuser)</description>
    <mitre>
    <id>T1548.003</id>
    </mitre>
    </rule>
    </group>
  2. Detect Windows privilege escalation patterns. Monitor for Event ID 4672 (special privileges assigned):

    <group name="privilege_escalation,windows">
    <rule id="100011" level="6">
    <if_sid>60106</if_sid>
    <field name="win.system.eventID">4672</field>
    <user_not>SYSTEM|Administrator|ServiceAccount$</user_not>
    <description>Special privileges assigned to $(win.eventdata.subjectUserName)</description>
    <mitre>
    <id>T1134</id>
    </mitre>
    </rule>
    </group>

Develop data exfiltration rules

Data exfiltration rules detect unusual data movement patterns that indicate theft.

  1. Detect large outbound transfers. This requires network flow data or proxy logs:

    <group name="exfiltration,network">
    <rule id="100020" level="8">
    <if_group>web-log</if_group>
    <field name="bytes_out">^[0-9]{8,}$</field>
    <description>Large outbound transfer detected: $(bytes_out) bytes to $(url)</description>
    <mitre>
    <id>T1048</id>
    </mitre>
    </rule>
    </group>

    This rule triggers on transfers exceeding 10MB (8+ digit byte count).

  2. Detect uploads to cloud storage services:

    <group name="exfiltration,cloud">
    <rule id="100021" level="6">
    <if_group>web-log</if_group>
    <url>dropbox.com|drive.google.com|onedrive.live.com|box.com</url>
    <field name="http_method">POST|PUT</field>
    <description>Upload to cloud storage: $(url)</description>
    <mitre>
    <id>T1567</id>
    </mitre>
    </rule>
    </group>
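The byte-count threshold behind rule 100020 (8 or more digits means at least 10,000,000 bytes) can be checked against sample values:

```shell
# The ^[0-9]{8,}$ pattern matches byte counts of 10,000,000 (10MB) and above
echo "10485760" | grep -Eq '^[0-9]{8,}$' && echo "matches"
echo "9999999"  | grep -Eq '^[0-9]{8,}$' || echo "below threshold"
# Prints: matches
#         below threshold
```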

Phase 4: Alert tuning

Untuned SIEM deployments generate thousands of alerts daily, leading to analyst fatigue and missed threats. Systematic tuning reduces false positives while maintaining detection coverage. Plan for 2 to 4 weeks of tuning after initial deployment.

  1. Establish baseline alert volume. Run the SIEM for 7 days without tuning to understand normal alert patterns:

    Terminal window
    # Wazuh: Count alerts by rule over past 7 days
    sudo cat /var/ossec/logs/alerts/alerts.json | \
    jq -r '.rule.id' | sort | uniq -c | sort -rn | head -20

    Sample output shows rule 5710 (failed authentication) generated 4521 alerts, rule 5501 generated 2103 alerts, and rule 5402 generated 1892 alerts. High-volume rules indicate likely false positives from legitimate activity.

  2. Identify tuning candidates. Rules generating more than 500 alerts per day require immediate tuning. Rules generating 100 to 500 alerts per day should be reviewed within the first week. Rules generating 10 to 100 alerts per day can be reviewed within the first month. Rules generating under 10 alerts per day should be monitored for false negatives.

  3. Apply exclusions for known benign activity. For the high-volume failed authentication rule, exclude service accounts with expected failures:

    <rule id="100001" level="10" frequency="5" timeframe="60">
    <if_matched_sid>5710</if_matched_sid>
    <same_source_ip />
    <user_not>healthcheck|monitoring|backup</user_not>
    <description>Brute force attack detected from $(srcip)</description>
    </rule>
  4. Adjust thresholds based on environment. If legitimate users occasionally trigger brute force rules due to password manager sync issues, increase the threshold:

    <!-- Increase from 5 to 10 failures, extend timeframe to 120 seconds -->
    <rule id="100001" level="10" frequency="10" timeframe="120">
  5. Document all tuning decisions. Create a tuning log recording the rule ID modified, original alert volume, tuning applied, post-tuning alert volume, and justification for the change. This documentation supports audit requirements and helps onboard new analysts.

  6. Review tuning effectiveness after 7 days:

    Terminal window
    # Compare pre and post tuning volumes
    sudo cat /var/ossec/logs/alerts/alerts.json | \
    jq -r 'select(.timestamp > "2024-03-15") | .rule.id' | \
    sort | uniq -c | sort -rn | head -20

    Target a 60 to 80 percent reduction in noise alerts while maintaining detection of true positives.
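The triage thresholds from step 2 and the reduction target from step 6 can be expressed as a quick script; the alert counts are illustrative:

```shell
# Triage buckets for daily alert volume per rule (thresholds from step 2)
triage() {
  if   [ "$1" -gt 500 ]; then echo "tune immediately"
  elif [ "$1" -ge 100 ]; then echo "review within first week"
  elif [ "$1" -ge 10  ]; then echo "review within first month"
  else echo "monitor for false negatives"
  fi
}
triage 4521   # prints: tune immediately
triage 45     # prints: review within first month
# Noise reduction achieved against the 60-80% target (counts are illustrative)
awk -v before=4521 -v after=1130 \
  'BEGIN { printf "%.0f%% reduction\n", (1 - after / before) * 100 }'
# Prints: 75% reduction
```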

Phase 5: Retention and storage configuration

Log retention balances compliance requirements, investigation needs, and storage costs. Configure tiered retention with hot storage for recent data requiring fast queries and cold storage for archival compliance.

  1. Calculate storage requirements based on your log volume. Measure current ingestion:

    Terminal window
    # Wazuh: Check daily index size (bytes=gb returns store.size as plain gigabytes,
    # so the awk arithmetic works on consistent units)
    curl -s "localhost:9200/_cat/indices/wazuh-alerts-*?h=store.size&bytes=gb" | \
    awk '{sum+=$1} END {print sum/NR " GB per day average"}'

    Typical volumes vary by source: a busy Windows server generates 500MB to 2GB daily, a Linux server generates 100MB to 500MB, a firewall serving 1000 users generates 2GB to 10GB, and a web proxy for 1000 users generates 5GB to 20GB.

  2. Configure index lifecycle management for Wazuh. Edit /etc/wazuh-indexer/opensearch.yml:

    # Enable ISM
    plugins.index_state_management.enabled: true

    Create an ISM policy via the dashboard or API:

    Terminal window
    curl -X PUT "localhost:9200/_plugins/_ism/policies/wazuh-retention" \
      -H 'Content-Type: application/json' -d '
    {
      "policy": {
        "description": "Wazuh alert retention policy",
        "default_state": "hot",
        "states": [
          {
            "name": "hot",
            "actions": [],
            "transitions": [
              { "state_name": "warm", "conditions": { "min_index_age": "7d" } }
            ]
          },
          {
            "name": "warm",
            "actions": [
              { "replica_count": { "number_of_replicas": 0 } }
            ],
            "transitions": [
              { "state_name": "cold", "conditions": { "min_index_age": "30d" } }
            ]
          },
          {
            "name": "cold",
            "actions": [
              { "read_only": {} }
            ],
            "transitions": [
              { "state_name": "delete", "conditions": { "min_index_age": "365d" } }
            ]
          },
          {
            "name": "delete",
            "actions": [
              { "delete": {} }
            ]
          }
        ],
        "ism_template": {
          "index_patterns": ["wazuh-alerts-*"]
        }
      }
    }'
  3. For Graylog, configure index rotation in System > Indices > Default index set. Set the index rotation strategy to Index Time with a rotation period of P1D (daily), the retention strategy to Delete, and the maximum number of indices to 365.

  4. Configure archival export for compliance retention beyond online storage. Set up daily export to object storage:

    Terminal window
    # Create snapshot repository (requires the repository-s3 plugin and S3 credentials)
    curl -X PUT "localhost:9200/_snapshot/s3_archive" \
      -H 'Content-Type: application/json' -d '
    {
      "type": "s3",
      "settings": {
        "bucket": "siem-archive-prod",
        "region": "eu-west-2",
        "base_path": "opensearch-snapshots"
      }
    }'
    # Create snapshot management policy; the SM policy API takes a "deletion" block
    # rather than "retention", and cron schedules need a timezone
    # (2555d approximates 7 years)
    curl -X PUT "localhost:9200/_plugins/_sm/policies/daily-archive" \
      -H 'Content-Type: application/json' -d '
    {
      "description": "Daily archive to S3",
      "creation": {
        "schedule": { "cron": { "expression": "0 2 * * *", "timezone": "UTC" } },
        "time_limit": "1h"
      },
      "deletion": {
        "schedule": { "cron": { "expression": "0 3 * * *", "timezone": "UTC" } },
        "condition": { "max_age": "2555d" },
        "time_limit": "1h"
      },
      "snapshot_config": {
        "indices": "wazuh-alerts-*",
        "repository": "s3_archive"
      }
    }'
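The per-source volumes in step 1 can be combined into a capacity projection; the fleet sizes below are illustrative, and the per-source figures are midpoints of the ranges given above:

```shell
# Daily ingest projection from source counts and per-source midpoints (GB/day)
awk 'BEGIN {
  windows  = 10 * 1.25    # 10 Windows servers at ~1.25 GB/day
  linux    = 40 * 0.3     # 40 Linux servers at ~0.3 GB/day
  firewall =  1 * 6       # one firewall at ~6 GB/day
  proxy    =  1 * 12.5    # one web proxy at ~12.5 GB/day
  total = windows + linux + firewall + proxy
  printf "%.1f GB/day, %.0f GB for 90 days online\n", total, total * 90
}'
# Prints: 43.0 GB/day, 3870 GB for 90 days online
```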

Verification

Confirm your SIEM deployment is functioning correctly by validating each component.

Platform health

Terminal window
# Wazuh: Check all components
sudo /var/ossec/bin/wazuh-control status
# Expected: All daemons running
# Check indexer cluster health
curl -s "localhost:9200/_cluster/health?pretty" | grep status
# Expected: "status" : "green"
# Check recent indexing
curl -s "localhost:9200/_cat/indices/wazuh-alerts-*?v&s=index:desc" | head -5
# Expected: Recent indices with document counts

Log ingestion

Terminal window
# Verify events arriving from each source type
# (requires <logall_json>yes</logall_json> on the manager)
# Windows agents
sudo cat /var/ossec/logs/archives/archives.json | \
jq 'select(.agent.name | test("WIN"))' | head -1
# Linux agents
sudo cat /var/ossec/logs/archives/archives.json | \
jq 'select(.agent.name | test("linux"))' | head -1
# Syslog sources
sudo cat /var/ossec/logs/archives/archives.json | \
jq 'select(.predecoder.program_name == "pfsense")' | head -1

Each query should return recent events from the respective source type.

Detection rules

Trigger a test alert to verify the detection pipeline:

Terminal window
# Generate failed logins to trigger brute force rule
# (BatchMode avoids an interactive password prompt)
for i in {1..10}; do
ssh -o BatchMode=yes -o ConnectTimeout=2 invaliduser@localhost 2>/dev/null
done
# Check for resulting alert (wait 60 seconds for processing)
sleep 60
sudo cat /var/ossec/logs/alerts/alerts.json | \
jq 'select(.rule.id == "100001")' | tail -1

The query should return your brute force alert with details of the triggering events.

Alert delivery

Verify alerts reach their configured destinations (email, ticketing system, or messaging platform):

Terminal window
# Check Wazuh integrations log
sudo tail -50 /var/ossec/logs/integrations.log | grep -i "sent\|delivered"

Troubleshooting

Symptom: Agent shows "Disconnected" in dashboard
Cause: Network connectivity or firewall blocking
Resolution: Verify port 1514/TCP is open between agent and manager; check the agent log at /var/ossec/logs/ossec.log for connection errors

Symptom: "No indices matching pattern" error
Cause: Indexer not receiving data or index naming mismatch
Resolution: Verify the indexer is running; check index names with curl localhost:9200/_cat/indices; ensure the index pattern matches actual indices

Symptom: High CPU usage on SIEM server
Cause: Insufficient resources or inefficient rules
Resolution: Check rule complexity; increase hardware resources; implement rule grouping to reduce evaluation overhead

Symptom: Alerts not appearing despite log ingestion
Cause: Rule not matching or alert level below threshold
Resolution: Test rules with wazuh-logtest; verify the rule level meets the minimum alert threshold (default 3)

Symptom: "Queue full" messages in logs
Cause: Ingestion exceeding processing capacity
Resolution: Increase queue size in ossec.conf; add processing threads; consider horizontal scaling

Symptom: GeoIP lookups failing
Cause: GeoIP database not installed or outdated
Resolution: Download the MaxMind GeoLite2 database; configure its path in ossec.conf; verify database file permissions

Symptom: Duplicate alerts for the same event
Cause: Multiple rules matching the same log line
Resolution: Review rule ordering and use if_sid for hierarchical rules; implement rule exclusions

Symptom: Storage filling rapidly
Cause: Retention policy not applied or excessive logging
Resolution: Verify the ISM policy is attached to indices; review log source verbosity; increase storage or reduce retention

Symptom: Dashboard login fails
Cause: Certificate mismatch or authentication backend issue
Resolution: Verify certificates match the hostname; check identity provider integration; review dashboard logs at /var/log/wazuh-dashboard/

Symptom: API queries timing out
Cause: Index size too large for the query
Resolution: Implement index rollover; use time-bounded queries; increase the query timeout in the client

Symptom: Windows agent not collecting PowerShell logs
Cause: Event channel not configured
Resolution: Add the PowerShell Operational channel to the agent's ossec.conf; verify the channel exists with wevtutil el

Symptom: Syslog events not parsing correctly
Cause: Parser not matching the log format
Resolution: Test with wazuh-logtest; create a custom decoder matching your device's syslog format

See also