SIEM Implementation
A Security Information and Event Management (SIEM) platform aggregates log data from across your infrastructure, normalises events into a common schema, correlates activity patterns, and generates alerts when detection rules match. This task covers deploying a SIEM platform, integrating log sources, developing detection rules, and tuning alerts to achieve effective threat detection without overwhelming your team with false positives.
The procedure applies to self-hosted open source platforms (Wazuh, Graylog) and cloud-native commercial options (Microsoft Sentinel). Platform-specific steps appear in clearly marked variants. Budget 40 to 80 hours for initial deployment depending on infrastructure complexity and log source count.
Prerequisites
| Requirement | Specification |
|---|---|
| Infrastructure | Linux server with 8+ CPU cores, 32GB RAM, 500GB SSD for up to 50 log sources; scale linearly for additional sources |
| Network access | Inbound connectivity from all log sources on syslog (UDP/TCP 514), Beats (TCP 5044), HTTPS (TCP 443), or agent traffic (TCP 1514 and enrolment TCP 1515 for Wazuh) |
| Storage | 1GB per day per 1000 events per second (EPS) baseline; plan for 90 days online retention minimum |
| Identity provider | SAML 2.0 or OIDC integration for analyst authentication |
| Administrative access | Root or sudo on deployment server; administrative credentials for log source systems |
| DNS | Resolvable hostname for SIEM endpoint (e.g., siem.example.org) |
| TLS certificate | Valid certificate for SIEM web interface and log ingestion endpoints |
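As a quick sanity check on the storage row, the table's baseline can be turned into a back-of-envelope calculation. This is a rough sketch only; the EPS value is illustrative, and real sizing should be validated against measured ingestion:

```shell
# Back-of-envelope online storage estimate using the baseline above:
# 1GB per day per 1000 EPS, retained for 90 days (EPS value is illustrative)
EPS=2000
RETENTION_DAYS=90
GB_PER_DAY=$(( EPS / 1000 ))
echo "Plan for at least $(( GB_PER_DAY * RETENTION_DAYS )) GB of online storage"
```

At 2000 EPS this yields 2GB per day and 180GB over the 90-day minimum, comfortably within the 500GB SSD specification.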
Verify server resources meet requirements:
```shell
# Check CPU cores (minimum 8)
nproc
# Expected: 8 or higher

# Check available RAM (minimum 32GB)
free -h | grep Mem
# Expected: Mem: 31Gi or higher

# Check available storage
df -h /var
# Expected: 500G+ available
```

Confirm network connectivity from a representative log source:

```shell
# Test syslog connectivity
nc -zv siem.example.org 514
# Expected: Connection to siem.example.org 514 port [tcp/syslog] succeeded!

# Test Beats connectivity
nc -zv siem.example.org 5044
# Expected: Connection to siem.example.org 5044 port [tcp/*] succeeded!
```

Procedure
Phase 1: Platform deployment
The deployment architecture positions the SIEM to receive logs from all network segments while maintaining security boundaries. A single-node deployment suits organisations processing under 5000 events per second. Multi-node clusters become necessary above this threshold or when high availability is required.
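To decide which architecture applies, convert your daily event count into sustained EPS. A minimal sketch, assuming an illustrative daily volume:

```shell
# Convert daily event volume to sustained events per second
# (86400 seconds per day; the event count is illustrative)
EVENTS_PER_DAY=200000000
EPS=$(( EVENTS_PER_DAY / 86400 ))
echo "${EPS} EPS sustained"
```

An average of roughly 2314 EPS sits below the 5000 EPS threshold, so a single node would suffice; sustained bursts well above the average may still justify a cluster.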
```
+------------------------------------------------------------------+
|                       NETWORK ARCHITECTURE                        |
+------------------------------------------------------------------+
|                                                                   |
|  +-----------------------+      +-----------------------+         |
|  |  CORPORATE SEGMENT    |      |  FIELD SEGMENT        |         |
|  |                       |      |                       |         |
|  |  +-------+ +-------+  |      |  +-------+ +-------+  |         |
|  |  |  AD   | | Mail  |  |      |  | Field | | Field |  |         |
|  |  |  DC   | | Server|  |      |  | Linux | | Linux |  |         |
|  |  +---+---+ +---+---+  |      |  +---+---+ +---+---+  |         |
|  |      |         |      |      |      |         |      |         |
|  +------|---------|------+      +------|---------|------+         |
|         |         |                    |         |                |
|         +----+----+                    +----+----+                |
|              |                              |                     |
|              |      +---------------+       |                     |
|              +----->|   FIREWALL    |<------+                     |
|                     |   (log fwd)   |                             |
|                     +-------+-------+                             |
|                             |                                     |
|                     +-------v-------+                             |
|                     |  SIEM SERVER  |                             |
|                     |               |                             |
|                     | - Wazuh/      |                             |
|                     |   Graylog     |                             |
|                     | - 8 cores     |                             |
|                     | - 32GB RAM    |                             |
|                     | - 500GB SSD   |                             |
|                     +---------------+                             |
|                                                                   |
+------------------------------------------------------------------+
```

Figure 1: SIEM deployment architecture with log flow from multiple network segments
Select your deployment variant and follow the corresponding steps.
Variant A: Wazuh deployment
Wazuh provides integrated SIEM and endpoint detection capabilities. The all-in-one installation deploys the Wazuh indexer (based on OpenSearch), Wazuh server, and Wazuh dashboard on a single node.
Download and run the Wazuh installation assistant:
```shell
curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
chmod +x wazuh-install.sh
sudo ./wazuh-install.sh -a
```

The installation takes 10 to 15 minutes. Upon completion, the script outputs administrative credentials, including the username `admin` and a generated password. Record these credentials securely. You will change the password after initial login.

Verify all services are running:

```shell
sudo systemctl status wazuh-manager
sudo systemctl status wazuh-indexer
sudo systemctl status wazuh-dashboard
```

Each service should show `active (running)`. If any service shows `failed`, check `/var/log/wazuh-install.log` for errors.

Configure TLS certificates for production use. The installation generates self-signed certificates by default. Replace these with certificates from your certificate authority:

```shell
# Stop services
sudo systemctl stop wazuh-dashboard
# Replace certificates
sudo cp /path/to/your/certificate.crt /etc/wazuh-dashboard/certs/wazuh-dashboard.crt
sudo cp /path/to/your/private.key /etc/wazuh-dashboard/certs/wazuh-dashboard.key
sudo cp /path/to/your/ca.crt /etc/wazuh-dashboard/certs/ca.crt
# Set permissions
sudo chown wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs/*
sudo chmod 400 /etc/wazuh-dashboard/certs/*.key
# Restart service
sudo systemctl start wazuh-dashboard
```

Configure memory allocation based on your server resources. Edit `/etc/wazuh-indexer/jvm.options`:

```
# Set heap size to 50% of available RAM, maximum 31GB
-Xms16g
-Xmx16g
```

For a 32GB server, allocate 16GB to the indexer. Restart the indexer after changes:

```shell
sudo systemctl restart wazuh-indexer
```

Access the web interface at `https://siem.example.org` and change the default password immediately through Settings > Security > Internal users.
Variant B: Graylog deployment
Graylog separates the web interface and processing engine from the search backend, providing flexibility in scaling components independently. This deployment uses MongoDB for configuration storage and OpenSearch for log indexing.
Install prerequisites. Graylog requires Java 17, MongoDB 6.0, and OpenSearch 2.x:
```shell
# Install Java 17
sudo apt update
sudo apt install openjdk-17-jre-headless -y
# Verify Java version
java -version
# Expected: openjdk version "17.0.x"
```

Install and configure MongoDB:

```shell
# Add MongoDB repository
curl -fsSL https://pgp.mongodb.com/server-6.0.asc | \
  sudo gpg -o /usr/share/keyrings/mongodb-server-6.0.gpg --dearmor
echo "deb [ signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg ] \
https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list
sudo apt update
sudo apt install mongodb-org -y
# Start and enable MongoDB
sudo systemctl enable --now mongod
```

Install and configure OpenSearch:

```shell
# Add OpenSearch repository
curl -fsSL https://artifacts.opensearch.org/publickeys/opensearch.pgp | \
  sudo gpg --dearmor -o /usr/share/keyrings/opensearch-keyring
echo "deb [signed-by=/usr/share/keyrings/opensearch-keyring] \
https://artifacts.opensearch.org/releases/bundle/opensearch/2.x/apt stable main" | \
  sudo tee /etc/apt/sources.list.d/opensearch-2.x.list
sudo apt update
sudo apt install opensearch -y
```

Configure OpenSearch for Graylog. Edit `/etc/opensearch/opensearch.yml`:

```yaml
cluster.name: graylog
node.name: siem-node-1
path.data: /var/lib/opensearch
path.logs: /var/log/opensearch
network.host: 127.0.0.1
discovery.type: single-node
plugins.security.disabled: true
```

Set JVM heap size in `/etc/opensearch/jvm.options` to 8GB, then start OpenSearch:

```shell
sudo systemctl enable --now opensearch
```

Install Graylog:

```shell
wget https://packages.graylog2.org/repo/packages/graylog-5.2-repository_latest.deb
sudo dpkg -i graylog-5.2-repository_latest.deb
sudo apt update
sudo apt install graylog-server -y
```

Generate required secrets for Graylog configuration:

```shell
# Generate password secret (minimum 64 characters)
pwgen -N 1 -s 96
# Example output: aB3dE5fG7hI9jK1lM3nO5pQ7rS9tU1vW3xY5zA7bC9dE1fG3hI5jK7lM9nO1pQ3rS5tU7vW9xY1z

# Generate SHA-256 hash of your admin password
echo -n "your-secure-password" | sha256sum | cut -d" " -f1
# Example output: 5e884898da28047d9165146ae81b25c4b293e8a29d93ce4f9f3c9f8c8f8c8f8c
```

Configure Graylog. Edit `/etc/graylog/server/server.conf`:

```
# Paste your generated password_secret here
password_secret = aB3dE5fG7hI9jK1lM3nO5pQ7rS9tU1vW3xY5zA7bC9dE1fG3hI5jK7lM9nO1pQ3rS5tU7vW9xY1z
# Paste your SHA-256 hash here
root_password_sha2 = 5e884898da28047d9165146ae81b25c4b293e8a29d93ce4f9f3c9f8c8f8c8f8c
# Set timezone
root_timezone = UTC
# Configure HTTP interface
http_bind_address = 0.0.0.0:9000
http_external_uri = https://siem.example.org/
# OpenSearch connection
elasticsearch_hosts = http://127.0.0.1:9200
```

Start Graylog and enable on boot:

```shell
sudo systemctl enable --now graylog-server
```

Access the web interface at `https://siem.example.org:9000`. Log in with username `admin` and the password whose SHA-256 hash you set in `root_password_sha2`.
Variant C: Microsoft Sentinel deployment
Microsoft Sentinel operates as a cloud-native SIEM within Azure. Deployment requires an Azure subscription with Log Analytics workspace.
Create a Log Analytics workspace using Azure CLI:
```shell
# Set variables
RESOURCE_GROUP="rg-security-prod"
WORKSPACE_NAME="law-sentinel-prod"
LOCATION="uksouth"
# Create resource group if needed
az group create --name $RESOURCE_GROUP --location $LOCATION
# Create Log Analytics workspace
az monitor log-analytics workspace create \
  --resource-group $RESOURCE_GROUP \
  --workspace-name $WORKSPACE_NAME \
  --location $LOCATION \
  --sku PerGB2018
```

Enable Microsoft Sentinel on the workspace:

```shell
# Get workspace resource ID
WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group $RESOURCE_GROUP \
  --workspace-name $WORKSPACE_NAME \
  --query id -o tsv)
# Enable Sentinel
az sentinel onboarding-state create \
  --resource-group $RESOURCE_GROUP \
  --workspace-name $WORKSPACE_NAME \
  --name "default"
```

Configure data retention. The default retention is 90 days. For compliance requirements exceeding 90 days:

```shell
# Set retention to 365 days
az monitor log-analytics workspace update \
  --resource-group $RESOURCE_GROUP \
  --workspace-name $WORKSPACE_NAME \
  --retention-time 365
```

Retention beyond 90 days incurs additional storage costs. At current pricing, 365-day retention adds approximately £0.10 per GB per month.

Configure commitment tiers if your expected ingestion exceeds 100GB per day. Commitment tiers provide significant cost savings compared to pay-as-you-go pricing at £2.30/GB. The 100GB tier costs approximately £196/day, the 200GB tier £368/day, and the 500GB tier £860/day. Apply your selected commitment tier:

```shell
az monitor log-analytics workspace update \
  --resource-group $RESOURCE_GROUP \
  --workspace-name $WORKSPACE_NAME \
  --capacity-reservation-level 100
```
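The extended-retention cost can be roughed out from the £0.10 per GB per month figure above. A sketch with an illustrative ingestion volume; actual costs depend on current Azure pricing:

```shell
# Rough monthly cost of retaining data beyond the default 90 days
DAILY_GB=50                 # average daily ingestion (illustrative)
EXTRA_DAYS=$(( 365 - 90 ))  # days retained beyond the default window
RATE_PENCE=10               # £0.10 per GB per month, in pence
awk -v gb="$DAILY_GB" -v days="$EXTRA_DAYS" -v rate="$RATE_PENCE" \
  'BEGIN { printf "Extended retention holds %d GB, costing about £%.2f/month\n", gb*days, gb*days*rate/100 }'
```

At 50GB per day, the extra 275 days of retention hold 13750GB, adding roughly £1375 per month.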
Phase 2: Log source integration
Log sources fall into three categories based on integration method: agent-based collection for endpoints and servers, syslog forwarding for network devices and appliances, and API-based collection for cloud services. Prioritise integration based on detection value and existing visibility gaps.
```
+------------------------------------------------------------------+
|                      LOG SOURCE INTEGRATION                       |
+------------------------------------------------------------------+
|                                                                   |
|   AGENT-BASED           SYSLOG               API-BASED            |
|   +-----------+         +-----------+        +-----------+        |
|   | Windows   |         | Firewall  |        | Microsoft |        |
|   | Servers   |         | (pfSense) |        | 365       |        |
|   +-----------+         +-----------+        +-----------+        |
|        |                     |                    |               |
|        | Wazuh Agent         | UDP/TCP 514        | Graph API     |
|        | or Beats            | (TLS preferred)    | (OAuth)       |
|        v                     v                    v               |
|   +-----------+         +-----------+        +-----------+        |
|   | Linux     |         | Network   |        | Google    |        |
|   | Servers   |         | Switches  |        | Workspace |        |
|   +-----------+         +-----------+        +-----------+        |
|        |                     |                    |               |
|        +---------+-----------+---------+----------+               |
|                  |                     |                          |
|                  v                     v                          |
|           +------+------+       +------+------+                   |
|           |    SIEM     |       |    SIEM     |                   |
|           |  (Wazuh/    |       |  (Sentinel) |                   |
|           |   Graylog)  |       |             |                   |
|           +-------------+       +-------------+                   |
|                                                                   |
+------------------------------------------------------------------+
```

Figure 2: Log source integration methods by source type
Integrate Windows endpoints
Windows event logs provide authentication events, process execution, PowerShell activity, and security alerts. Configure Windows Event Forwarding (WEF) for environments without agents, or deploy the Wazuh agent for comprehensive collection.
For Wazuh deployments, download and install the agent on each Windows endpoint:
```powershell
# Download agent (run as Administrator)
Invoke-WebRequest -Uri https://packages.wazuh.com/4.x/windows/wazuh-agent-4.7.0-1.msi `
  -OutFile wazuh-agent.msi
# Install with manager address
msiexec.exe /i wazuh-agent.msi /q `
  WAZUH_MANAGER="siem.example.org" `
  WAZUH_REGISTRATION_SERVER="siem.example.org" `
  WAZUH_AGENT_GROUP="windows-servers"
# Start service
NET START WazuhSvc
```

Verify the agent registered successfully. On the SIEM server:

```shell
sudo /var/ossec/bin/agent_control -l
```

The output lists all registered agents with their status. A successfully registered agent appears as `ID: 001, Name: WIN-SERVER-01, IP: 192.168.1.10, Status: Active`.

Configure additional Windows event channels for collection. Edit the agent configuration at `C:\Program Files (x86)\ossec-agent\ossec.conf` to add the Security channel, PowerShell Operational channel, and Sysmon Operational channel:

```xml
<ossec_config>
  <localfile>
    <location>Security</location>
    <log_format>eventchannel</log_format>
  </localfile>
  <localfile>
    <location>Microsoft-Windows-PowerShell/Operational</location>
    <log_format>eventchannel</log_format>
  </localfile>
  <localfile>
    <location>Microsoft-Windows-Sysmon/Operational</location>
    <log_format>eventchannel</log_format>
  </localfile>
</ossec_config>
```

Restart the agent to apply configuration:

```powershell
NET STOP WazuhSvc
NET START WazuhSvc
```
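Once several agents are enrolled, the listing from `agent_control -l` can be filtered to surface anything that has stopped reporting. A minimal sketch over a captured listing; the file path and sample agents are hypothetical:

```shell
# Captured agent_control -l style output (sample data, hypothetical)
cat > /tmp/agents.txt <<'EOF'
ID: 001, Name: WIN-SERVER-01, IP: 192.168.1.10, Status: Active
ID: 002, Name: linux-web-01, IP: 192.168.1.20, Status: Disconnected
ID: 003, Name: linux-db-01, IP: 192.168.1.21, Status: Never connected
EOF
# Surface any agent that is not Active for follow-up
grep -v "Status: Active" /tmp/agents.txt
```

Anything printed here warrants a connectivity check against port 1514/TCP from the agent side.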
Integrate Linux servers
Linux systems generate authentication logs, system events, and application logs. The Wazuh agent collects from standard log paths and monitors file integrity.
Add the Wazuh repository and install the agent:
```shell
# Add repository key
curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | \
  sudo gpg --dearmor -o /usr/share/keyrings/wazuh.gpg
# Add repository
echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] \
https://packages.wazuh.com/4.x/apt/ stable main" | \
  sudo tee /etc/apt/sources.list.d/wazuh.list
# Install agent
sudo apt update
sudo WAZUH_MANAGER="siem.example.org" \
  WAZUH_AGENT_GROUP="linux-servers" \
  apt install wazuh-agent -y
# Start and enable
sudo systemctl enable --now wazuh-agent
```

Verify agent connection:

```shell
sudo cat /var/ossec/logs/ossec.log | grep "Connected to"
# Expected: Connected to the server (siem.example.org)
```

Add application-specific log paths. Edit `/var/ossec/etc/ossec.conf` to include your application logs:

```xml
<ossec_config>
  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/nginx/access.log</location>
  </localfile>
  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/nginx/error.log</location>
  </localfile>
  <localfile>
    <log_format>json</log_format>
    <location>/var/log/app/application.json</location>
  </localfile>
</ossec_config>
```
Integrate network devices via syslog
Network devices, firewalls, and appliances support syslog forwarding. Configure devices to send logs to your SIEM’s syslog receiver.
For Wazuh, enable the syslog receiver. Edit `/var/ossec/etc/ossec.conf`:

```xml
<ossec_config>
  <remote>
    <connection>syslog</connection>
    <port>514</port>
    <protocol>tcp</protocol>
    <allowed-ips>192.168.0.0/16</allowed-ips>
  </remote>
</ossec_config>
```

Restart the manager:

```shell
sudo systemctl restart wazuh-manager
```

For Graylog, create a Syslog TCP input through the web interface. Navigate to System > Inputs > Select Input > Syslog TCP > Launch new input. Configure the input with title "Network Syslog", bind address 0.0.0.0, port 514, and enable "Store full message".

Configure your firewall to forward syslog. For pfSense, navigate to Status > System Logs > Settings. Under Remote Logging Options, enable Remote Logging, set the remote log server to `siem.example.org:514`, and select all relevant syslog content categories.

Verify log reception. On Wazuh:

```shell
sudo tail -f /var/ossec/logs/archives/archives.log | grep "pfsense"
```

You should see firewall events within 60 seconds of configuration.
Integrate cloud services
Cloud services require API-based integration using service-specific connectors or log export configurations.
For Microsoft 365 with Sentinel, enable the Microsoft 365 data connector:
```shell
# Enable Office 365 connector
az sentinel data-connector connect \
  --resource-group $RESOURCE_GROUP \
  --workspace-name $WORKSPACE_NAME \
  --data-connector-id "Office365" \
  --kind "Office365" \
  --tenant-id "your-tenant-id" \
  --exchange true \
  --sharepoint true \
  --teams true
```

For Microsoft 365 with Wazuh or Graylog, configure the Office 365 Management Activity API. Create an Azure AD application with `az ad app create --display-name "SIEM-O365-Integration"` and note the Application (client) ID from the output. Grant the application the Office 365 Management APIs permissions for ActivityFeed.Read and ActivityFeed.ReadDlp through the Azure portal.

For Google Workspace, export logs to Cloud Logging and configure a Pub/Sub export to your SIEM. In Google Cloud Console, navigate to Logging > Logs Router, create a sink with a Pub/Sub topic as the destination, and configure your SIEM to subscribe to that topic using appropriate credentials.
Phase 3: Detection rule development
Detection rules transform raw log data into actionable alerts. Effective rules balance sensitivity (detecting threats) against specificity (avoiding false positives). Start with high-fidelity rules targeting known attack patterns, then expand coverage based on your threat model.
```
+------------------------------------------------------------------+
|                    RULE DEVELOPMENT WORKFLOW                      |
+------------------------------------------------------------------+
|                                                                   |
|  +-------------+      +-------------+      +-------------+        |
|  |   Threat    |      |    Data     |      |    Rule     |        |
|  |   Model     +----->|   Mapping   +----->|   Logic     |        |
|  |             |      |             |      |             |        |
|  | What attacks|      | What logs   |      | What        |        |
|  | matter?     |      | contain     |      | conditions  |        |
|  |             |      | evidence?   |      | indicate    |        |
|  +-------------+      +-------------+      | attack?     |        |
|                                            +------+------+        |
|                                                   |               |
|                                                   v               |
|  +-------------+      +-------------+      +------+------+        |
|  |   Deploy    |      |    Tune     |<-----+    Test     |        |
|  |   to        |<-----+   (reduce   |      |  (validate  |        |
|  | Production  |      |    FPs)     |      |  detection) |        |
|  +-------------+      +-------------+      +-------------+        |
|                                                                   |
+------------------------------------------------------------------+
```

Figure 3: Detection rule development workflow from threat model to deployment
Develop authentication anomaly rules
Authentication events provide high-value detection opportunities. These rules detect brute force attempts, credential stuffing, and suspicious login patterns.
Create a brute force detection rule. For Wazuh, add to `/var/ossec/etc/rules/local_rules.xml`:

```xml
<group name="authentication,brute_force">
  <!-- Trigger on 5 failed logins within 60 seconds -->
  <rule id="100001" level="10" frequency="5" timeframe="60">
    <if_matched_sid>5710</if_matched_sid>
    <same_source_ip />
    <description>Brute force attack detected from $(srcip)</description>
    <mitre><id>T1110</id></mitre>
  </rule>
</group>
```

This rule fires when the same source IP generates 5 failed authentication events (rule 5710) within 60 seconds.

Create a rule for authentication from unusual locations. This requires GeoIP enrichment:

```xml
<group name="authentication,geolocation">
  <rule id="100002" level="8">
    <if_sid>5715</if_sid>
    <geoip_not>GB,IE,NL,DE,FR</geoip_not>
    <description>Successful login from unexpected country: $(geoip.country_code)</description>
    <mitre><id>T1078</id></mitre>
  </rule>
</group>
```

Adjust the country list to match your organisation's operating locations.

Test rule triggers using the rule testing utility:

```shell
sudo /var/ossec/bin/wazuh-logtest
```

Paste a sample log line such as `Mar 15 10:23:45 server sshd[12345]: Failed password for admin from 203.0.113.50 port 54321 ssh2` and verify the rule matches. The expected output shows rule 100001 triggered after the threshold is met.
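The threshold logic itself can also be prototyped on sample data outside the SIEM. This simplified awk sketch counts failures per source IP against the rule's threshold of 5; unlike the real rule it ignores the 60-second sliding window, and the log lines are illustrative:

```shell
cat > /tmp/auth-sample.log <<'EOF'
Mar 15 10:23:41 server sshd[12345]: Failed password for admin from 203.0.113.50 port 54321 ssh2
Mar 15 10:23:42 server sshd[12346]: Failed password for admin from 203.0.113.50 port 54322 ssh2
Mar 15 10:23:43 server sshd[12347]: Failed password for admin from 203.0.113.50 port 54323 ssh2
Mar 15 10:23:44 server sshd[12348]: Failed password for admin from 203.0.113.50 port 54324 ssh2
Mar 15 10:23:45 server sshd[12349]: Failed password for admin from 203.0.113.50 port 54325 ssh2
Mar 15 10:23:46 server sshd[12350]: Failed password for root from 198.51.100.7 port 40000 ssh2
EOF
# Count failed logins per source IP (4th field from the end) and flag
# any IP at or above the threshold of 5
awk '/Failed password/ { count[$(NF-3)]++ }
     END { for (ip in count) if (count[ip] >= 5) print "ALERT: brute force from " ip }' \
  /tmp/auth-sample.log
# Prints: ALERT: brute force from 203.0.113.50
```

Only 203.0.113.50 crosses the threshold; the single failure from 198.51.100.7 stays silent, mirroring how the frequency condition suppresses one-off failures.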
Develop privilege escalation rules
Privilege escalation detection identifies attempts to gain elevated access beyond initial compromise.
Detect sudo usage by non-administrative users:
```xml
<group name="privilege_escalation,sudo">
  <rule id="100010" level="7">
    <if_sid>5401</if_sid>
    <user_not>admin|serviceaccount|ansible</user_not>
    <description>Sudo executed by non-administrative user: $(dstuser)</description>
    <mitre><id>T1548.003</id></mitre>
  </rule>
</group>
```

Detect Windows privilege escalation patterns. Monitor for Event ID 4672 (special privileges assigned):

```xml
<group name="privilege_escalation,windows">
  <rule id="100011" level="6">
    <if_sid>60106</if_sid>
    <field name="win.system.eventID">4672</field>
    <user_not>SYSTEM|Administrator|ServiceAccount$</user_not>
    <description>Special privileges assigned to $(win.eventdata.subjectUserName)</description>
    <mitre><id>T1134</id></mitre>
  </rule>
</group>
```
Develop data exfiltration rules
Data exfiltration rules detect unusual data movement patterns that indicate theft.
Detect large outbound transfers. This requires network flow data or proxy logs:
```xml
<group name="exfiltration,network">
  <rule id="100020" level="8">
    <if_group>web-log</if_group>
    <field name="bytes_out">^[0-9]{8,}$</field>
    <description>Large outbound transfer detected: $(bytes_out) bytes to $(url)</description>
    <mitre><id>T1048</id></mitre>
  </rule>
</group>
```

This rule triggers on transfers exceeding 10MB (an 8+ digit byte count).
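The byte-count pattern can be verified in isolation before the rule goes live. A quick sketch feeding illustrative values through the same regular expression:

```shell
# Only 8+ digit byte counts (roughly 10MB and above) should match
printf '%s\n' 9999999 10000000 52428800 4096 | grep -E '^[0-9]{8,}$'
# Prints: 10000000 and 52428800
```

9999999 (7 digits, just under the 10MB boundary) falls through, confirming the threshold behaves as described.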
Detect uploads to cloud storage services:
```xml
<group name="exfiltration,cloud">
  <rule id="100021" level="6">
    <if_group>web-log</if_group>
    <url>dropbox.com|drive.google.com|onedrive.live.com|box.com</url>
    <field name="http_method">POST|PUT</field>
    <description>Upload to cloud storage: $(url)</description>
    <mitre><id>T1567</id></mitre>
  </rule>
</group>
```
Phase 4: Alert tuning
Untuned SIEM deployments generate thousands of alerts daily, leading to analyst fatigue and missed threats. Systematic tuning reduces false positives while maintaining detection coverage. Plan for 2 to 4 weeks of tuning after initial deployment.
Establish baseline alert volume. Run the SIEM for 7 days without tuning to understand normal alert patterns:
```shell
# Wazuh: Count alerts by rule over past 7 days
sudo cat /var/ossec/logs/alerts/alerts.json | \
  jq -r '.rule.id' | sort | uniq -c | sort -rn | head -20
```

Sample output shows rule 5710 (failed authentication) generated 4521 alerts, rule 5501 generated 2103 alerts, and rule 5402 generated 1892 alerts. High-volume rules indicate likely false positives from legitimate activity.
Identify tuning candidates. Rules generating more than 500 alerts per day require immediate tuning. Rules generating 100 to 500 alerts per day should be reviewed within the first week. Rules generating 10 to 100 alerts per day can be reviewed within the first month. Rules generating under 10 alerts per day should be monitored for false negatives.
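These review tiers can be encoded as a small helper for triage scripts. A sketch mirroring the thresholds above; the function name is illustrative:

```shell
tuning_priority() {
  # Map a rule's daily alert volume to the review tiers described above
  local count=$1
  if   [ "$count" -gt 500 ]; then echo "tune immediately"
  elif [ "$count" -ge 100 ]; then echo "review within first week"
  elif [ "$count" -ge 10 ];  then echo "review within first month"
  else                            echo "monitor for false negatives"
  fi
}
tuning_priority 4521
# Prints: tune immediately
```

Feeding the sample volume of 4521 alerts from the failed-authentication rule lands it in the immediate-tuning tier.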
Apply exclusions for known benign activity. For the high-volume failed authentication rule, exclude service accounts with expected failures:
```xml
<rule id="100001" level="10" frequency="5" timeframe="60">
  <if_matched_sid>5710</if_matched_sid>
  <same_source_ip />
  <user_not>healthcheck|monitoring|backup</user_not>
  <description>Brute force attack detected from $(srcip)</description>
</rule>
```

Adjust thresholds based on environment. If legitimate users occasionally trigger brute force rules due to password manager sync issues, increase the threshold:

```xml
<!-- Increase from 5 to 10 failures, extend timeframe to 120 seconds -->
<rule id="100001" level="10" frequency="10" timeframe="120">
```

Document all tuning decisions. Create a tuning log recording the rule ID modified, original alert volume, tuning applied, post-tuning alert volume, and justification for the change. This documentation supports audit requirements and helps onboard new analysts.
Review tuning effectiveness after 7 days:
```shell
# Compare pre and post tuning volumes
sudo cat /var/ossec/logs/alerts/alerts.json | \
  jq -r 'select(.timestamp > "2024-03-15") | .rule.id' | \
  sort | uniq -c | sort -rn | head -20
```

Target a 60 to 80 percent reduction in noise alerts while maintaining detection of true positives.
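To check whether a rule lands in the 60 to 80 percent target band, compute the reduction from its before and after counts. A sketch with illustrative volumes:

```shell
PRE=4521   # alerts in the 7 days before tuning (illustrative)
POST=1200  # alerts in the 7 days after tuning (illustrative)
awk -v pre="$PRE" -v post="$POST" \
  'BEGIN { printf "Noise reduction: %.0f%%\n", (pre - post) / pre * 100 }'
# Prints: Noise reduction: 73%
```

A 73 percent reduction sits inside the target band; a figure below 60 percent suggests further exclusions or threshold adjustments are needed.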
Phase 5: Retention and storage configuration
Log retention balances compliance requirements, investigation needs, and storage costs. Configure tiered retention with hot storage for recent data requiring fast queries and cold storage for archival compliance.
Calculate storage requirements based on your log volume. Measure current ingestion:
```shell
# Wazuh: Check daily index size
curl -s "localhost:9200/_cat/indices/wazuh-alerts-*?h=store.size,docs.count" | \
  awk '{sum+=$1} END {print sum/NR " GB per day average"}'
```

Typical volumes vary by source: a busy Windows server generates 500MB to 2GB daily, a Linux server generates 100MB to 500MB, a firewall serving 1000 users generates 2GB to 10GB, and a web proxy for 1000 users generates 5GB to 20GB.
Configure index lifecycle management for Wazuh. Edit
`/etc/wazuh-indexer/opensearch.yml`:

```yaml
# Enable ISM
plugins.index_state_management.enabled: true
```

Create an ISM policy via the dashboard or API:

```shell
curl -X PUT "localhost:9200/_plugins/_ism/policies/wazuh-retention" \
  -H 'Content-Type: application/json' -d '{
  "policy": {
    "description": "Wazuh alert retention policy",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "warm", "conditions": { "min_index_age": "7d" } }
        ]
      },
      {
        "name": "warm",
        "actions": [ { "replica_count": { "number_of_replicas": 0 } } ],
        "transitions": [
          { "state_name": "cold", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "cold",
        "actions": [ { "read_only": {} } ],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "365d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ]
      }
    ],
    "ism_template": { "index_patterns": ["wazuh-alerts-*"] }
  }
}'
```

For Graylog, configure index rotation in System > Indices > Default index set. Set the index rotation strategy to Index Time with a rotation period of P1D (daily), the retention strategy to Delete, and the maximum number of indices to 365.
Configure archival export for compliance retention beyond online storage. Set up daily export to object storage:
```shell
# Create snapshot repository
curl -X PUT "localhost:9200/_snapshot/s3_archive" \
  -H 'Content-Type: application/json' -d '{
  "type": "s3",
  "settings": {
    "bucket": "siem-archive-prod",
    "region": "eu-west-2",
    "base_path": "opensearch-snapshots"
  }
}'

# Create snapshot policy
curl -X PUT "localhost:9200/_plugins/_sm/policies/daily-archive" \
  -H 'Content-Type: application/json' -d '{
  "description": "Daily archive to S3",
  "creation": {
    "schedule": { "cron": { "expression": "0 2 * * *" } },
    "time_limit": "1h"
  },
  "snapshot_config": {
    "indices": "wazuh-alerts-*",
    "repository": "s3_archive"
  },
  "retention": {
    "expire_after": "7y"
  }
}'
```
Verification
Confirm your SIEM deployment is functioning correctly by validating each component.
Platform health
```shell
# Wazuh: Check all components
sudo /var/ossec/bin/wazuh-control status
# Expected: All daemons running

# Check indexer cluster health
curl -s "localhost:9200/_cluster/health?pretty" | grep status
# Expected: "status" : "green"

# Check recent indexing
curl -s "localhost:9200/_cat/indices/wazuh-alerts-*?v&s=index:desc" | head -5
# Expected: Recent indices with document counts
```

Log ingestion

```shell
# Verify events arriving from each source type
# Windows agents
sudo cat /var/ossec/logs/archives/archives.json | \
  jq 'select(.agent.name | test("WIN"))' | head -1

# Linux agents
sudo cat /var/ossec/logs/archives/archives.json | \
  jq 'select(.agent.name | test("linux"))' | head -1

# Syslog sources
sudo cat /var/ossec/logs/archives/archives.json | \
  jq 'select(.predecoder.program_name == "pfsense")' | head -1
```

Each query should return recent events from the respective source type.
Detection rules
Trigger a test alert to verify the detection pipeline:
```shell
# Generate failed logins to trigger brute force rule
for i in {1..10}; do
  ssh -o ConnectTimeout=2 invaliduser@localhost 2>/dev/null
done

# Check for resulting alert (wait 60 seconds for processing)
sleep 60
sudo cat /var/ossec/logs/alerts/alerts.json | \
  jq 'select(.rule.id == "100001")' | tail -1
```

The query should return your brute force alert with details of the triggering events.
Alert delivery
Verify alerts reach their configured destinations (email, ticketing system, or messaging platform):
```shell
# Check Wazuh integrations log
sudo tail -50 /var/ossec/logs/integrations.log | grep -i "sent\|delivered"
```

Troubleshooting
| Symptom | Cause | Resolution |
|---|---|---|
| Agent shows “Disconnected” in dashboard | Network connectivity or firewall blocking | Verify port 1514/TCP open between agent and manager; check agent log at /var/ossec/logs/ossec.log for connection errors |
| "No indices matching pattern" error | Indexer not receiving data or index naming mismatch | Verify indexer is running; check index names with curl localhost:9200/_cat/indices; ensure index pattern matches actual indices |
| High CPU usage on SIEM server | Insufficient resources or inefficient rules | Check rule complexity; increase hardware resources; implement rule grouping to reduce evaluation overhead |
| Alerts not appearing despite log ingestion | Rule not matching or alert level below threshold | Test rules with wazuh-logtest; verify rule level meets minimum alert threshold (default 3) |
| “Queue full” messages in logs | Ingestion exceeding processing capacity | Increase queue size in ossec.conf; add processing threads; consider horizontal scaling |
| GeoIP lookups failing | GeoIP database not installed or outdated | Download MaxMind GeoLite2 database; configure path in ossec.conf; verify database file permissions |
| Duplicate alerts for same event | Multiple rules matching same log line | Review rule ordering and use if_sid for hierarchical rules; implement rule exclusions |
| Storage filling rapidly | Retention policy not applied or excessive logging | Verify ISM policy attached to indices; review log source verbosity; increase storage or reduce retention |
| Dashboard login fails | Certificate mismatch or authentication backend issue | Verify certificates match hostname; check identity provider integration; review dashboard logs at /var/log/wazuh-dashboard/ |
| API queries timing out | Index size too large for query | Implement index rollover; use time-bounded queries; increase query timeout in client |
| Windows agent not collecting PowerShell logs | Event channel not configured | Add PowerShell Operational channel to agent ossec.conf; verify channel exists with wevtutil el |
| Syslog events not parsing correctly | Parser not matching log format | Test with wazuh-logtest; create custom decoder matching your device’s syslog format |