It sounds like your Heavy Forwarder (HF) is struggling with the volume of Akamai logs (roughly 30k events every 5 minutes), which may be why the GUI has become slow or unresponsive. The error in splunkd.log about the TA-Akamai-SIEM modular input timing out (exceeding 30,000 ms) suggests the input script is overloaded. Since data ingestion continues and splunkd is running, the issue is most likely resource contention or configuration. Here is how you can troubleshoot and resolve it:

Check HF resource usage: Monitor CPU, memory, and disk I/O on the HF using top or htop (Linux) or Task Manager (Windows). Sustained high usage would indicate the HF is overwhelmed by the Akamai log volume. You can also use the Monitoring Console (if one is configured) or a quick `| rest /services/server/info` search to check the HF's system information.
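If you want a quick look from the HF's shell, something along these lines works (a minimal sketch for Linux; iostat assumes the sysstat package is installed, and the grep pattern assumes the modular input's command line includes the add-on path):

```
# Overall CPU/memory and the busiest processes on the HF
top -b -n 1 | head -n 15

# Disk utilization over three 5-second samples (requires the sysstat package)
iostat -x 5 3

# Check whether the TA's Python process is the one consuming CPU/memory
ps aux | grep -i "[T]A-Akamai-SIEM"
```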
Tune the modular input timeout: The TA-Akamai-SIEM modular input is timing out after 30 seconds (30,000 ms). Try raising the timeout for the input stanza in $SPLUNK_HOME/etc/apps/TA-Akamai-SIEM/local/inputs.conf (check the add-on's documentation for the exact parameter name in your TA version):

```ini
[TA-Akamai-SIEM://<input_name>]
interval = <your_interval>
# Increase the timeout to 60 seconds
execution_timeout = 60000
```

Restart the HF after making this change ($SPLUNK_HOME/bin/splunk restart).
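To confirm the change is actually picked up, btool shows the effective settings and which file each one comes from (a quick sanity check, assuming $SPLUNK_HOME is set in your shell):

```
$SPLUNK_HOME/bin/splunk btool inputs list --app=TA-Akamai-SIEM --debug
```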
Optimize the TA-Akamai-SIEM configuration: Check the interval setting for the Akamai input in inputs.conf. A very short interval (e.g., 60 seconds) combined with this data volume (~30k events per 5 minutes) can overload the modular input. Consider increasing the interval (e.g., to 300 seconds) to reduce the frequency of API calls. Also review the API query filters in the TA configuration; narrowing the scope (e.g., to specific Akamai security configurations or event types) reduces the volume pulled on each poll.
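Before changing the interval, it can help to confirm how much data each poll actually returns. A simple search over the Akamai data (the index and sourcetype below are placeholders; substitute your own) shows the per-5-minute event rate:

```
index=<your_akamai_index> sourcetype=<your_akamai_sourcetype> earliest=-4h
| timechart span=5m count
```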
Address the GUI unresponsiveness: The slowdown may be due to splunkd prioritizing data ingestion over web requests. Check $SPLUNK_HOME/etc/system/local/web.conf for the max_threads and httpport settings, and increase max_threads if it is too low:

```ini
[settings]
# Default is 10; adjust cautiously
max_threads = 20
```

Confirm the HF's web port (default 8000) is reachable from your machine, e.g., telnet <HF_IP> 8000.

Inspect splunkd.log further: Look for additional errors in $SPLUNK_HOME/var/log/splunk/splunkd.log related to TA-Akamai-SIEM or resource exhaustion (e.g., memory or thread limits), and check $SPLUNK_HOME/var/log/splunk/web_service.log for GUI-specific issues (a grep sketch is included at the end of this answer).

Scale or offload processing: If the HF is underpowered, consider adding CPU cores or RAM to handle the 30k events/5 min load. Alternatively, distribute the load by deploying additional HFs and splitting the Akamai inputs across them, all forwarding to the same indexers (an illustrative stanza layout is at the end of this answer). Also make sure the Akamai input itself is enabled only on the HF, not on the Search Head or indexers, so collection is not duplicated.

Engage Splunk Support: Since Support has already reviewed the diag file, ask them to specifically analyze the TA-Akamai-SIEM modular input logs and any resource-related errors in splunkd.log, and share the timeout error and the data volume details with them.
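For the log review mentioned above, something like this narrows things down quickly (again assuming $SPLUNK_HOME is set in your shell):

```
# Recent TA-Akamai-SIEM and timeout-related messages from splunkd.log
grep -iE "akamai|timed out|timeout" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -n 50

# Recent errors on the GUI side
grep -iE "error|timeout" $SPLUNK_HOME/var/log/splunk/web_service.log | tail -n 50
```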
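And if you do split collection across two HFs, the rough idea is one input stanza per subset of your Akamai security configurations on each forwarder. The stanza names below are placeholders and the scope/credential parameters depend on your TA version, so treat this as a sketch rather than a working config:

```ini
# On HF #1 - covers one subset of the Akamai security configurations
[TA-Akamai-SIEM://akamai_subset_a]
interval = 300
# ...scope and credential parameters per the TA's inputs.conf.spec...

# On HF #2 - covers the remaining configurations
[TA-Akamai-SIEM://akamai_subset_b]
interval = 300
# ...scope and credential parameters per the TA's inputs.conf.spec...
```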