Splunk Enterprise

Heavy Forwarder GUI not loading

Karthikeya
Communicator

We recently implemented a HF in our environment as part of ingesting Akamai logs into Splunk. We installed the Akamai add-on on the HF and it forwards the logs to the indexers. The catch is that the Akamai data volume is high (30k events in the last 5 minutes). Today our HF GUI is very slow and not loading at all. I tried a restart but it is still the same. Data ingestion is still going on (checked on the SH). Not sure what caused the HF GUI to stop loading. splunkd is still running in the backend, and web.conf also seems fine. We checked with Splunk support; they reviewed the diag file and it seems fine.
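
For reference, a search along these lines on the SH is enough to confirm events are still arriving (akamai_siem below is just a placeholder for the actual index name):

| tstats count where index=akamai_siem by _time span=5m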

 

Below is one of the errors I noticed in splunkd.log:

 

ERROR ModularInputs [10639 TcpChannelThread] - Argument validation for scheme = TA-Akamai-SIEM; killing process, because executing it took too long (over 30000 msecs.)

 


LAME-Creations
Path Finder
It sounds like your Heavy Forwarder (HF) is struggling to handle the high volume of Akamai logs (30k events in 5 minutes), which may be causing the GUI to become slow or unresponsive. The error in splunkd.log about the TA-Akamai-SIEM modular input timing out (exceeding 30,000 ms) suggests the modular input script is overloaded. Since data ingestion continues and splunkd is running, the issue is likely related to resource contention or configuration. Here’s how you can troubleshoot and resolve it:
  1. Check HF Resource Usage:
    • Monitor CPU, memory, and disk I/O on the HF using top or htop (Linux) or Task Manager (Windows). High resource usage could indicate the HF is overwhelmed by the Akamai log volume.
    • Use the Splunk Monitoring Console or a search such as | rest /services/server/info to check system metrics like CPU usage or memory consumption on the HF (an introspection search is sketched after this list).
  2. Tune Modular Input Timeout:
    • The TA-Akamai-SIEM modular input is timing out after 30 seconds (30,000 ms). Increase the timeout in $SPLUNK_HOME/etc/apps/TA-Akamai-SIEM/local/inputs.conf:
      [TA-Akamai-SIEM://<input_name>]
      interval = <your_interval>
      # Increase to 60 seconds (keep comments on their own line in .conf files)
      execution_timeout = 60000
    • Restart the HF after making this change ($SPLUNK_HOME/bin/splunk restart).
  3. Optimize TA-Akamai-SIEM Configuration:
    • Check the interval setting for the Akamai input in inputs.conf. A very short interval (e.g., 60 seconds) with high data volume (30k events/5 min) could overload the modular input. Consider increasing the interval (e.g., to 300 seconds) to reduce the frequency of API calls (a btool check to confirm the effective value is sketched after this list).
    • Verify the API query filters in the TA configuration. Narrow the scope (e.g., specific Akamai configurations or event types) to reduce the data volume if possible.
  4. Address GUI Unresponsiveness:
    • The GUI slowdown may be due to splunkd prioritizing data ingestion over web requests. Check $SPLUNK_HOME/etc/system/local/web.conf for max_threads or http_port settings. Increase max_threads if it’s too low:
      [settings]
      # Default is 10; adjust cautiously
      max_threads = 20
    • Confirm the HF’s web port (default 8000) is accessible via telnet <HF_IP> 8000 from your machine.
  5. Inspect splunkd.log Further:
    • Look for additional errors in $SPLUNK_HOME/var/log/splunk/splunkd.log related to TA-Akamai-SIEM or resource exhaustion (e.g., memory or thread limits); an example search over _internal is sketched after this list.
    • Check for errors in $SPLUNK_HOME/var/log/splunk/web_service.log for GUI-specific issues.
  6. Scale or Offload Processing:
    • If the HF is underpowered, consider upgrading its hardware (more CPU cores or RAM) to handle the 30k events/5 min load.
    • Alternatively, distribute the load by deploying multiple HFs and splitting the Akamai inputs across them, forwarding to the same indexers.
    • Ensure the TA-Akamai-SIEM add-on is only installed on the HF (not the Search Head or indexers) to avoid unnecessary processing.
  7. Engage Splunk Support:
    • Since Support reviewed the diag file, ask them to specifically analyze the TA-Akamai-SIEM modular input logs and any resource-related errors in splunkd.log. Share the timeout error and data volume details.
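
Regarding step 1, if the HF forwards its _introspection data to the indexers (most forwarder outputs.conf defaults include the internal indexes, but it is worth verifying), here is a minimal resource-usage sketch you can run from the SH; <hf_hostname> is a placeholder:

index=_introspection host=<hf_hostname> sourcetype=splunk_resource_usage component=Hostwide
| timechart span=5m avg(data.cpu_system_pct) AS cpu_system avg(data.cpu_user_pct) AS cpu_user avg(data.mem_used) AS mem_used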
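
Regarding step 3, the effective interval (and which .conf file it comes from) can be confirmed directly on the HF with btool; the grep pattern is just an assumption about how the Akamai stanzas are named:

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i akamai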
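
Regarding step 5, the same modular input errors can also be pulled from _internal on the SH (again assuming the HF forwards its internal logs), for example:

index=_internal host=<hf_hostname> sourcetype=splunkd log_level=ERROR component=ModularInputs "TA-Akamai-SIEM"
| timechart span=1h count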

livehybrid
SplunkTrust

Hi @Karthikeya 

Can you check whether anything is actually listening on port 8000 on the host (a quick check is sketched below)? How did you leave it with support? If they've got the diag, they should have a lot more info on what the issue could be here.

What are the specs of the HF box? It does sound like it could be under pressure, which could be causing issues with the UI, but it's too hard to say; please share as much info as possible.
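
For example, on a Linux HF something like the below would show whether Splunk Web is actually up and listening (assumes ss and curl are available, and that Splunk Web is plain HTTP on port 8000):

# Is splunkd (and the web helper) reported as running?
$SPLUNK_HOME/bin/splunk status

# Is anything listening on the web port?
ss -tlnp | grep ':8000'

# Does the web port answer locally? (use https:// if enableSplunkWebSSL is set)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000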

