Splunk IT Service Intelligence

How do I resolve *nix devices showing status "unstable" / no data shown?




I'm seeing a rather strange (to me, a newbie) issue with a number of *nix devices.
After the UF agent install, the devices reported data for a couple of days but showed status "unstable".
A day later the devices stopped updating in Splunk.
On the devices I found this error message:

  ● splunk.service - SYSV: Splunk indexer service
     Loaded: loaded (/etc/rc.d/init.d/splunk; bad; vendor preset: disabled)
     Active: inactive (dead)
       Docs: man:systemd-sysv-generator(8)

Warning: splunk.service changed on disk. Run 'systemctl daemon-reload' to reload units.
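Acting on that warning looks roughly like this (the service name "splunk" is taken from the status output above; run as root or via sudo):

  # Reload systemd's view of the changed unit file, then start and inspect the service
  sudo systemctl daemon-reload
  sudo systemctl start splunk
  sudo systemctl status splunk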
I found that some people experienced a similar issue and fixed it with an update to the init.d script:

splunk_start() {
  echo Starting Splunk...
  # Raise the hard and soft open-file limits before launching splunkd
  ulimit -Hn 20240
  ulimit -Sn 10240
  # The remainder of the function is assumed from the generated init
  # script; adjust the path if your UF is not under /opt/splunkforwarder
  "/opt/splunkforwarder/bin/splunk" start --no-prompt --answer-yes
}
I implemented the proposed change and it did help for a few days.
Now I see the devices being updated in Splunk on a regular basis, but they are reported as "unstable" and no CPU/MEMORY/DISK data is being reported.
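For what it's worth, here is roughly how I verified that the raised limits took effect for the running splunkd process (my own check; pgrep -o picks the oldest matching process):

  # Effective open-file limits of the running splunkd
  grep -i "open files" /proc/$(pgrep -o splunkd)/limits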

Thank you in advance 



Actually both. Not getting any data is the bigger issue, but it is also important to understand the nature of (and fix for) the client being reported as "unstable".



Thank you for the reply.

  1. The UF agent starts after a reboot with no issues.
  2. It runs for a while (a day, a day and a half).
  3. During that time, utilization of all monitored resources is normal.
  4. When it stops, I see messages in the logs: [242543 TcpOutEloop] - Cooked connection to ip=x.x.x.x:9997 timed out.

    source = /opt/splunkforwarder/var/log/splunk/splunkd.log

Sounds like something is preventing the connection.
I wonder if it is the size of the queue.
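To rule out the network path, something like this from an affected node should show whether the indexer's receiving port is reachable and what the forwarder is actually configured to send to (indexer IP elided as in the log):

  # Test reachability of the indexer's receiving port (nc = netcat)
  nc -vz x.x.x.x 9997
  # Show the effective output settings on this UF
  /opt/splunkforwarder/bin/splunk btool outputs list --debug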

0 Karma

Splunk Employee


Cooked connection to ip=x.x.x.x:9997 timed out. That would be the connection to the indexer the UF is forwarding to. Is it not attempting to connect to the next indexer in the cluster?
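For context, fail-over between indexers is driven by the server list in the UF's outputs.conf. A minimal sketch (the hostnames are placeholders, not values from this thread):

  [tcpout]
  defaultGroup = primary_indexers

  [tcpout:primary_indexers]
  server = idx1.example.com:9997, idx2.example.com:9997

With more than one entry in server, the UF load-balances across the indexers and retries the next entry when a connection times out.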

Yes, you might be onto something. If the indexers are at their limit, the ingestion queues will fill up and eventually data from UFs will be blocked, or at least delayed, until the indexer can work through the queues.

Using the monitoring console to check whether the indexing queues are getting blocked would be a good start.
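Outside the monitoring console, a quick way to spot blocked queues is to search the internal metrics from the CLI; a sketch, assuming a default /opt/splunk install path on the search head:

  # Count blocked-queue events per host and queue over the last 24 hours
  /opt/splunk/bin/splunk search 'index=_internal source=*metrics.log group=queue blocked=true earliest=-24h | stats count by host, name'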



Splunk Employee


Is the problem that the affected nodes stop forwarding data, or that they are flagged as unstable?

Check whether the affected nodes have the Splunk UF running or not, to determine whether the issue is the Splunk process not running or a data forwarding problem (firewall / network / ports etc.).
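On an affected node, something like this shows whether the process is up (the UF path matches the splunkd.log source quoted earlier in the thread):

  # Via systemd/init, using the unit name from the status output above
  sudo systemctl status splunk
  # Or directly via the Splunk CLI
  sudo /opt/splunkforwarder/bin/splunk status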

If the Splunk process is not started when a server is rebooted, have a look here for how to enable Splunk boot start: https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/ConfigureSplunktostartatboottime.

Also, if these are Linux hosts, note that the setup is slightly different if boot start should be enabled with systemd; see https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/RunSplunkassystemdservice#Configure_systemd...
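For reference, enabling boot start on a UF looks roughly like this (run as root; the -user value "splunkfwd" is an assumption, substitute the account splunkd runs as):

  # Classic init.d boot start
  /opt/splunkforwarder/bin/splunk enable boot-start -user splunkfwd
  # systemd-managed variant (Splunk 7.2.2 and later)
  /opt/splunkforwarder/bin/splunk enable boot-start -user splunkfwd -systemd-managed 1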

