Splunk ITSI

How do I resolve *nix devices showing status "unstable" / no data shown?

bigll
Path Finder

Hi.

I am seeing a rather strange (for me, a newbie) issue with a number of *nix devices.
After the UF agent install, the devices reported data for a couple of days but showed status "unstable".
A day later the devices stopped updating in Splunk.
On the devices I found this error message:

  • splunk.service - SYSV: Splunk indexer service
     Loaded: loaded (/etc/rc.d/init.d/splunk; bad; vendor preset: disabled)
     Active: inactive (dead)
       Docs: man:systemd-sysv-generator(8)

Warning: splunk.service changed on disk. Run 'systemctl daemon-reload' to reload
--------------
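Per the warning in the last line, the first thing to try would presumably be a daemon-reload and restart (a sketch, assuming root access and that the unit is still named splunk.service):

  # Pick up the changed unit file, then check and restart the service
  systemctl daemon-reload
  systemctl status splunk
  systemctl start splunk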
I found that some people experienced a similar issue and fixed it by updating the init.d script:
-------------

splunk_start() {
  echo Starting Splunk...
  ulimit -Hn 20240
  ulimit -Sn 10240
----------------
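For context, here is roughly where those ulimit lines sit in the init script's start function; the path and start options below are assumptions based on a default Universal Forwarder install under /opt/splunkforwarder, not a verbatim copy of the generated script:

  splunk_start() {
    echo Starting Splunk...
    # Raise the hard and soft open-file limits before splunkd launches
    ulimit -Hn 20240
    ulimit -Sn 10240
    # Assumed default UF location; adjust to your install path
    "/opt/splunkforwarder/bin/splunk" start --no-prompt --answer-yes
  }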
I implemented the proposed change and it did help for a few days.
Now I see the devices being updated in Splunk on a regular basis, but they are still reported as "unstable" and no CPU/MEMORY/DISK data is being reported.
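For reference, one way to confirm on an affected forwarder whether the scripted inputs that should be producing the CPU/MEMORY/DISK data are actually enabled; this is only a sketch and assumes the data comes from the Splunk Add-on for Unix and Linux (Splunk_TA_nix):

  # List the scripted-input stanzas the forwarder knows about (expect cpu.sh,
  # vmstat.sh, df.sh with disabled = 0 if the *nix add-on is deployed)
  /opt/splunkforwarder/bin/splunk cmd btool inputs list script --debug | grep -i nix
  # Ask splunkd which inputs it is currently running (prompts for credentials)
  /opt/splunkforwarder/bin/splunk list inputstatus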

Thank you in advance 


bigll
Path Finder

Actually both. Not getting any data is the bigger issue, but it is also important to understand the nature of (and fix for) the issue with the client being reported as "unstable".


bigll
Path Finder

Thank you for the reply.

  1. The UF agent starts after a reboot with no issues.
  2. It runs for a while (a day, a day and a half).
  3. During that time, utilization of all monitored resources is normal.
  4. When it stops, I see messages in the logs: [242543 TcpOutEloop] - Cooked connection to ip=x.x.x.x:9997 timed out.

    source = /opt/splunkforwarder/var/log/splunk/splunkd.log

Sounds like something is preventing the connection.
I wonder if it is the size of the queue.
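A few forwarder-side checks that could help narrow this down (a sketch, assuming the default /opt/splunkforwarder install path and the indexer IP/port from the log line above):

  # Recent connection errors from the forwarder's own log
  grep -i "timed out" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20
  # Any blocked queues reported by the forwarder itself
  grep "blocked=true" /opt/splunkforwarder/var/log/splunk/metrics.log | tail -20
  # Is the indexer's receiving port reachable at all? (replace x.x.x.x)
  nc -vz x.x.x.x 9997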


srauhala_splunk
Splunk Employee

Hi! 

"Cooked connection to ip=x.x.x.x:9997 timed out." That would be the connection to the indexer the UF is forwarding to. Is it not attempting to connect to the next indexer in the cluster?
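One quick way to see which indexers the UF is configured to fail over between (a sketch, assuming the default install path):

  # Show the effective tcpout settings; with several hosts listed under "server",
  # the UF should try the next indexer when one connection times out
  /opt/splunkforwarder/bin/splunk cmd btool outputs list tcpout --debug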

Yes, you might be onto something: if the indexers are at their limit, the ingestion queues will fill up, and eventually data from the UFs will be blocked, or at least delayed, until the indexers can work through the queues.

Using the monitoring console to check whether the indexing queues are getting blocked could be a good start.
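If it is quicker than clicking through the dashboards, roughly the same check can be run as a search against the indexers' metrics.log; a sketch only, with the splunk binary path and -auth credentials as placeholders:

  # Count blocked-queue events per indexer and queue over the last hour;
  # non-zero counts point at where the pipeline is backing up
  /opt/splunk/bin/splunk search 'index=_internal source=*metrics.log* group=queue blocked=true earliest=-1h | stats count by host, name' -auth admin:changeme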

/Seb 


srauhala_splunk
Splunk Employee

Hi! 

Is the problem that the affected nodes stop forwarding data, or that they are flagged as unstable?

Check whether the affected nodes have the Splunk UF running or not, to rule out whether the issue is the Splunk process not running or a data forwarding problem (firewall / network / ports etc.).
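A quick way to check that on an affected host (a sketch; the path assumes a default UF install and the SysV unit name from the original post):

  # Is splunkd actually running?
  /opt/splunkforwarder/bin/splunk status
  systemctl status splunk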

If the Splunk process is not started when a server is rebooted, have a look here on how to enable Splunk boot start: https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/ConfigureSplunktostartatboottime.

Also, if these are Linux hosts, note that there is a slightly different setup if boot start should be enabled with systemd: see https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/RunSplunkassystemdservice#Configure_systemd...
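Roughly what the systemd-managed setup looks like on a UF; a sketch only (run as root), and the -user value and the default unit name SplunkForwarder are assumptions to adjust for your environment:

  /opt/splunkforwarder/bin/splunk stop
  # Remove the old SysV boot-start entry if present, then register a systemd unit
  /opt/splunkforwarder/bin/splunk disable boot-start
  /opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1 -user splunk
  systemctl daemon-reload
  systemctl start SplunkForwarder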

/Seb 
