Getting Data In

Splunk server stops taking client data

jgauthier
Contributor

Situation:

I log in to Splunk and find that data that should be present is missing.
I log in to the client machine with the issue and find it's no longer connected:
tcp 0 0 192.168.74.5:40358 192.168.74.45:9997 TIME_WAIT

I restart the client, and it doesn't change. It will not connect. I restart the server, and voila:
tcp 0 0 192.168.74.5:39908 192.168.74.45:9997 ESTABLISHED

This has been happening to me repeatedly across a few revisions of 4.2 (I've tried multiple point releases). Currently I am running Splunk 4.2.2 (build 101277).

Is this a known issue? Can I provide more information? I don't want to have to babysit my data 😞 I have about 10-15 forwarders, and I never know which one it's going to be until I am missing data.
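In case it helps others triage faster, this is roughly how I check a forwarder before restarting anything (a rough sketch; the path assumes a default universal forwarder install):

# On the forwarder: is the outbound connection to the indexer still up?
netstat -tan | grep 9997

# Ask the forwarder which receivers it considers active vs. configured:
/opt/splunkforwarder/bin/splunk list forward-server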


jgauthier
Contributor

I wanted to follow up on this question. I did open a ticket and have been working with Splunk since November on it. The listener stopped because the indexer was clogged.

Figuring out why the indexer was clogged took some time. Ultimately, we discovered the fishbucket was huge: almost 3 GB. Because of its size, Splunk was spending most of its time searching the fishbucket. We cleaned the fishbucket, and Splunk started behaving normally.
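For anyone who wants to check this on their own indexer, a sketch assuming a default $SPLUNK_HOME (note that cleaning the fishbucket wipes all file-tracking state, so previously indexed files will be picked up again):

# How large has the fishbucket grown? (default location)
du -sh $SPLUNK_HOME/var/lib/splunk/fishbucket

# Splunk must be stopped before cleaning.
# WARNING: this resets file tracking; monitored files will be re-indexed.
$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clean eventdata -index _thefishbucket
$SPLUNK_HOME/bin/splunk start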

Now, the reason the fishbucket got so large is that I was putting a large number of files into a sinkhole. I just wanted Splunk to scoop them up and index them. As it turns out, even though the sinkhole deletes the files, Splunk still records an entry for each one so it will not be re-indexed.
Over enough time, and enough files, the fishbucket grew to that size. Splunk support has my data and is reproducing this so a bug can be logged.
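For context, the input was a batch sinkhole along these lines (the path and sourcetype here are illustrative, not my actual config):

# inputs.conf -- Splunk indexes each file dropped into the directory,
# then deletes it, but still records an entry in the fishbucket.
[batch:///var/spool/splunk/drop]
move_policy = sinkhole
sourcetype = my_batch_feed
disabled = false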

I have also put in an enhancement request for the ability to bypass the fishbucket.

kochera
Communicator

Thanks so far. I opened a case with Splunk as well.

cheers,
Andy


kochera
Communicator

From an OS perspective the socket is open (listening), but as we can see in the logfile, Splunk is blocking. So far we do not have any clue what the problem might be. We tried to trace it, but so far without success.
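One place we are looking (assuming default log paths) is metrics.log, which records per-queue blocking; the queue name hints at which pipeline stage is backed up:

# On the indexer: which queues report blocked=true?
grep "blocked=true" $SPLUNK_HOME/var/log/splunk/metrics.log | tail -20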


jgauthier
Contributor

Your indexer is getting bogged down. I'm not sure I can personally help much at this point. However, the Splunk on Splunk app was helpful in identifying the indexer load.
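For example, a search along these lines against the _internal index charts queue fill over time (the field names are the standard metrics.log ones; adjust as needed):

# Run on the indexer; assumes a default $SPLUNK_HOME and will prompt for credentials.
$SPLUNK_HOME/bin/splunk search 'index=_internal source=*metrics.log group=queue | timechart span=5m max(current_size) by name'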


kochera
Communicator

Yes, that's exactly what we see.


jgauthier
Contributor

Have you checked your splunkd.log?

Specifically for something like this:
12-02-2011 08:14:06.429 -0500 INFO TcpInputProc - Stopping port : 9997
12-02-2011 08:14:06.429 -0500 INFO TcpInputProc - Stopping port : 9998
12-02-2011 08:14:06.429 -0500 WARN TcpInputProc - Stopping all listening ports. Queues blocked for more than 300 seconds
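If the queues stay blocked past the timeout, splunkd shuts its listening ports, which matches the symptom exactly. A quick way to spot it after the fact (default log path assumed):

grep "Stopping all listening ports" $SPLUNK_HOME/var/log/splunk/splunkd.log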

kochera
Communicator

I have the same problem on three out of our six indexers. They suddenly stop accepting connections from universal forwarders. A telnet to port 9997 on localhost does not work either. Our fishbucket is only 2.5 MB in size. The Splunk version is 4.2.5.
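The exact check we use, in case it matters (nc and telnet behave the same here):

# Quick listener check from the indexer itself:
nc -zv localhost 9997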

cheers,
Andy


jgauthier
Contributor

I have opened a ticket on this issue.


pete42
New Member

This is also happening to me on version 4.2.3 (105575), except it is always the same one (we have only 2 indexers).
