Splunk Search

How do I troubleshoot why my forwarders stopped forwarding most data at a certain time?

Communicator

Splunk 6.4.1

We ran into an issue on Tuesday where data from over 99 clients just stopped appearing in search. It looks like some of the data is still reporting; however, 55 EventCodes were displayed when I ran index=* | stats count by EventCode over the time frame before the issue. Only 6 EventCodes appear after that one-minute window, and ever since.
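One way to see exactly when the drop-off happened is to chart the number of distinct EventCodes over time. A minimal sketch (the time range and index=* scope are placeholders; narrow them to your data to keep the search cheap):

```
index=* earliest=-24h@h latest=now
| bin _time span=1h
| stats dc(EventCode) AS distinct_eventcodes count AS events BY _time
| sort _time
```

A sudden fall from ~55 to ~6 distinct_eventcodes in one bucket pinpoints the minute to investigate.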

I have been looking at my forwarders and they are all checking in. The _internal index looks okay, with no errors. The serverclass shows all the clients, and nothing has changed in inputs.conf. splunkd.log also looks clean during that time frame on my search heads and indexers.
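Since the forwarders are checking in, it can help to confirm they are still sending bytes, not just phoning home. A sketch using the indexers' tcpin_connections metrics in _internal (field names follow the standard metrics.log schema):

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen sum(kb) AS total_kb BY hostname
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort - total_kb
```

A forwarder with a recent last_seen but near-zero total_kb is connected yet sending almost nothing, which distinguishes a network problem from a filtering problem.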

I am quite green to Splunk and would appreciate any guidance on how to troubleshoot. I have also opened a Splunk case in parallel to help determine the issue.

Thank you in advance.

0 Karma
1 Solution

Communicator

This was caused by a conflict in transforms.conf: a transform that filtered excess sourcetypes for another index was also matching and discarding this data.
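For anyone hitting something similar: the classic shape of this conflict is a nullQueue filter whose regex or props scope is broader than intended. A hypothetical sketch (stanza names and the regex are illustrative, not the poster's actual config):

```ini
# props.conf -- if this stanza is scoped too broadly (e.g. [host::*]
# or a shared sourcetype), the transform applies to more data than intended.
[my_sourcetype]
TRANSFORMS-drop_excess = drop_excess_events

# transforms.conf -- an over-broad REGEX here silently discards
# every matching event at parse time, before it reaches any index.
[drop_excess_events]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

Checking btool output (splunk btool props list --debug) on the indexers shows which transforms are actually being applied to each sourcetype.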


0 Karma

Motivator
0 Karma

Communicator

Thank you for the reference; unfortunately, that doc does not go deep enough for this case. The data was all there and then stopped showing for over 100 servers within a minute. I suspect this is a backend issue, given the sheer volume of events not showing...

0 Karma

Motivator

It's possible, but other customers have gotten a lot of value out of that doc. One other thing I've done is use crcSalt = in inputs.conf.

0 Karma

Motivator

Sorry, that didn't format right... It should be crcSalt = &lt;SOURCE&gt; (the forum stripped the angle brackets).
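In context, crcSalt goes inside a monitor stanza in inputs.conf. A hypothetical sketch (the path, index, and sourcetype are placeholders):

```ini
# inputs.conf -- crcSalt = <SOURCE> adds the full file path to the
# initial CRC check, so files whose first lines are identical but that
# live at different paths are still picked up and indexed.
[monitor:///var/log/app/*.log]
index = main
sourcetype = app_logs
crcSalt = <SOURCE>
```

One caveat worth noting: with log rotation, a rotated file gets a new path and therefore a new CRC, so crcSalt = &lt;SOURCE&gt; can cause rotated files to be re-indexed as duplicates.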

0 Karma