Hi. This is regarding Splunk 5.0.11 Universal Forwarder and Heavy Forwarder.
We rebooted 2 Heavy Forwarders today (after updating glibc), and now we are only seeing data from 1 of the 2 files that each of our Universal Forwarders reads and forwards.
I know the data is not getting to the Indexer (or at least is not getting indexed).
Is there a best practice for determining whether data is at least getting TO the Heavy Forwarder?
I did try adding crcSalt= to the inputs.conf stanza on the Universal Forwarder that monitors the file we're missing, just in case a file-tracking issue had cropped up.
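For reference, the monitor stanza looks roughly like this with crcSalt added; the path, sourcetype, and salt value here are placeholders rather than our actual settings:

    # inputs.conf on the Universal Forwarder (path and sourcetype are hypothetical)
    [monitor:///var/log/app/missing_file.log]
    sourcetype = my_app_log
    # salt the CRC with the full file path so the file is tracked as new
    crcSalt = <SOURCE>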
Thanks for any suggestions to help us get started with this one...
I ended up working with Splunk Support on this one. For reasons neither of us could pinpoint, rebooting the server that the Heavy Forwarder ran on broke timestamp parsing for the events from that one logfile.
Explicitly specifying the timezone resolved the issue; presumably the mis-parsed timestamps had been landing the events outside the time range we were searching.
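For anyone hitting something similar: one way to specify the timezone explicitly is the TZ setting in props.conf where parsing happens (the Heavy Forwarder in our setup). The sourcetype name and timezone below are placeholders for whatever applies in your environment:

    # props.conf on the Heavy Forwarder (sourcetype and TZ value are hypothetical)
    [my_app_log]
    TZ = America/New_York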
A little more info: I saw references to my missing sourcetype in the metrics.log on one of the source servers. Also, hitting /services/admin/inputstatus/TailingProcessor%3AFileStatus on a source server showed the files I am looking for as having been read to 100% completion.
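In case it helps anyone else, this is roughly how I queried that endpoint; the credentials are hypothetical and the management port is whatever your environment uses (8089 is the default):

    # query splunkd's management port for the tailing processor's file status
    # (-k skips certificate validation; adjust to your environment)
    curl -k -u admin:changeme \
        'https://localhost:8089/services/admin/inputstatus/TailingProcessor%3AFileStatus'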
I also checked per_sourcetype_thruput on the indexers, and it shows no difference in volume from previous days. However, I absolutely do NOT see any indication in the index itself that the data is present. I even searched index=* just to be sure.
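For what it's worth, the thruput check was roughly this search against the internal metrics (the series name is a placeholder for the missing sourcetype):

    index=_internal source=*metrics.log group=per_sourcetype_thruput series=my_app_log
    | timechart span=1h sum(kb) AS kb_indexed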
Now I'm really puzzled....