Getting Data In

Why is my universal forwarder not forwarding all data in monitored directories?

jwquah
Path Finder

Hi all,

I'm testing a Universal Forwarder deployment - a really simple one. Basically, I have a directory on serverX that I'd like to forward to a Unix Splunk instance sitting on another serverY. So I set up the Splunk UF on serverX, and the setup itself went fine.

However, I wanted to verify that it's forwarding all the data from the directory correctly - so I made multiple copies of the same directory and forwarded each copy to a different index (e.g. dirA -> indexA, dirB -> indexB, dirC -> indexC). What's interesting is that the first index generally receives everything, but the other indexes do not seem to be receiving all the data.
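
The inputs look roughly like this (paths and index names here are illustrative, not my exact config):

[monitor:///data/dirA]
index = indexA

[monitor:///data/dirB]
index = indexB

[monitor:///data/dirC]
index = indexC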

For example:
indexA -> 200 events
indexB -> 150 events
indexC -> 0 events

I've confirmed that the data in indexB IS also in indexA, but Splunk isn't indexing all of it. There are also no apparent errors in the UF's logs. Any ideas or leads would be appreciated.

Thank you.

1 Solution

jeffland
SplunkTrust

As you already mentioned, Splunk uses CRCs when reading files. I suspect your problems arise from the fact that a copy of a file has the same CRC as the original, and though I couldn't find a definitive answer on this so far, I would expect the list of known CRCs to be applied globally and not per input or per index. I'd be very happy if anyone could point to further documentation on that, though.

In the meantime, what you should try is adding the crcSalt option to inputs.conf; see the documentation copied below:

crcSalt = <string>
* Use this setting to force Splunk to consume files that have matching CRCs (cyclic redundancy checks). (Splunk only performs CRC checks against the first few lines of a file. This behavior prevents Splunk from indexing the same file twice, even though you may have renamed it -- as, for example, with rolling log files. However, because the CRC is based on only the first few lines of the file, it is possible for legitimately different files to have matching CRCs, particularly if they have identical headers.)
* If set, <string> is added to the CRC.
* If set to the literal string <SOURCE> (including the angle brackets), the full directory path to the source file is added to the CRC. This ensures that each file being monitored has a unique CRC. When crcSalt is invoked, it is usually set to <SOURCE>.
* Be cautious about using this attribute with rolling log files; it could lead to the log file being re-indexed after it has rolled. 
* Defaults to empty.

This takes care of CRCs that are already known because of other identical files. If that is indeed the issue here, you should get every file indexed as a duplicate in each index you specified.

To be honest, I think your method of checking whether Splunk works is not really thought through. No offense, but one of the important concepts in Splunk is avoiding the indexing of duplicate data, so it's probably not a good idea to test Splunk's functionality with duplicate data, now is it? You could simply compare the raw log files' content with what Splunk indexed and be happy that it works 😉
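
For instance, assuming one event per line in the source files (adjust for multi-line events; paths are examples), you could compare counts like this:

# on the forwarder host: total lines across the monitored files
wc -l /data/dirA/*.log

# in Splunk's search bar: total events indexed from that input
index=indexA | stats count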


kgrigsby_splunk
Splunk Employee

Splunk forwarder not forwarding all data

Problem Summary:

A customer was running two indexers. One failed, and not all logs were being forwarded to the remaining active indexer. The customer checked the logs submitted for the indexer and for a number of forwarders, and the issue appeared to originate from only one forwarder. The other forwarders were reportedly working fine.

Working hypotheses:

  1. One working hypothesis is that a network issue (possibly a firewall) is preventing data from getting from the forwarder to the indexer.

  2. A secondary hypothesis is that there is a load-balancing issue, which may be addressed with the 'forceTimebasedAutoLB' setting in the forwarder's outputs.conf.

How to troubleshoot:

(1.) If the deployment is either Linux or Windows, telnet from the forwarder host to the receiver to confirm whether a connection can be established, as seen below:

telnet xx.xx.xx.xx 9996 (dest. IP & dest. port; if a "connection failed" error is thrown, there is typically a firewall impeding traffic)

  • If the deployment is Linux, run 'iptables -L -n' to list the current set of firewall rules, including source and destination IP addresses and ports. There may be a few rules listed even if your own firewall rules haven't been applied; look for lines matching your given rulesets to get an idea of what rules are active on the system.

  • If the deployment is Solaris, run 'netstat -un -P tcp' or 'netstat -un -P udp'.
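
Put together, a quick connectivity check from the forwarder host might look like this (the IP and port are placeholders for your receiver):

# confirm the receiving port is reachable; a connection failure usually
# points to a firewall or a receiver that isn't listening
telnet 10.0.0.5 9996

# Linux: list the current firewall rules (numeric output)
iptables -L -n

# Solaris: per-protocol socket statistics
netstat -un -P tcp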

(2.) If no firewalls are found, enable the 'forceTimebasedAutoLB' setting in the HF/UF outputs.conf and confirm that traffic is being properly load-balanced between receivers.
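
A sketch of the corresponding outputs.conf on the forwarder (the target group name and indexer addresses are placeholders):

[tcpout:my_indexers]
server = 10.0.0.5:9996, 10.0.0.6:9996
# switch receivers on a time schedule, even mid-stream, so data keeps
# flowing to whichever indexer is still up
forceTimebasedAutoLB = true
autoLBFrequency = 30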

Resolution:

  1. We confirmed that there were no firewall rules preventing data from reaching the active indexer after the primary indexer outage.

  2. I engaged in a WebEx call with the customer in which the 'forceTimebasedAutoLB' setting was discussed, with the goal of having the forwarder send data to an active indexer whenever an indexer experiences an outage.

  3. We tested the 'forceTimebasedAutoLB' parameter in outputs.conf by stopping the primary indexer, and logs that previously were not being received now arrived as expected.

  4. The customer subsequently confirmed that the issue was resolved.


KenWhitesell
Path Finder

My approach with issues like this is always to start with the basics. ("When you hear hoof beats, think horses, not zebras.")

In this case it involves double-checking all the trivial stuff:

  • Are your inputs defined identically for all of the directories? (When you talk about monitoring a directory, I'm assuming there are multiple files in that directory, so you want to ensure that your file name patterns are identical.)
  • Are there any permissions or access-rights issues? (Since you're copying files - possibly with an ID different from the one used by Splunk - it's worth checking that the Splunk ID has the necessary rights to access all the files.)
  • Are there any limits defined that might be restricting the amount of data being forwarded? (See the sketch after this list.)
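
For the last two points, a couple of quick checks (paths and the 'splunk' user name are examples; maxKBps is the forwarder's thruput limit in limits.conf, which defaults to 256 KB/s on a UF):

# can the user running splunkd actually read the copied files?
ls -l /data/dirB/
sudo -u splunk head -n 1 /data/dirB/sample.log

# $SPLUNK_HOME/etc/system/local/limits.conf - raise or remove the
# forwarder's default 256 KB/s thruput cap (0 = unlimited)
[thruput]
maxKBps = 0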

(Don't ask me why I know to look for stuff like this. 🙂)


jwquah
Path Finder

@Ken, I've checked for those and yep, they're all correct.

Jeffland got it correct - it would seem the CRC check is applied globally (across indexes, rather than within an index itself). I actually read the same documentation Jeff quoted, sent by Splunk support, cleaned the fishbucket for my forwarder, and set the forwarder up again with crcSalt = <SOURCE> for each directory I was monitoring, and it then forwarded even the duplicates.
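
For anyone hitting the same thing, a rough sketch of what cleaning the fishbucket for a single file can look like (the path is an example, and btprobe modifies the forwarder's internal state, so try it on a test instance first):

$SPLUNK_HOME/bin/splunk stop

# reset the fishbucket record for one monitored file so it gets re-read
$SPLUNK_HOME/bin/splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file /data/dirB/sample.log --reset

$SPLUNK_HOME/bin/splunk start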

Thank you all!

gabec09
Explorer

How many indexers do you have?


sk314
Builder

Splunk performs a CRC check on a small portion at the start of the file to check whether it has already been indexed. Maybe this is why it is not being re-indexed. Try changing the file contents or using different files.
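
For example, prepending a distinct line to each copy changes the bytes the CRC is computed over (file names here are illustrative):

# give the copy a distinct header so its CRC differs from the original
{ echo "# copy for indexB"; cat /data/dirA/sample.log; } > /data/dirB/sample.log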


jwquah
Path Finder

That was one of my thoughts as well, but I was under the impression that CRC checks occur within an index, not across indexes. I wasn't able to find any documentation confirming this - does anyone have a definitive answer?

If Splunk does perform CRC checks across indexes, wouldn't the scenario involving indexA & indexB be off too? indexA has 200 events, while indexB has 150 events. The 150 events in indexB CAN all be found in indexA, but Splunk just isn't re-indexing the remaining 50.


sk314
Builder

True. It could also be related to timing, i.e. when monitoring for each file/index started. Having said that, you don't really need confirmation if you test with different sources. Your real-world use case won't ever have the same data! My 2 cents.


jwquah
Path Finder

That particular Splunk instance has 27 indexes in total (including _audit / history / main / etc.) - that shouldn't bring about such weird forwarder behavior, should it?


gabec09
Explorer

I meant how many indexers you have, not individual indexes. I had a problem where I wasn't searching my cluster correctly and was always missing results or getting duplicates.
