
Splunk Universal Forwarder not monitoring files as expected?

steveirogers
Communicator

I have a directory with a list of files as follows:

  1. /var/log/xxxxx/job01_SubsLoadAdHocBC01.log
  2. /var/log/xxxxx/job01_SubsLoadDataChangeBC01.log
  3. /var/log/xxxxx/job01_SubsLoadDistributionChangeBC01.log
  4. /var/log/xxxxx/job01_SubsLoadMarketBC01.log
  5. /var/log/xxxxx/job01_SubsPrepareBC01.log
  6. /var/log/xxxxx/job01_SubsQuickJobsBC01.log
  7. /var/log/xxxxx/ScheduleSplit.log
  8. /var/log/xxxxx/job02_SubsLoadAdHocBC02.log
  9. /var/log/xxxxx/job02_SubsLoadDataChangeBC02.log
  10. /var/log/xxxxx/job02_SubsLoadDistributionChangeBC02.log
  11. /var/log/xxxxx/job02_SubsLoadMarketBC02.log
  12. /var/log/xxxxx/job02_SubsPrepareBC02.log
  13. /var/log/xxxxx/job02_SubsQuickJobsBC02.log

My inputs.conf file:

[monitor:///var/log/xxxxx]
index = test
crcSalt = 
sourcetype = test
disabled = false
blacklist = (/nodeagent|/dmgr|/ffdc)
whitelist  = (ScheduleSplit\.log$|job*\.log$)

However, none of the files are being indexed. There are events in those files for today.
I added the "crcSalt = " parameter, but that did not help.

The only relevant messages I see in the splunkd.log are as follows:
0-14-2014 15:41:56.843 -0400 WARN ulimit - Core file generation disabled

I am not sure what that means.

Any help would be appreciated. Is this a problem with the "whitelist" statement?

Steve Rogers

1 Solution

kml_uvce
Builder

The problem is in the whitelist; it should be

whitelist = (ScheduleSplit.log|Job.*.log)$

put ""

steveirogers
Communicator

Thanks to everyone who contributed answers. The "list monitor" command showed that the files in question were in fact being monitored; the problem was in the search. The files were going to a specific index that was not searched by default by the "user" role. I added that index as a default for the "user" role, and the logs are now showing.
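
For anyone who hits the same thing, here is roughly what that role change can look like in authorize.conf on the search head (the same change can also be made in the UI under Settings > Access controls > Roles; "test" is just the index name from my inputs.conf above):

[role_user]
# append the index to the role's default searched indexes (semicolon-separated);
# "main" is shown only as a placeholder for whatever was already in the list
srchIndexesDefault = main;test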

Thanks again to everyone who contributed answers.


jeremiahc4
Builder

Shouldn't the crcSalt have something on the other side of the "=" also? For instance, on my servers with extremely rapid log rolling I use "crcSalt = <SOURCE>" to help avoid missing/dropping files.
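
In inputs.conf that looks something like this (the angle brackets are part of the literal value):

[monitor:///var/log/xxxxx]
# adds the full path of each file to its CRC, so files whose first bytes are identical are not treated as the same file
crcSalt = <SOURCE>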


steveirogers
Communicator

Must be because of the brackets. I have the value of "<SOURCE>" on the crcSalt parameter.
Thanks.


jeremiahc4
Builder

Ah, it was getting eaten by the Splunk Answers gremlins 🙂


steveirogers
Communicator

Jeremiah - yes, I have crcSalt = <SOURCE>.
I am not sure why it was dropped from the initial post.


steveirogers
Communicator

Is there an option / utility in Splunk which shows which files / directories are being monitored?
I looked at the "S.o.S." app, but could not see what I am looking for there either.
Thanks.


jeremiahc4
Builder

I use the command line for this... navigate to your Splunk install directory, then type "./bin/splunk list monitor"

Edit - Do this on the host where the inputs.conf is defined... presumably on your forwarder.
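
Something like this on the forwarder (the install path below is just the usual default for a Universal Forwarder; the command talks to the local management port, so it may prompt for Splunk credentials):

# on the Universal Forwarder host (default install path shown)
cd /opt/splunkforwarder
./bin/splunk list monitor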


steveirogers
Communicator

Thanks very much Jeremiahc4. The "list monitor" command showed me what I was looking for.


chanfoli
Builder

Something tells me to try this whitelist:

whitelist  = (ScheduleSplit\.log|job.*\.log)$
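
A quick way to sanity-check that pattern against the directory listing before restarting the forwarder (plain grep -E is only an approximation of how the whitelist regex is applied, but it is close enough for these file names):

ls /var/log/xxxxx | grep -E '(ScheduleSplit\.log|job.*\.log)$'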

steveirogers
Communicator

Chanfoli - this did not work. Files are still not being picked up. Thanks.


somesoni2
Revered Legend

Try with this

[monitor:///var/log/xxxxx/*]
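
i.e. the same stanza as in the question, just with a wildcard on the monitor path - roughly like this (the whitelist shown here is the corrected pattern from the accepted answer rather than the original one):

[monitor:///var/log/xxxxx/*]
index = test
sourcetype = test
disabled = false
blacklist = (/nodeagent|/dmgr|/ffdc)
whitelist = (ScheduleSplit\.log|job.*\.log)$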