I am monitoring files in a directory which Splunk pulls into an index when new files show up in the directory. We ran the script which updated the files in the directory, but the index only has old data - no new data is being pulled into the index. This had been working for weeks, but it quit working when our /opt directory filled up. We shut down Splunk, resolved the space issue with /opt, and restarted Splunk. Since the restart, no new data.
How can I troubleshoot this issue to determine why the new data is not being pulled into the index?
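A quick first pass, assuming a standard install under /opt/splunk (adjust SPLUNK_HOME and paths for your environment), is to confirm what the file monitor thinks it is watching, scan splunkd.log for errors, and verify the disk is genuinely no longer full:

```shell
# Assumption: a standard install at /opt/splunk; adjust as needed.
SPLUNK_HOME=/opt/splunk

# List the monitor inputs Splunk believes are active.
$SPLUNK_HOME/bin/splunk list monitor

# Scan splunkd.log for recent errors from the file-monitoring components.
grep -iE "tailing|watchedfile|error" \
    $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -50

# Verify free space on /opt. Note that Splunk pauses indexing when free
# space drops below its minFreeSpace threshold, so "not completely full"
# may still not be enough headroom.
df -h /opt
```

If `splunk list monitor` does not show your directory at all, the problem is in the input configuration rather than in indexing.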
In addition to the above suggestion, I would recommend using btool to re-evaluate your effective inputs.conf.
The order of inputs stanzas matters.
In my case, I uploaded an add-on for Unix and Linux, and because of the way Splunk aggregates inputs, the stanzas I added were never reached: an earlier stanza matched first and routed the data into another index.
In short, if you can't find your data, or you find it in the wrong index, make sure the aggregated stanza order in inputs.conf is not interfering with your intentions.
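The merged stanza order described above can be inspected with btool; a sketch, assuming a default install path at /opt/splunk:

```shell
# Show the effective, merged inputs.conf. With --debug, each line is
# prefixed with the file it came from, which reveals which app's stanza
# wins when several apps define overlapping monitor inputs.
/opt/splunk/bin/splunk btool inputs list --debug

# Narrow the output to monitor stanzas and their index assignments.
/opt/splunk/bin/splunk btool inputs list --debug | grep -iE "monitor://|index"
```

Check that the stanza matching your directory assigns the index you expect, and that no earlier, broader stanza captures the same path.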
I thought of another thing to check...
If the user running Splunk is not root... what permissions does that user have on the monitored files?
I ran into a scenario where I deployed my forwarders as root, but my SH, IDX, and HFs as the splunk user. So, while all of the other boxes were reporting their /var/log/*, my Splunk infrastructure was not sending logs, due to permissions and not just inputs.conf sequencing.
Sorry to muddy the waters... but, it's a variable.
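One way to check the permissions angle: confirm which user splunkd runs as, then test whether that user can actually read the monitored files. A sketch, where the `splunk` user name and `/path/to/monitored/dir` are placeholders for your environment:

```shell
# Which user is splunkd running as?
ps -o user= -C splunkd | head -1

# Can that user traverse the directory and read the files?
# Replace "splunk" and the path with your actual service user and directory.
sudo -u splunk ls -l /path/to/monitored/dir
sudo -u splunk head -n 1 /path/to/monitored/dir/yourfile.log
```

If the `sudo -u` commands fail with permission denied, Splunk can see the input stanza but cannot read the data, which produces exactly the "old data only" symptom.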
On the Splunk instance that is monitoring the files, navigate to the $SPLUNK_HOME/etc directory and edit the file:
Modify the following settings, changing INFO to DEBUG.
Save the file.
Restart the Splunk instance.
Take a look at the log: $SPLUNK_HOME/var/log/splunk/splunkd.log
Look for the names of the files you were monitoring; the debug information should tell you why they were skipped.
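As an illustration of the steps above: splunkd log levels commonly live in $SPLUNK_HOME/etc/log.cfg, and the categories relevant to file monitoring include TailingProcessor and WatchedFile. The exact filename and category names are assumptions here; confirm them against your own deployment before editing.

```shell
# Assumption: log levels are in $SPLUNK_HOME/etc/log.cfg and the
# file-monitoring categories are TailingProcessor and WatchedFile.
SPLUNK_HOME=/opt/splunk

# Back up, then flip the categories from INFO to DEBUG.
cp $SPLUNK_HOME/etc/log.cfg $SPLUNK_HOME/etc/log.cfg.bak
sed -i 's/^category.TailingProcessor=INFO/category.TailingProcessor=DEBUG/' \
    $SPLUNK_HOME/etc/log.cfg
sed -i 's/^category.WatchedFile=INFO/category.WatchedFile=DEBUG/' \
    $SPLUNK_HOME/etc/log.cfg

# Restart, then watch for mentions of the monitored files.
# "yourfile" is a placeholder for one of your actual file names.
$SPLUNK_HOME/bin/splunk restart
grep -iE "tailingprocessor|watchedfile" \
    $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -i "yourfile"
```

Remember to restore log.cfg (or flip the categories back to INFO) once you are done, since DEBUG logging is verbose.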