When I upgraded my home (free) Splunk instance from 4.2 to 4.2.1, it stopped indexing a number of files in /var/log, most notably "/var/log/messages". It continued to index "/var/log/maillog" and several others, but a fair number of files in /var/log simply stopped producing new indexed events.
The data input is defined as the entire directory "/var/log" with a whitelist and a blacklist. I couldn't see anything wrong with the whitelist, but I cleared it anyway; no change. The blacklist contained only "lastlog" (a binary file).
The final indexed record is from just minutes before the upgrade. I reverted to 4.2, but that did not fix the problem, so I re-upgraded to 4.2.1.
I have searched the "_internal" index for activity involving "/var/log/messages" to look for any reason why new data is not being indexed, but the only records I can find there are my own search commands.
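For reference, a search along these lines scopes `_internal` down to splunkd's own log events, which keeps the records of your own search activity out of the results (the `source` wildcard is an assumption based on the default 4.x log layout):

```
index=_internal source=*splunkd.log* "/var/log/messages"
```

If that still returns nothing for the stalled file, the tailing processor may never have logged a decision about it at all.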
The files in /var/log are rotated and compressed weekly on Sunday. Since the upgrade (4/18), the file grew with new entries until Sunday (4/24), then a completely new file was started, but none of this data is in the index.
I keep 4 weeks of rotated log files in /var/log, so if the indexing can be restarted somehow, all the missed data should be acquired.
I should mention that when I previously upgraded from 4.1.7 to 4.2, all my indexed data appeared to get blown away and I started over as if it were a new install.
Can you hit https:// ? That should be a good starting point to see what's going on.
There's another way to see what's happening: check out this blog entry by Amrit: http://blogs.splunk.com/2011/01/02/did-i-miss-christmas-2/
Basically, we just need to figure out whether Splunk is actually reading the file, or whether for some reason it marked the file as unreadable (due to a CRC issue, etc.).
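One quick way to check which of the two it is: the tailing processor logs its decisions to $SPLUNK_HOME/var/log/splunk/splunkd.log. A minimal sketch of what to grep for, using illustrative sample lines (the exact message wording below is an assumption, not copied from a real 4.2.1 system):

```shell
# Sample entries in the style of splunkd.log (illustrative only; real
# 4.2.1 message wording may differ).
cat > /tmp/sample_splunkd.log <<'EOF'
04-25-2011 10:01:02.123 INFO  TailingProcessor - Adding watch on path: /var/log.
04-25-2011 10:01:03.456 WARN  TailingProcessor - File will not be read, seekptr checksum did not match (file='/var/log/messages').
04-25-2011 10:01:04.789 INFO  TailingProcessor - Ignoring path due to blacklist: /var/log/lastlog.
EOF

# On a real install, point this at $SPLUNK_HOME/var/log/splunk/splunkd.log
# to see why the tailer skipped the file (checksum/CRC mismatch, blacklist, ...).
grep -iE 'checksum|crc|blacklist' /tmp/sample_splunkd.log | grep '/var/log/messages'
```

If a checksum/CRC line turns up for the file, that usually points at the `crcSalt` setting in inputs.conf; if nothing turns up at all, the input may not be watching the file in the first place.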