Thank you in advance for any help here, I'm ripping out my hair trying to figure this one out. About a week ago, our Splunk instance stopped indexing its internal events, i.e. the internal indexes are suddenly showing zero events indexed over all time. I noticed this when checking the 30-day license usage dashboard and seeing that none of the panels were populating with data. After running a couple of quick searches on _internal, _audit, etc., I confirmed that these indexes suddenly have zero events. I checked permissions (I am an admin) and there are no permissions discrepancies between the admin role and the internal indexes. This is also our only instance, so it is not forwarding events off to any other Splunk server.
We did check the system logs of our RedHat server, and for some reason, at some point Splunk started running as root instead of our "splunk" service account. As a result, all the hot internal index dbs were suddenly owned by root, so I thought we had a Linux permissions issue on our hands, but even after changing the ownership back to splunk and ensuring we were indeed running as splunk, we still have no internal events being indexed. I should mention that our splunk service account is pulled from AD and is not local to the machine. Any insight into where else I can look? Like I said, I'm ripping my hair out in confusion, so I greatly appreciate any advice in advance!
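For reference, here is roughly what we ran to verify the running user and the ownership (assuming the default /opt/splunk install path and default index locations; adjust if yours differ):

    # Confirm which user the splunkd process is actually running as
    ps -ef | grep '[s]plunkd'

    # Check ownership of the _internal index's hot/warm buckets (default path)
    ls -ld /opt/splunk/var/lib/splunk/_internaldb/db
    ls -l /opt/splunk/var/lib/splunk/_internaldb/db | head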
Check the permissions and ownership again.
After that, check Splunk's internal logs in $SPLUNK_HOME/var/log/splunk/splunkd.log
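For example (assuming the default /opt/splunk install path), the quickest way to spot permission or indexing problems is something like:

    # Look for recent errors and warnings from splunkd
    grep -iE 'ERROR|WARN' /opt/splunk/var/log/splunk/splunkd.log | tail -50

    # Watch the log live while restarting Splunk
    tail -f /opt/splunk/var/log/splunk/splunkd.log

If splunkd can't even write to that file because of ownership problems, its modification timestamp will be stale, which is a clue in itself.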
Did you manage to resolve this? I am currently facing the exact same issue as described above. Please let me know if you managed to find a solution! Thanks in advance!
As Martin said, you may have some Splunk files owned by root that the splunk user can no longer read.
Make sure you've chown'd all files in your Splunk install from root back to splunk.
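If you haven't already, something along these lines usually does it (assuming the default /opt/splunk install path; stop Splunk first so no new root-owned files appear):

    # Stop Splunk, fix ownership of the whole install, then start it as the splunk user
    /opt/splunk/bin/splunk stop
    chown -R splunk:splunk /opt/splunk
    sudo -u splunk /opt/splunk/bin/splunk start

    # Optionally re-register boot-start so init starts it as splunk, not root
    /opt/splunk/bin/splunk enable boot-start -user splunk

The -user flag on enable boot-start is what keeps a reboot from putting you back into the same root-owned situation.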
Oh yes don't worry we definitely did that 🙂