Hello,
Yesterday I noticed a substantial lag in event indexing to the default index (main). When inspecting the processes on my machine, I saw a single splunkd process running at 100% CPU.
I tried restarting Splunk several times, but splunkd goes right back to 100% CPU and stays there.
I've opened a ticket with support@splunk.com, but there has been no response so far.
After deleting the index and re-indexing, things went back to normal.
However, 24 hours later I'm encountering the same problem.
The machine Splunk is running on has 16 CPUs and 8 GB of RAM. Only ~200 MB of data is indexed daily, and there is no significant search activity either.
I would appreciate any help with this.
Oren.
Issue solved.
The problem was that Splunk was monitoring more than 4,000 log files that had already been indexed.
In inputs.conf, we simply added ignoreOlderThan = 2d, which solved the problem.
The inputs.conf file lives on the host (in that host's TA), not on the indexer, so the change applies only to that specific host.
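For reference, a minimal sketch of what the relevant inputs.conf stanza might look like; the monitored path and index name below are placeholders, and only the ignoreOlderThan line reflects the actual change described above:

[monitor:///var/log/app/*.log]
index = main
# Skip files whose modification time is older than 2 days,
# so already-indexed historical files are no longer tracked.
ignoreOlderThan = 2d

With this set, the monitor input stops tracking the thousands of old, already-indexed files, which is what was driving the splunkd process to 100% CPU.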
Hi,
I assume this is not a Splunk indexer, right?
The inputs.conf file exists on the host, and this change needs to be made at the host level; it will not affect indexing.
Thanks for the solution.
I made the changes and the problem went away.