We have a small Splunk environment with the search head and indexer on the same server. Lately we have been creating more reports and alerts. We usually don't have performance issues, but when the reports run, CPU usage climbs to around 90%. They all run once a day at midnight. Any recommendations or workarounds for this issue?
Note - We cannot add CPU cores.
Stagger your reports so they don't all run at midnight. Put them on cron schedules to run at, say, 23:50, 23:55, 00:05, and so on.
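A minimal sketch of what that staggering looks like in savedsearches.conf (the report names here are made-up examples; use your own stanza names):

```
# savedsearches.conf -- stagger heavy daily reports instead of
# running them all at midnight (report names are hypothetical)
[Daily Error Report]
cron_schedule = 50 23 * * *

[Daily Usage Report]
cron_schedule = 55 23 * * *

[Daily Audit Report]
cron_schedule = 5 0 * * *
```

You can also edit the cron schedule per report in the UI under the report's "Edit Schedule" dialog by choosing "Run on Cron Schedule".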
Also, make sure your searches are optimized (i.e. you are specifying an index in every search, not searching over All Time, not using real-time searches, etc.).
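For example, a search scoped to an index and a bounded time range usually costs far less than an unscoped one (the index, sourcetype, and field names below are hypothetical):

```
# Expensive: no index specified, filtering happens late in the pipeline
error | stats count by host

# Cheaper: explicit index, bounded time range, filter as early as possible
index=web_logs sourcetype=access_combined status>=500 earliest=-24h@h latest=@h
| stats count by host
```

The general rule is to push as much filtering as possible into the initial search clause so the indexer discards events before they reach later pipeline stages.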
The ideal solution would be to build or upgrade your instance with sufficient hardware to handle your workload. Failing that, the only other thing you can do is optimize the report searches.
What is the specification of the server? Does it match the recommended hardware here: https://docs.splunk.com/Documentation/Splunk/7.2.4/Capacity/Referencehardware
I can see a couple of warnings. Would changing these settings improve performance?
1) One or more Splunk instances are running on a host that has one or more resource limits set below official recommendations.
ulimits.open_files (current / recommended): 4096 / 8192
ulimits.user_processes (current / recommended): 1024 / 8192
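On Linux, these limits are typically raised in /etc/security/limits.conf (or via a systemd unit override if splunkd is managed by systemd). A sketch, assuming Splunk runs as a user named "splunk":

```
# /etc/security/limits.conf -- assumes splunkd runs as the "splunk" user
splunk  soft  nofile  8192
splunk  hard  nofile  8192
splunk  soft  nproc   8192
splunk  hard  nproc   8192
```

After changing the limits, restart Splunk from a fresh login session so the new values take effect; you can confirm with `ulimit -n` and `ulimit -u` as the splunk user.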
2) One or more Splunk instances are running on a host that has kernel transparent huge pages enabled. This can significantly reduce performance and is against best practice.
transparent_hugepages.enabled: always
transparent_hugepages.defrag: always
state: bad
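A common way to disable transparent huge pages is shown below; exact paths and the preferred persistent mechanism vary by distribution, so treat this as a sketch rather than a definitive recipe:

```
# One-off, takes effect immediately but is lost on reboot (run as root):
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# To persist across reboots, add the kernel boot parameter
#   transparent_hugepage=never
# to your bootloader configuration (e.g. GRUB_CMDLINE_LINUX in
# /etc/default/grub, then regenerate the grub config),
# or run the echo commands from a boot-time unit/script.
```

Restart splunkd afterwards and the THP warning in Monitoring Console / health checks should clear.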