I'm currently looking at increasing the performance of our Splunk Search Head. I'm running a number of apps at the request of my network engineer. However, I'm noticing a few things:
Max Concurrent Searches is at 12. It appears to be limited by the indexer (4 cores).
Accelerating data models isn't hitting my search head hard, but acceleration is falling behind, possibly due to limited search concurrency and skipped searches.
The InfoSec and Palo Alto apps run about an hour behind and are incredibly slow. It's kind of frustrating.
I should mention that I'm currently running the Splunk indexer and search head on separate servers in Azure. Things seem decent in Azure, and I am increasing the instance sizes. But some other things I'm thinking of doing:
Increasing the maximum concurrent searches on the indexer and search head from 3 to 4. I'm fairly optimistic the servers can handle it.
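For what it's worth, search concurrency is usually governed in limits.conf; a minimal sketch of the kind of change I mean, assuming the usual formula (max_searches_per_cpu × number_of_CPUs + base_max_searches) applies to your version — verify the stanza names against the docs for your release:

```ini
# limits.conf (local), on both indexer and search head
[search]
# Historical concurrency is roughly:
#   max_searches_per_cpu * number_of_cpus + base_max_searches
# Raising the per-CPU value from the default of 1 increases the ceiling,
# but only if the cores can actually sustain the extra load.
max_searches_per_cpu = 4
base_max_searches = 6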
Increasing the Azure instance sizes. I'm currently using a B4ms for the indexer and a B8ms for the search head. I realize that might not be the best configuration... pardon my previous ignorance on these topics.
Before I invest in these, I'd love to get the Splunk Community's input on all of this. I admit, Splunk is becoming very app-heavy, which I'm not pleased about, so any ways of increasing performance are appreciated.
Oh, one last thing. I'm still fairly new to data modeling. Though I've worked with the CIM, I haven't tagged everything. I'm wondering if limiting the tags to specific data models would be a great benefit to performance, or just harm it.
Hi! If you want to solve your performance problems, you should start by adopting Splunk's reference hardware requirements. There are pages dedicated to that in the docs. For Azure deployments, there is a tech brief that sheds light on good practices and even advises which set of instances is appropriate depending on the size of your deployment.
Regarding CIM data models, you should enable acceleration only on the data models you are actually using and restrict the indexes each data model accelerates data from. The CIM app has a macro for each data model where you can place the specific indexes to look for tagged data. This will reduce the amount of data the acceleration searches have to look into, lowering run time and the chance you'll end up with skipped searches.
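As an illustration, those per-model macros can be overridden in the CIM add-on's local macros.conf (or via Settings → Advanced search → Search macros). A sketch constraining the Network_Traffic model — the index name "pan_logs" here is just an example, substitute whatever index your Palo Alto data actually lands in:

```ini
# Splunk_SA_CIM/local/macros.conf
# Restrict the Network_Traffic data model to a specific index
# so its acceleration searches don't scan every index for tags.
[cim_Network_Traffic_indexes]
definition = (index=pan_logs)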
Regarding the "hour behind", I would make sure all your systems have NTP configured and are using it, and check the time zone each is set to. I've run into customers who had no NTP configured at all, making it impossible to properly correlate data, or who had different time zones set up.
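A quick way to tell whether that "hour behind" is a timestamp/time-zone problem rather than scheduler lag is to compare event time against index time. A sketch (again assuming a hypothetical "pan_logs" index; adjust to yours):

```
index=pan_logs earliest=-15m
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) max(lag_seconds) by sourcetype
```

If the lag hovers around 3600 seconds, you're almost certainly looking at a time-zone misconfiguration on the source or the sourcetype rather than a performance problem.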
------------ Hope I was able to help you. If so, some karma would be appreciated.