I'm running Splunk 6.2 with a setup of 1 search head and 3 indexers.
Users have been complaining for a while about slow response during searches.
The instances run on VMs, each with 8 CPU cores and 16 GB of RAM. Splunk doesn't seem to use most of that capacity, so maybe there's a setting I can change so that it will.
The indexers retain data for up to 2 months, and together they ingest less than 10 GB per day.
Any suggestions? How do I test the speed, and how do I make it faster?
The critical factor in a virtual environment is I/O rate. Each indexer should be on a separate host system to avoid creating too much contention for disk. Also, the indexers should not be on the same host as a VM running a database server, for the same reason.
How many users are running searches? With 8 CPUs on your SH, you can only support a limited number of simultaneous searches (the limit scales with core count). Anything beyond that will queue and appear slow.
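For reference, Splunk derives the concurrent historical-search ceiling from the core count. Assuming the 6.x defaults, the formula is max_searches_per_cpu × number_of_cpus + base_max_searches, so an 8-core search head allows 1 × 8 + 6 = 14 concurrent historical searches before jobs queue. The relevant settings live in limits.conf (the values below are the shipped defaults, shown for illustration only; raising them without more hardware usually just moves the bottleneck):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# Concurrency ceiling = max_searches_per_cpu * number_of_cpus + base_max_searches.
# These are the shipped defaults; an 8-core search head gets 1*8 + 6 = 14.
max_searches_per_cpu = 1
base_max_searches = 6
```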
Thanks for your help!
Each indexer is indeed on a separate host, and none of those hosts runs a database.
There are about 3-4 users running searches at the same time.
So is each CPU dedicated to one search? I've never seen the CPUs even reach 100%. Should I add more CPUs?
Should I use more RAM, by the way? If so, how?
Is there a way to run a performance test on Splunk?
You have enough CPU so no need to add more. More memory is rarely a bad thing, but I doubt that's much of a factor here. Disk I/O is key. Find out what kind of disk you're using and what sort of applications share it. Any application that hits the disk hard will slow down the indexers.
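From inside a Linux guest you can at least see whether the kernel thinks the disks are rotational and how much time the CPUs spend waiting on I/O. A quick sketch (device names and paths will differ on your VMs, and inside a guest these numbers reflect the virtual devices, so cross-check with what the VM admin sees on the host):

```shell
# 1 = rotational (spinning disk), 0 = SSD/flash, as reported by the kernel.
for dev in /sys/block/*/queue/rotational; do
    echo "$dev: $(cat "$dev")"
done

# The 6th field of the "cpu" line in /proc/stat is cumulative iowait ticks;
# sample it twice a few seconds apart to see whether I/O wait is climbing fast.
awk '/^cpu /{print "iowait ticks:", $6}' /proc/stat
```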
If you haven't already, install the Splunk on Splunk (SoS) app to see how your system is performing.
I do have Splunk on Splunk on the search head, but to be honest I didn't understand how to use it, or at least what to look for in this case.
We have NetApp storage - do you need the specific model?
SOS will confirm CPU and memory are not bottlenecks. That leaves disk, which SOS does not address. Talk to your VM admin. He should be able to look at his console to see what kind of disk rate you're getting. He should also be able to tweak the config to improve performance.
I have not done that before so I can't really help with performance measurements.
All good recommendations above. If you would like to pull specific performance stats for your searches, they're available via a REST API call. Just run it in your search bar.
If you have admin access to your servers, you can run Bonnie++ to gather server-side performance stats.
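If Bonnie++ isn't installed, a quick sequential-write check with dd gives a rough number. The path and size below are just examples; point it at the volume that actually holds your index buckets:

```shell
# Write 64 MB and force it to disk; conv=fdatasync flushes to storage
# so the reported rate reflects real disk throughput, not the page cache.
dd if=/dev/zero of=/tmp/splunk_io_test bs=1M count=64 conv=fdatasync
rm -f /tmp/splunk_io_test

# If Bonnie++ is available, run a fuller benchmark (as a non-root user):
command -v bonnie++ >/dev/null && bonnie++ -d /tmp -r 1024 || echo "bonnie++ not installed"
```

dd prints the throughput on its last line; a rate far below what the NetApp aggregate is rated for suggests contention on the shared storage.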
Start with a search for long jobs like this:
| rest /services/search/jobs | sort 0 - performance_command_addinfo_duration_secs
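A variant that surfaces overall job runtime per user can be easier to read at a glance. sid, author, and runDuration are standard fields on the search/jobs endpoint, though exact field availability can vary by version:

```
| rest /services/search/jobs
| sort 0 - runDuration
| table sid author label runDuration dispatchState
```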
Also keep an eye on the Monitoring Console.