Is there a tool available that will bombard Splunk with different types of search queries such as dense, sparse, rare etc. and return the result as how much time it took and how many events it returned?
I am planning to get this type of result: https://docs.splunk.com/Documentation/Splunk/7.2.3/Capacity/Referencehardware#Maximum_performance_ca...
I am using Splunk version 7.2.3. I tried the SplunkIt tool, but it was throwing Selenium errors. Any suggestions?
If you create your own savedsearches as suggested by @jessec_splunk, you can view the stats in the audit log. For example:
index=_audit savedsearch_name=* savedsearch_name!="" info=completed host=<regex for your heads> | stats p90(total_run_time) by savedsearch_name
A few ways you can bombard Splunk with searches and measure times:
You can script curl calls against Splunk's REST API for searches. For the API details, see https://docs.splunk.com/Documentation/Splunk/latest/RESTUM/RESTusing. Essentially, you can dynamically create concurrent sessions of calls, as stated in this answer.
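As a rough sketch of that scripting approach, here is a minimal Python version (assumptions: the management interface on localhost:8089, placeholder admin credentials, and the standard `/services/search/jobs` endpoint; adjust all of these for your deployment):

```python
# Sketch: fire concurrent Splunk searches via the REST API and time them.
import base64
import ssl
import time
import urllib.parse
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE = "https://localhost:8089"   # assumed management port
AUTH = ("admin", "changeme")      # placeholder credentials -- change these

def search_payload(query):
    """Build the form body for POST /services/search/jobs."""
    if not query.lstrip().startswith(("search", "|")):
        # The REST API expects an explicit leading search command.
        query = "search " + query
    return urllib.parse.urlencode({
        "search": query,
        "exec_mode": "oneshot",   # block until the search completes
        "output_mode": "json",
    }).encode()

def timed_search(query):
    """Run one search and return (elapsed_seconds, raw_response_bytes)."""
    req = urllib.request.Request(BASE + "/services/search/jobs",
                                 data=search_payload(query))
    token = base64.b64encode(("%s:%s" % AUTH).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    # Splunk ships a self-signed cert; skip verification for lab use only.
    ctx = ssl._create_unverified_context()
    start = time.monotonic()
    with urllib.request.urlopen(req, context=ctx) as resp:
        body = resp.read()
    return time.monotonic() - start, body

def bombard(queries, concurrency=4):
    """Launch the queries with a bounded thread pool and collect timings."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_search, queries))
```

You can vary `concurrency` and the mix of dense/sparse/rare queries passed to `bombard()` to shape the load, then parse the event counts out of the JSON responses.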
Use JMeter to issue the HTTP API calls. You will have finer control over concurrency (so you can adjust the load), and JMeter produces the performance report for you. And of course, because you are already using Splunk, you can simply send the JTL files (JMeter's result files) to Splunk and let it visualize everything for you.
You can also create your own saved searches (dense, rare, and sparse variants). These saved searches run on cron schedules you specify (say, once every minute for rare, once every 5 minutes for dense, etc.), so they are triggered automatically and their runs are logged in Splunk's _internal index. You can then query that index for response times and event counts with something like
search index=_internal source=*scheduler.log savedsearch_name=myperftest*
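To make that concrete, a savedsearches.conf along these lines would set up the scheduled searches (the stanza names, queries, and schedules here are illustrative, not from the original answer):

```
# savedsearches.conf -- hypothetical perf-test searches
[myperftest_rare]
search = index=main sourcetype=access_combined "some-very-rare-token"
cron_schedule = * * * * *
enableSched = 1

[myperftest_dense]
search = index=main sourcetype=access_combined status=200
cron_schedule = */5 * * * *
enableSched = 1
```

The scheduler's log lines include run-time and result-count fields (exact field names can vary by version), so the query above can be extended to something like:

```
index=_internal source=*scheduler.log savedsearch_name=myperftest* status=success
| stats avg(run_time) p90(run_time) avg(result_count) by savedsearch_name
```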