I have a single-instance Splunk environment with a 100 GB license, and there is another single instance sharing the same license. Combined, we ingest around 6 GB of data per day. Instance A is very fast, but instance B is very slow (both have the same resources).
All searches and dashboards on instance B are really slow. For instance, a simple stats search over 24 hours takes 25 seconds, compared to 2 seconds on the other instance. The job inspector shows:
dispatch.evaluate.search = 12.84
dispatch.fetch.rcp.phase_0 = 7.78
I want to know where I should start checking on the host, and what steps to take.
Is the data the same on both instances, or different?
What is the search in each case?
Take a look at the Job Inspector and the job properties:
https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-clara-fication-job-inspector.html
Have a look at the phase 0 job properties in each case, and also compare the LISPY in each job's search.log (the "base lispy" line shows the index-time predicate Splunk used to fetch events).
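To compare the two instances side by side, you can also pull recent job statistics over REST. A minimal sketch, run on each instance — runDuration, scanCount, and eventCount are standard fields on the search/jobs endpoint, but verify them on your version:

| rest splunk_server=local /services/search/jobs
| table sid, author, title, runDuration, scanCount, eventCount
| sort - runDuration

If the same search scans a similar number of events on both instances but runDuration is far higher on B, the bottleneck is the host rather than the search itself.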
The best place to start is by analyzing the overall health of each instance.
Use the Monitoring Console (https://docs.splunk.com/Documentation/Splunk/latest/DMC/DMCoverview) to check the health of Splunk.
Also use OS-level tools to troubleshoot system performance: vmstat, iostat, top, and lsof. Look for processes hogging CPU or memory, and for high iowait times on your disk array. You can run a similar check from inside Splunk, as in the sketch below.
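The _introspection index records host-wide resource usage by default on full Splunk Enterprise installs. A minimal sketch — the data.* field names come from the Hostwide component of the resource-usage input, so verify them on your version:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval cpu_used_pct = 100 - 'data.cpu_idle_pct'
| timechart avg(cpu_used_pct) AS avg_cpu_pct avg(data.mem_used) AS avg_mem_used_mb avg(data.normalized_load_avg_1min) AS avg_load_1min

Run it over the same time window on both instances; if B shows sustained high CPU or load average while A does not, the host is struggling regardless of what your users are searching.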
Here is a good explanation of how to calculate search concurrency limits:
https://answers.splunk.com/answers/270544/how-to-calculate-splunk-search-concurrency-limit-f.html
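In short, the historical-search concurrency limit defaults to max_searches_per_cpu x number_of_cores + base_max_searches, so a 16-core host allows 1 x 16 + 6 = 22 concurrent historical searches. A quick sketch to work it out from the instance itself, assuming the default limits.conf values (max_searches_per_cpu = 1, base_max_searches = 6) — the REST endpoint and the numberOfCores field are standard:

| rest splunk_server=local /services/server/info
| eval max_hist_searches = numberOfCores * 1 + 6
| table serverName, numberOfCores, max_hist_searches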
Also check out this app:
https://splunkbase.splunk.com/app/2632/
Check the efficiency of your users' searches. The following shows the longest-running searches by user (run it over the last 24 hours):
index="_audit" action="search" (id=* OR search_id=*)
| eval user=if(user=="n/a",null(),user)
| stats max(total_run_time) as total_run_time first(user) as user by search_id
| stats count perc95(total_run_time) median(total_run_time) by user
| sort - perc95(total_run_time)
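If instance B is hitting its concurrency limits, the scheduler will also be skipping or deferring scheduled searches. A quick sketch against the internal scheduler logs — status, reason, and savedsearch_name are standard scheduler.log fields, but treat the exact values as an assumption to verify on your version:

index=_internal sourcetype=scheduler (status=skipped OR status=deferred)
| stats count by savedsearch_name, status, reason
| sort - count

A high skip count on B with the reason pointing at concurrent-search limits would confirm the instance is overloaded rather than misconfigured.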