
Searches taking a long time to show results

power12
Communicator

I have a single-instance Splunk environment with a 100 GB license; there is another single instance using the same license. Combined, we ingest around 6 GB of data per day. Instance A is very fast, but instance B is very slow (both have the same resources).

All searches and dashboards are really slow. For instance, if I run a simple stats search over 24 hours, it takes 25 seconds, compared to 2 seconds on the other instance. I checked the Job Inspector, which showed:

dispatch.evaluate.search = 12.84
dispatch.fetch.rcp.phase_0 = 7.78

I want to know where I should start checking on the host and what steps should be taken.


bowesmana
SplunkTrust

Is the data the same or different?

What is the search in each case?

Take a look at the Job Inspector and job properties:

https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-clara-fication-job-inspector.html

Have a look at the phase0 job property in each case, and also look at the LISPY in search.log.
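For a quick side-by-side comparison of job properties, here is a minimal sketch using the REST jobs endpoint (run it on each instance; sid, runDuration, and scanCount are standard job properties, and splunk_server=local just restricts the call to that box):

| rest /services/search/jobs splunk_server=local
| table sid, label, runDuration, scanCount, eventCount
| sort - num(runDuration)

In search.log (linked from the Job Inspector), the line mentioning "base lispy" shows the literal terms the search actually scans the index for; a difference there between the two instances would point at different props/transforms.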


isoutamo
SplunkTrust
Is the data stored the same way in both environments? For example, are the same indexes, sourcetypes, and props/transforms defined in both?
Are the I/O resources equal on both nodes? Is anything other than Splunk running on those nodes?
You should set up the Monitoring Console (MC) on both nodes and use it to see what is happening. Start with the health check; it will tell you if any configurations do not meet Splunk's requirements.
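To check the "stored the same way" part quickly, you can dump the index definitions on each node over REST and diff the two outputs. A rough sketch; the fields listed are standard attributes of the data/indexes endpoint:

| rest /services/data/indexes splunk_server=local
| table title, homePath, coldPath, maxTotalDataSizeMB, frozenTimePeriodInSecs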

kiran_panchavat
Contributor

@power12 

The best place to start is by analyzing the Job Inspector:

https://docs.splunk.com/Documentation/Splunk/latest/Search/ViewsearchjobpropertieswiththeJobInspecto... 

Use the Monitoring Console (https://docs.splunk.com/Documentation/Splunk/latest/DMC/DMCoverview) to check the health of Splunk.

Also use OS-level tools to troubleshoot system performance, such as vmstat, iostat, top, and lsof. Look for processes hogging CPU or memory, and for high iowait times on your disk array.
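If you prefer to stay inside Splunk, a single instance also records host-wide CPU, memory, and load metrics in the _introspection index. A sketch, assuming the default resource-usage introspection input is enabled:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart avg(data.cpu_system_pct) AS cpu_system avg(data.cpu_user_pct) AS cpu_user avg(data.normalized_load_avg_1min) AS load_1min

Run it over the same window on both instances; sustained high system CPU or load on instance B but not on A points at host-level contention rather than the searches themselves.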

Here is a good explanation of how to calculate search concurrency limits:

https://answers.splunk.com/answers/270544/how-to-calculate-splunk-search-concurrency-limit-f.html 
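In short, the limit on concurrent historical searches is derived from two limits.conf settings. With the defaults (max_searches_per_cpu = 1, base_max_searches = 6) and a hypothetical 16-core host, the arithmetic works out to:

max_hist_searches = max_searches_per_cpu x number_of_cpus + base_max_searches
                  = 1 x 16 + 6
                  = 22 concurrent historical searches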

Also check out this app:
https://splunkbase.splunk.com/app/2632/ 

Check the efficiency of your users' searches. The following will show you the longest-running searches by user (run it over a 24-hour time range):

index="_audit" action="search" (id=* OR search_id=*)
| eval user=if(user=="n/a",null(),user)
| stats max(total_run_time) as total_run_time first(user) as user by search_id
| stats count perc95(total_run_time) median(total_run_time) by user
| sort - perc95(total_run_time)
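If you also want to see which searches those are, not just which users ran them, here is a variant of the same _audit query (info=completed events carry the full search string and total_run_time):

index=_audit action=search info=completed search=*
| stats max(total_run_time) as run_time by user, search
| sort - run_time
| head 20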
