Global users having slowness in Splunk



I have Splunk installed in Paris, and we are noticing a lot of slowness when users are logged in over the WAN.
We expect some slowness given the distance, but it is nearly unusable.

I have a lot of logic in my dashboards, so perhaps that is what is causing it. But is there anything else I can do?

For example, if a user logs in from NY, they nearly have to use Citrix to get a good connection. Since Splunk is web based and most of the heavy lifting is done on the server side, I am surprised.
Below is an example of one of the searches a dashboard could be running. I could have this run 10 times in parallel in one dashboard, so on the LAN it is fast, but over a long distance it becomes very difficult to use.

Would I need to install part of Splunk on servers locally to speed this up? If so, which parts, and how do I do this?

      <title>Filters: Command=$MXTIMING_Command_token$ Context=$MXTIMING_Context_token$ PATH=$source_path_search_token$ User=$MXTIMING_UserName_token$ NPID=$MXTIMING_NPID_token$ TYPE=$MXTIMING_TYPE_TOKEN$ Services=$NICKNAME_TOKEN$ Tags=$TAG_TOKEN$</title>

        <title>Tags Over $MAX_TIME$ and Threshold : Services=$NICKNAME_TOKEN$</title>
          <query>| tstats summariesonly=$summariesonly_token$ avg(MXTIMING.Elapsed) AS average FROM datamodel=MXTIMING_V2 WHERE
MXTIMING.Elapsed &gt; $MAX_TIME$
AND MXTIMING.source_path IN ($source_path_search_token$)
AND MXTIMING.UserName2 IN ($MXTIMING_UserName_token$)
AND MXTIMING.Context+Command IN ($MXTIMING_Context_token$)
AND MXTIMING.Context+Command IN ($MXTIMING_Command_token$)
GROUPBY MXTIMING.Context+Command MXTIMING.NPID MXTIMING.Time
| rename MXTIMING.Context+Command AS Context+Command
| rename MXTIMING.NPID AS NPID
| join NPID
    [| tstats summariesonly=$summariesonly_token$ count(SERVICE.NPID) AS count2 FROM datamodel=SERVICE WHERE (host=$host_token$ earliest=@w1)
     GROUPBY SERVICE.NICKNAME SERVICE.NPID
     | rename SERVICE.NPID AS NPID
     | rename SERVICE.NICKNAME AS NICKNAME]
| lookup MXTIMING_lookup_test Context_Command AS "Context+Command" OUTPUT Tags CC_Description Threshold Alert
| where average &gt; Threshold OR isnull('Threshold')
| fillnull Tags
| eval Tags=if(Tags=0,"NO_TAG",Tags)
| eval Tags=split(Tags,",")
| stats count(average) AS count BY Tags
| sort Tags
| append
    [| inputlookup stars.csv
     | table Column1 Column2
     | rename Column1 AS Tags
     | rename Column2 AS count]
| sort Tags</query>
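For reference, the usual dashboard-side fix for this pattern is to run the expensive tstats once as a base search and let each panel post-process the shared result set, so the dashboard manages one search instead of ten. Below is a minimal Simple XML sketch, assuming a hypothetical base search id (base_mxtiming), a fixed time range, and an arbitrary example threshold; it is not the original dashboard:

    <dashboard>
      <!-- Run the expensive data model scan once -->
      <search id="base_mxtiming">
        <query>| tstats summariesonly=true avg(MXTIMING.Elapsed) AS average
            FROM datamodel=MXTIMING_V2
            GROUPBY MXTIMING.Context+Command MXTIMING.NPID MXTIMING.Time</query>
        <earliest>-24h</earliest>
        <latest>now</latest>
      </search>
      <row>
        <panel>
          <table>
            <!-- Post-process the shared results instead of launching
                 another tstats search; the threshold 10 is only an example -->
            <search base="base_mxtiming">
              <query>search average &gt; 10 | stats count BY "MXTIMING.Context+Command"</query>
            </search>
          </table>
        </panel>
      </row>
    </dashboard>

Each post-process query runs against the base search's results on the server, so only the final, much smaller table travels over the WAN to the browser.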

Robert Lynch



To be honest, this issue is not happening anymore after an upgrade to 7.0.3. Perhaps something changed.


Esteemed Legend

Splunk searches are only as fast as the slowest indexer. If your Search Head has recently been peered to even a single poorly performing Indexer, it will slow down every search for everyone: the Search Head gets fast results from all the other Indexers but waits and waits for that one Indexer to send its results back. As for users being far away from the Search Head, that should not make any difference at all.
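If you want to confirm whether one peer is dragging, a quick sanity check, assuming the standard _internal index and metrics.log fields (the 5-minute span is arbitrary), is to compare indexing throughput per server:

    index=_internal source=*metrics.log* group=thruput name=index_thruput
    | timechart span=5m avg(instantaneous_kbps) BY host

A peer whose curve sits far below the others, or that drops out entirely, is usually the one every distributed search is waiting on.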



At the moment I have one indexer and one search head.
To be honest, this issue is not happening anymore after the upgrade to 7.0.3. Perhaps something changed.
