
Search Progress Indicator

Sriram
Communicator

Is there a way to show the status of search jobs while a search is in progress? I have a dashboard with multiple searches that populate charts in different panels. When the search response time is slow, I can't tell whether the search is in progress or hasn't started yet. The JobProgressIndicator module isn't showing the progress bar on many occasions. (I found another thread which explains that the progress bar only shows up for the main search pipeline: http://splunk-base.splunk.com/answers/22773/search-progress-bar-disappeared)

Any suggestions?

1 Solution

sideview
SplunkTrust

If you don't have a subsearch, or if you do but the subsearch isn't the most expensive part of the search, then the issue described in that other answers topic isn't going to be what's going on.

For the record, I'm also assuming that you're using the 'simplified' XML, i.e. the top-level tag in your view is "".

One thought is that perhaps you have an expensive search that's being used to populate the options of a dynamic pulldown. If that's the case, the core Splunk modules don't give you any way of showing job progress. If, however, you switch to the advanced XML and start using the Sideview Utils modules instead of the relevant core modules, this is relatively straightforward. The "Pulldown" module from Sideview Utils has its own little job progress indicator built into it, and if you really need a bigger one you can use the "JobProgressIndicator" module as well.

( http://splunk-base.splunk.com/apps/36405/sideview-utils )
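To give a rough idea of the shape, here's a minimal sketch in the advanced XML. The module nesting is standard, but the populating search is hypothetical and the Pulldown param names are from memory of the Sideview Utils docs, so treat this as a starting point rather than a drop-in view:

<module name="Search" layoutPanel="panel_row1_col1">
  <!-- hypothetical search that populates the pulldown options -->
  <param name="search">index=statsindex sourcetype=STATSAPP | stats count by userUid</param>

  <!-- optional: a full-size progress bar for the populating search -->
  <module name="JobProgressIndicator"/>

  <!-- assumption: Pulldown param names (name/label/valueField) as I remember them from the Sideview Utils docs -->
  <module name="Pulldown">
    <param name="name">userUid</param>
    <param name="label">User</param>
    <param name="valueField">userUid</param>
  </module>
</module>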

Can you paste in the XML of your view, either here or on something like Pastebin? That will help us give you more targeted feedback.

UPDATE 1:
Thanks for posting the XML. I'm glad you're already over in the advanced XML and using Sideview Utils. 😃

Indeed, the problem here is that most of the time spent in the search is actually spent within the subsearches, so it is very similar to the problem in the other Answers topic you linked to.

However!! There's good news. I think it's quite possible to rewrite this search without any appends, and even if the progress-reporting weren't a desirable outcome, there are plenty of other reasons to do so. You're basically making Splunk run the same search over three different timeranges, but since the smaller timeranges are contained in the larger one, we can do this much more easily (from splunkd's perspective) in a single search pipeline. And when we do so, the progress numbers will just start working.

I'll post an updated search shortly, to show you how it'll work without the appends.

UPDATE 2:

Here's the kind of search I recommend. I've left off the transpose bit at the end, so the main difference here is easier to see:

index=statsindex sourcetype=STATSAPP $userUid$ earliest=@mon-7d
| eval timeColumn="deleteLater"
| eval timeColumn=if(_time>relative_time(now(),"@d"),timeColumn + ",Today", timeColumn)
| eval timeColumn=if(_time>relative_time(now(),"@w0"),timeColumn + ",This Week", timeColumn)
| eval timeColumn=if(_time>relative_time(now(),"@mon"),timeColumn + ",This Month", timeColumn)
| eval timeColumn=split(timeColumn,",")
| stats count(eval(EventTypeDescription="RELEASE" OR EventTypeDescription="REJECT" OR EventTypeDescription="SUSPEND" OR EventTypeDescription="CEO" OR EventTypeDescription="WITHDRAW")) AS "Worked" count(eval(EventTypeDescription="SUSPEND")) AS "Pended" count(eval(EventTypeDescription="RELEASE")) AS "Processed" by timeColumn
| search timeColumn!="deleteLater"

-- All the weird evals end up painting a multi-valued field on each event, where the values are like "Today,This Week" or "Today,This Week,This Month". Then stats knows how to carve them all up correctly.
-- Splunkd now only has to get each event off disk once, rather than having to get the same event off disk 1, 2, or 3 times.
-- Since it's in a single search pipeline, the numbers reported for the "progress" will reflect the true job progress, so the JobProgressIndicator module will now work.
-- It's quite likely that you were hitting search-time limits around the subsearches. When you use subsearches, beware that splunkd can 'autofinalize' subsearches that take more than 30 seconds. Sometimes you'll see an error message ( http://splunk-base.splunk.com/answers/11949/auto-finalized-after-time-limit-reached ), but I'm not sure you always get one.
-- It's also possible that you were hitting limits around the number of rows in subsearches. If the subsearches were matching more than 50,000 rows, I think those results will have been quietly truncated before they actually got appended. ( http://splunk-base.splunk.com/answers/30678/append-and-max-results-50000 )
-- Note that, to accommodate the somewhat slippery boundaries around "start of current month" vs "start of current week", I've made the timerange run over "earliest=@mon-7d". This says "go back to the start of the current month, then go back 7 more days to be safe." There's some weird trickery you can do, getting a subsearch with some eval to calculate a more optimized timerange for you, but this will work. Note that you'll probably want to clip off that "(Prior to all these timeranges)" bit in the case statement.
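For completeness, here's a hedged sketch of the transpose bit to tack on the end of the search above, to get back to the Today / This Week / This Month column layout. Note that header_field is a newer transpose option; if your Splunk version doesn't have it, a plain | transpose | rename works too, but remember that stats sorts the rows alphabetically by timeColumn, so check which "row N" is which:

| transpose header_field=timeColumn column_name=Claims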

I know it's a super complicated answer, but I hope it helps!


Sriram
Communicator

Thanks for your response. I see this problem with HiddenSavedSearch or Pulldown as well. I am using the advanced XML + Sideview Utils. Some queries do have a subsearch. While I do have a few complex queries (like the one below), the problem is that sometimes the screen is so quiet for several seconds that it is really difficult to tell whether the query is in progress or has failed due to poor performance. Here is one query. Apart from JobProgressIndicator, are there any modules which can be used to show a "loading..." or "please wait..." status?

sample 1:


index=statsindex sourcetype=STATSAPP $userUid$ earliest=@d latest=now
| stats count(eval(EventTypeDescription="RELEASE" OR EventTypeDescription="REJECT" OR EventTypeDescription="SUSPEND" OR EventTypeDescription="CEO" OR EventTypeDescription="WITHDRAW")) AS "Worked" count(eval(EventTypeDescription="SUSPEND")) AS "Pended" count(eval(EventTypeDescription="RELEASE")) AS "Processed"
| append [search index=statsindex sourcetype=STATSAPP $userUid$ earliest=@w0 latest=now | stats count(eval(EventTypeDescription="RELEASE" OR EventTypeDescription="REJECT" OR EventTypeDescription="SUSPEND" OR EventTypeDescription="CEO" OR EventTypeDescription="WITHDRAW")) AS "Worked" count(eval(EventTypeDescription="SUSPEND")) AS "Pended" count(eval(EventTypeDescription="RELEASE")) AS "Processed"]
| append [search index=statsindex sourcetype=STATSAPP $userUid$ earliest=-0mon@mon latest=now | stats count(eval(EventTypeDescription="RELEASE" OR EventTypeDescription="REJECT" OR EventTypeDescription="SUSPEND" OR EventTypeDescription="CEO" OR EventTypeDescription="WITHDRAW")) AS "Worked" count(eval(EventTypeDescription="SUSPEND")) AS "Pended" count(eval(EventTypeDescription="RELEASE")) AS "Processed"]
| transpose
| rename "column" AS "Claims" "row 1" as "Today" "row 2" as "Current Week" "row 3" as "Current Month"



[remainder of the posted view XML was stripped; it included an HTML module showing "Search String is: $search$" plus assorted paging/table module params]






sideview
SplunkTrust

Yeah... pretty insanely complicated. Again, it gets a lot simpler if you break it up as one base search and three postProcess searches, each with an HTML module to render its count.


Sriram
Communicator

Wow! That was some search suggestion!! Now my search results show zero values. My query is very bloated and has serious meat now. 🙂 I will definitely try exploring the HTML module with SingleValue. I can't thank you enough for your quick and effective solution.


sideview
SplunkTrust

However, you might want to think about a different approach. Instead of one search yielding one table, you could have one base search (up to the big stats command, basically), and then three different PostProcess modules, each with its own HTML module underneath. Then you have a lot more freedom to carve up that base search differently, without all these acrobatics. (Replace HTML with SingleValue and PostProcess with HiddenPostProcess if you aren't using Sideview Utils.)
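Roughly, the nesting would look something like this. This is only a hedged sketch: the base search is the single-pipeline search from the accepted answer (elided here), and the PostProcess/HTML param names and the $results[0].fieldName$ token syntax are from memory of the Sideview Utils docs, so double-check against the docs pages that ship with the app:

<module name="Search" layoutPanel="panel_row1_col1">
  <!-- the base search: everything up to and including the big stats ... by timeColumn -->
  <param name="search">index=statsindex sourcetype=STATSAPP $userUid$ | eval ... | stats ... by timeColumn</param>
  <param name="earliest">@mon-7d</param>

  <module name="JobProgressIndicator"/>

  <!-- one PostProcess + HTML pair per timerange; repeat for "This Week" and "This Month" -->
  <module name="PostProcess">
    <param name="search">search timeColumn="Today"</param>
    <module name="HTML">
      <param name="html"><![CDATA[ Worked today: $results[0].Worked$ ]]></param>
    </module>
  </module>
</module>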

sideview
SplunkTrust

Without changing the approach, here's the sort of search syntax you can put in immediately after our big stats command, and this will guarantee that zeros show up in the final results.

| append [| stats count | eval timeColumn="Today,This Week,This Month" | fields - count | eval Processed=0 | eval Worked=0 | eval Pended=0 | eval timeColumn=split(timeColumn,",") | mvexpand timeColumn]
| stats sum(Processed) as Processed sum(Pended) as Pended sum(Worked) as Worked by timeColumn
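For reference, here's roughly how that slots into the single-pipeline search from the accepted answer, with the field names assumed to stay Worked / Pended / Processed throughout:

index=statsindex sourcetype=STATSAPP $userUid$ earliest=@mon-7d
| eval timeColumn="deleteLater"
| eval timeColumn=if(_time>relative_time(now(),"@d"),timeColumn + ",Today", timeColumn)
| eval timeColumn=if(_time>relative_time(now(),"@w0"),timeColumn + ",This Week", timeColumn)
| eval timeColumn=if(_time>relative_time(now(),"@mon"),timeColumn + ",This Month", timeColumn)
| eval timeColumn=split(timeColumn,",")
| stats count(eval(EventTypeDescription="RELEASE" OR EventTypeDescription="REJECT" OR EventTypeDescription="SUSPEND" OR EventTypeDescription="CEO" OR EventTypeDescription="WITHDRAW")) AS "Worked" count(eval(EventTypeDescription="SUSPEND")) AS "Pended" count(eval(EventTypeDescription="RELEASE")) AS "Processed" by timeColumn
| search timeColumn!="deleteLater"
| append [| stats count | eval timeColumn="Today,This Week,This Month" | fields - count | eval Processed=0 | eval Worked=0 | eval Pended=0 | eval timeColumn=split(timeColumn,",") | mvexpand timeColumn]
| stats sum(Processed) as Processed sum(Pended) as Pended sum(Worked) as Worked by timeColumn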

Sriram
Communicator

Thanks Nick. It works great; however, there is a minor issue which I am not able to figure out. If there are no events, I want the values to default to zero. The above query is basically not pulling the columns after the "stats count". For example, if no events exist for "Today", I want the chart to show "Today" and zero. In this case, "Today" itself is not showing up. Any ideas?


Sriram
Communicator

I replied to your question in the answer section due to the space limitation 🙂
