Splunk Search

How to create a variable gauge range


Hey folks. I have an app whose throughput varies, as you might imagine. I want to use a gauge to measure the rate of submissions. My question is: how do I use the gauge command so that the range values vary with the amount of throughput?

For instance, today, over a 60-minute period, the values look like this (full is the column I'm interested in). I might create a gauge statement like ... | gauge count 0 500 1000 1500 2000

_time                           full
1   8/10/11 3:40:00.000 PM  827
2   8/10/11 3:50:00.000 PM  994
3   8/10/11 4:00:00.000 PM  980
4   8/10/11 4:10:00.000 PM  1027
5   8/10/11 4:20:00.000 PM  982
6   8/10/11 4:30:00.000 PM  1020
7   8/10/11 4:40:00.000 PM  321 

However, a day from now the full column can look like this (yes, I just added a zero):

_time                           full
1   8/10/11 3:40:00.000 PM  8270
2   8/10/11 3:50:00.000 PM  9940
3   8/10/11 4:00:00.000 PM  9800
4   8/10/11 4:10:00.000 PM  10270
5   8/10/11 4:20:00.000 PM  9820
6   8/10/11 4:30:00.000 PM  10200
7   8/10/11 4:40:00.000 PM  3210

The point is that my previous gauge statement will be grossly out of range. Is there a way to parameterize the gauge statement so it matches an expected or possible peak? Setting the values by hand isn't practical because the gauge ranges will shift with each update.

I've tried all kinds of garbage queries where I search over a day and then use combinations of streamstats and append to pass my dayCount variable down the pipeline to the gauge statement.

Something like this (it fails miserably), but hopefully its pseudocode nature will help show what I'm trying to accomplish:

source=*blobmetrics* savetype=full earliest=@d latest=now | streamstats max(count) as dayCount | append [ search source=*blobmetrics* savetype=full earliest=@h latest=now | streamstats max(count) by savetype | eval y2=round(dayCount/3) | eval y3=round((2/3)*dayCount) | stats count by savetype] | gauge count 0 y2 y3


It's an easy thing to overthink. The solution is to not use the gauge command at all.

The gauge command will create fields called x, y1, y2, y3, etc., but as you know, you can just create them yourself with eval.

So if you're already creating those fields and they look right, then you're extremely close. Just delete the gauge command entirely, add | rename count as x, and you're there.

However, I also think a few other things are wrong here: I believe you want stats rather than streamstats, and count rather than max(count). There were some other oddities, which I've cleaned up here:

source=*blobmetrics* savetype=full earliest=@d latest=now | stats count as dayCount | appendcols [ search source=*blobmetrics* savetype=full earliest=@h latest=now | stats count as x ] | eval y1=round(dayCount/3) | eval y2=round((2/3)*dayCount) | fields x,y1,y2
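If it helps to see the arithmetic outside of SPL, here's a minimal Python sketch of what that search computes (the function name and sample numbers are mine, purely for illustration): x is the last hour's count, and y1/y2 are one third and two thirds of the day's count.

```python
def gauge_fields(day_count, hour_count):
    """Derive the fields the search above builds: x is the last
    hour's count; y1/y2 are thirds of the full day's count."""
    return {
        "x": hour_count,
        "y1": round(day_count / 3),
        "y2": round(2 * day_count / 3),
    }

# e.g. gauge_fields(7000, 900) -> {'x': 900, 'y1': 2333, 'y2': 4667}
```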

And if you're still with me, there are ways to do this kind of search without using the append command at all, even though it searches two time ranges. Since you're subject to some tricky subsearch limits there, getting rid of the append can be a good thing.

source=*blobmetrics* savetype=full earliest=@d latest=now | eval isCurrentHour=if(_time>relative_time(now(), "@h"),"yes","no") | eval foo=1 | chart count over foo by isCurrentHour | fields - foo | eval x=yes | eval y1=round((1/3)*(yes+no)) | eval y2=round((2/3)*(yes+no))
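The single-pass trick above can be mirrored in Python to show what's going on (the function name and sample timestamps are mine, not from the thread): bucket each event by whether it falls in the current hour, take x from the current-hour bucket, and derive y1/y2 from the combined total.

```python
from datetime import datetime

def gauge_from_events(event_times, now):
    """One pass over the day's events: split on current hour
    (like the isCurrentHour eval), then derive x, y1, y2."""
    hour_start = now.replace(minute=0, second=0, microsecond=0)
    yes = sum(1 for t in event_times if t > hour_start)  # current hour
    no = len(event_times) - yes                          # earlier today
    total = yes + no
    return {"x": yes, "y1": round(total / 3), "y2": round(2 * total / 3)}
```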


Yep, I'm with ya, dude. And I see you've employed my good friend relative_time. I too am not a big fan of append or appendcols; I've run into major problems sorting "appended" results, so I've shied away from their use. Hence relative_time. I will certainly muck with this. I only chose the gauge command because I'm looking for pretty graphics for manager types ;-)... Thanks for the reminder about the coolness of the if/relative_time combo!
