Monitoring Splunk

Time Graph question regarding license usage as the day goes on

jeck11
Path Finder

I am new to my admin role and need to get a better handle on our usage as the day goes on. We're always close to our license cap, so I need the operators to be able to see visually whether we're trending toward breaching our daily license usage. Here is an example I quickly threw together in Paint (I know it sucks and is busy) to illustrate what I'm after.

Let's pretend it's currently 6pm on Thursday.
Here's my legend:
Red line at 300 GB would show our daily limit
Orange represents the usage the prior Thursday.
Blue is my usage today as the day goes on.
Yellow is my trendline roughly predicting if we're going to breach.

I have been trying to use the out-of-the-box "License Usage Dashboard", but that is only good at showing me what caused me to break my cap the prior day. I need something that warns me in time to stop me from breaking it today.

my crappy example


woodcock
Esteemed Legend

I believe that this came from Christopher Boggs:

index=_internal source=*license_usage.log* type=Usage
| timechart span=1h sum(b) AS volume_b 
| predict volume_b AS prediction algorithm=LLP period=24 future_timespan=24
| addinfo 
| where _time>=relative_time(info_max_time, "@d") AND _time<relative_time(info_max_time, "+d@d") 
| fields - info*
| eval merged = coalesce(volume_b, prediction) 
| stats sum(merged) as predicted_volume sum(volume_b) as volume_so_far 
| eval volume_so_far=round(volume_so_far/1024/1024/1024,2)
| eval predicted_volume=round(predicted_volume/1024/1024/1024,2) 
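A possible follow-up (just a sketch, assuming the 300 GB daily cap from the original question; swap in your real stack size, and note that daily_limit, pct_of_limit, and status are only illustrative field names): add a few evals at the end so a single-value panel or alert can flag whether today is on track to breach:

index=_internal source=*license_usage.log* type=Usage
| timechart span=1h sum(b) AS volume_b
| predict volume_b AS prediction algorithm=LLP period=24 future_timespan=24
| addinfo
| where _time>=relative_time(info_max_time, "@d") AND _time<relative_time(info_max_time, "+d@d")
| fields - info*
| eval merged = coalesce(volume_b, prediction)
| stats sum(merged) as predicted_volume sum(volume_b) as volume_so_far
| eval volume_so_far=round(volume_so_far/1024/1024/1024,2)
| eval predicted_volume=round(predicted_volume/1024/1024/1024,2)
| eval daily_limit=300
| eval pct_of_limit=round(predicted_volume/daily_limit*100,1)
| eval status=if(predicted_volume>daily_limit, "on track to breach", "OK")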

davidwholland
New Member

Hi.

I think this is a start on what you're looking for (FYI, it does require the data model provided by Meta Woot: https://splunkbase.splunk.com/app/2949/).

| pivot Meta_Woot_License_Usage License_Usage sum(gb) AS "LICGB" SPLITROW _time AS _time PERIOD 15m SORT 0 _time ROWSUMMARY 0 COLSUMMARY 0 SHOWOTHER 1
| accum LICGB as TOTALGB
| timechart span=15m avg(TOTALGB)

Set the time period selector for "Today"

It does have the advantage of being #@! fast. Searching the _internal index here at work for license events returns some 13+ million events, which isn't a useful way to go about querying license utilization, IMO.

You'll have to do some "| append" search commands to add the "week ago" results, and one for the stacksize as well. (Personally, I'd hard code that value.)
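Something along these lines might work for the append (a rough, untested sketch: the "week ago" series comes from _internal rather than the Meta Woot data model, simply because inline earliest/latest are easy to set there; it is shifted forward 7 days so it overlays today, and the 300 GB cap from the original question is hard coded; run it with the time range set to Today):

| pivot Meta_Woot_License_Usage License_Usage sum(gb) AS "LICGB" SPLITROW _time AS _time PERIOD 15m SORT 0 _time ROWSUMMARY 0 COLSUMMARY 0 SHOWOTHER 1
| accum LICGB AS today
| append
    [ search index=_internal source=*license_usage.log* type=Usage earliest=-7d@d latest=-6d@d
    | timechart span=15m sum(b) AS b
    | accum b AS last_week
    | eval last_week=round(last_week/1024/1024/1024,3)
    | eval _time=_time+604800
    | fields _time last_week ]
| eval daily_limit=300
| timechart span=15m max(today) AS today max(last_week) AS last_week max(daily_limit) AS daily_limit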

I have approximately zero idea how to add in a trend line... perhaps someone else can figure out how to add that one and volunteer details. I've never had to use the trendline command before.
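For what it's worth, trendline only smooths with a moving average (simple, weighted, or exponential), so on its own it won't project forward toward the cap; the predict-based answer elsewhere in this thread is closer to a real forecast. If a smoothed line is enough, though, a minimal (untested) sketch on top of the search above would be:

| pivot Meta_Woot_License_Usage License_Usage sum(gb) AS "LICGB" SPLITROW _time AS _time PERIOD 15m SORT 0 _time ROWSUMMARY 0 COLSUMMARY 0 SHOWOTHER 1
| accum LICGB AS TOTALGB
| timechart span=15m avg(TOTALGB) AS total_gb
| trendline sma4(total_gb) AS trend

(sma4 here is just a one-hour simple moving average over the 15-minute buckets.)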

I'll post further details if I get anywhere else... I've got a variation that spits out a "usage over a week, for the last 3 weeks" report, but it's too painful to run in real time due to the number of licensing events involved. This is something I wouldn't mind seeing as well...

David

PS: This works on 7.0.2 (yes, I know I need to upgrade - working on it.)


davidwholland
New Member

This is closer... (Do the advanced time search of -7d@d to "now")

Change STACKSZ as appropriate. Still no trending, though... I don't seem to be able to get it to work.

I use the chart command instead of timechart as the timechart seems to want to graph all 7 days, even though the if() statement reduces the event times to only 24 hours.

| pivot Meta_Woot_License_Usage License_Usage sum(gb) AS "LICGB" SPLITROW _time AS _time PERIOD minute SORT 0 _time ROWSUMMARY 0 COLSUMMARY 0 SHOWOTHER 1
| eval evt_dow=strftime(_time, "%A")
| eval cur_dow=strftime(now(),"%A")
| where evt_dow=cur_dow
| eval evt_date=strftime(_time, "%d")
| eval my_time=if( ( now() - _time  ) > 86400, _time + (86400 * 7 ), _time )
| streamstats sum(LICGB) AS TOTALGB BY evt_date
| eval TOTALGB{evt_date}=TOTALGB
| fields - TOTALGB
| eval STACKSZ=1000
| chart max(TOTALGB*), max(STACKSZ) by my_time

Sigh, no convenient way to add a screenie, you'll have to trust me that it's working. 🙂

David


Sukisen1981
Champion

hi @jeck11

Below is the complete code. Points to be noted:
I have removed the split by index (removed the "by idx" clause in the timecharts), so this gives values for ALL indexes.
Test this first; then we can decide if you need a 'top' indexes view or an index-wise split.

index=_internal source=*license_usage.log* idx
| timechart span=1h avg(b)
| fillnull value=0
| timewrap 1week
| eval day=strftime(_time, "%A")
| eval today=strftime(now(), "%A")
| where day=today
| eval date=strftime(_time,"%d")
| eventstats max(_time) as maxtime
| eval max_date=strftime(maxtime,"%d")
| where date=max_date
| fields _time,*latest_week 
| rename avg(b)_latest_week as crnt_wk
| append
    [search index=_internal source=*license_usage.log* idx
| timechart span=1h avg(b)
| fillnull value=0
| timewrap 1week
| eval day=strftime(_time, "%A")
| eval today=strftime(now(), "%A")
| where day=today
| eval date=strftime(_time,"%d")
| eventstats max(_time) as maxtime
| eval max_date=strftime(maxtime,"%d")
| where date!=max_date
| fields _time,*latest_week
    | rename avg(b)_latest_week as lst_wk]
| eval Threshold=3000
| eval time=strftime(_time,"%H")
| eval _time=time
| fields - _time
| fillnull value=0
| fields time,crnt_wk,lst_wk,Threshold
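If you want crnt_wk and lst_wk in GB rather than raw byte values (assuming b in license_usage.log is bytes), a conversion like this can be appended to the end of the search above:

| foreach crnt_wk lst_wk [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024,3)]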

Sukisen1981
Champion

hi @jeck11
many apologies, adding the comment corrupted the code. I am posting this as an answer just to preserve the code, it is just a starter.

index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1h
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1h sum(b) AS volumeB by idx cont=f

the values per index are in bytes, so you need to pipe

| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024,3)]

to see them in GB. Please try it and confirm back.


jeck11
Path Finder

Nope. I'm supposed to select last 7 days, correct?


Sukisen1981
Champion

yes, please select last 7 days from the time picker
and btw, which Splunk version are you on?


Sukisen1981
Champion

if I run just this:

index=_internal source=*license_usage.log* idx
| timechart span=5min avg(b) by idx

it shows me the average size by index in bytes


Sukisen1981
Champion

What I wanted you to verify is whether the size it gives for the respective indexes more or less matches your expectations. We can build the graphs/timecharts later, but first our base data has to be correct.


jeck11
Path Finder

Yes, that one is pulling back the correct table of values. My version of Splunk Cloud is 7.0.11.1, it looks like.


Sukisen1981
Champion

hi @jeck11
I am sorry, it's very late here (IST), but I think we are almost done.
Try this:

index=_internal source=*license_usage.log* idx
| timechart span=1h avg(b) by idx
| fillnull value=0
| timewrap 1week
| eval day=strftime(_time, "%A")
| eval today=strftime(now(), "%A")
| where day=today

This should give you the line graphs in blue and orange, that is, today vs. the same day last week.
I will come up with the trend tomorrow; meanwhile, I want you to ponder how you want the dashboard to look.
If you have 5-10 indexes, is it better to go for individual charts dedicated to each index, OR do you just want to sum up the indexes and have one chart against your overall Splunk daily indexing limit?
It is not wise to plot all indexes in one chart, both from the coding perspective and from a readability perspective.
I suggest going for option 2; what we really care about is the sum of data indexed against your overall daily indexing limit. Please let me know, and please feel free to upvote my answer/comment if it has helped you significantly so far.
But now I must catch some sleep. I will look into this again tomorrow morning; hopefully your thoughts/inputs will be shared by then.
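As a very rough sketch of option 2 (one summed series against the daily cap; the 300 GB limit comes from your first post, and the exact field names produced by timewrap may differ slightly, so check them in the output), run over the last 7 days:

index=_internal source=*license_usage.log* type=Usage
| timechart span=1h sum(b) AS volume_b
| timewrap 1week
| eval day=strftime(_time, "%A")
| eval today=strftime(now(), "%A")
| where day=today
| foreach volume_b* [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024,3)]
| accum volume_b_latest_week AS running_today
| accum volume_b_1week_before AS running_last_week
| eval daily_limit=300
| fields _time, running_today, running_last_week, daily_limit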


Sukisen1981
Champion

hi @jeck11

just rehashed a bit of the code from the license usage dashboard - it's NOT your exact fit, but it is good for starters.
Assuming you will run this for the last 7 days:

index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by idx fixedrange=false
| join type=outer _time
    [ search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
    | eval _time=_time - 43200
    | bin _time span=1d
    | stats latest(stacksz) AS "stack size" by _time ]
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
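If you go with option 2 (one summed series against the cap), a pared-down, untested variation of the above drops the per-index split and keeps the stack size join, so the chart shows total GB indexed per day next to the license stack size (stack_size and pct_of_limit are just illustrative field names):

index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| bin _time span=1d
| stats sum(b) AS volumeB by _time
| join type=outer _time
    [ search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
    | eval _time=_time - 43200
    | bin _time span=1d
    | stats latest(stacksz) AS stack_size by _time ]
| eval volumeB=round(volumeB/1024/1024/1024,3)
| eval stack_size=round(stack_size/1024/1024/1024,3)
| eval pct_of_limit=round(volumeB/stack_size*100,1)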


jeck11
Path Finder

It appears I'm missing something. I got no results for your search.
error screen
