Splunk Search

Change color of Column Chart bars based on average of last week

Bleepie
Communicator

Dear Splunk Community,

I have the following statistics table and corresponding column chart that show the number of errors per server:

[screenshot: statistics table of errors per server]

and

[screenshot: corresponding column chart]

Right now I am changing the colors on the column chart based on a static value like so:

index="myIndex" 
host="myHostOne*" OR host="myHostTwo*" 
source="mySource" 
ERROR 
NOT WARN
CTJT*
| table host, errors
| eval errors = host | stats count by host
| eval redCount = if(count>50,count,0)
| eval yellowCount = if(count<=50 AND count>25,count,0)
| eval greenCount = if(count <=25,count,0)
| fields - count
| dedup host
| rex field=host mode=sed "s/\..*$//"
| sort host asc
| rename host AS "Servers"
| rename count AS "Foutmeldingen"


The above uses a time range of "last 24 hours". I would like the color of each bar (green, yellow, red) to depend on how today's error count compares, as a percentage, to the average of last week.

To summarize, I would like to:

  • Somehow get the average number of errors per server per day for the last 7 days
  • Then specify a percentage for each color (e.g. if the number of errors today is 25% more than the average of last week, make the bar red)

I have no idea how to do this; can anyone help? Thanks in advance.

1 Solution

ITWhisperer
SplunkTrust
index="myIndex" earliest=-7d@d latest=now
host="myHostOne*" OR host="myHostTwo*" 
source="mySource" 
ERROR 
NOT WARN
CTJT*
| bin _time span=1d
| stats count by host _time
| eventstats avg(count) as average by host 
| where _time=relative_time(now(),"@d")
| eval red=if(count>average*1.25,count,0)
| eval yellow=if(count<=average*1.25 AND count>average,count,0)
| eval green=if(count<=average,count,0)
| fields - _time count average
| rex field=host mode=sed "s/\..*$//"
| sort host asc
| rename host AS "Servers"


ITWhisperer
SplunkTrust
index="myIndex" earliest=-7d@d latest=@d
host="myHostOne*" OR host="myHostTwo*" 
source="mySource" 
ERROR 
NOT WARN
CTJT*
| bin _time span=1d
| stats count by host _time
| eventstats avg(count) as average by host 
| where _time=relative_time(now(),"-1d@d")
| eval red=if(count>average*1.25,count,0)
| eval yellow=if(count<=average*1.25 AND count>average,count,0)
| eval green=if(count<=average,count,0)
| fields - _time count average
| rex field=host mode=sed "s/\..*$//"
| sort host asc
| rename host AS "Servers"

Bleepie
Communicator

Unfortunately I get 0 results in the table, as the counts are all 0; I do, however, get events:

[screenshots: the statistics table and the matching events]

Any clue?


Bleepie
Communicator

EDIT: I missed something; it looks like this now, but something with the date still seems a bit off:

[screenshot: current statistics output]


ITWhisperer
SplunkTrust

You had 

| table host, errors

which you didn't need.

The dates are based on yesterday and the previous 7 days. If you want today (so far), change latest to now() and change the where clause to include data from today instead of yesterday.
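
Concretely, only two lines need to change for "today so far": the time range becomes

index="myIndex" earliest=-7d@d latest=now

and the where clause becomes

| where _time=relative_time(now(),"@d")

with the rest of the query unchanged (this is exactly what the revised query further down this thread uses).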


Bleepie
Communicator

How do I correctly adjust the where clause? I want the last 24 hours (to match the statistics table), but using -24h or -24h@s gives me a "No results found." message.

Like so:

| where _time=relative_time(now(),"-24h@s")
| where _time=relative_time(now(),"-24h")

ITWhisperer
SplunkTrust

The issue you have is the mismatch between "last 24 hours" and "last 7 days": "last 24 hours" is snapped to the hour, whereas "last 7 days" is snapped to the day. This makes the query more complicated, so could you please specify exactly which time periods you want to compare, including the snapping you want for each period?
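
As a quick way to see what each snap modifier resolves to, a throwaway search along these lines (just makeresults with relative_time and strftime; the field names are arbitrary) prints the boundaries as readable timestamps (the parts between triple backticks are just comments and can be removed):

| makeresults
| eval last_24h     = strftime(relative_time(now(), "-24h"),   "%Y-%m-%d %H:%M:%S") ``` no snapping ```
| eval snapped_hour = strftime(relative_time(now(), "-24h@h"), "%Y-%m-%d %H:%M:%S") ``` snapped back to the start of the hour ```
| eval snapped_day  = strftime(relative_time(now(), "-7d@d"),  "%Y-%m-%d %H:%M:%S") ``` snapped back to midnight ```
| table last_24h snapped_hour snapped_day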


Bleepie
Communicator

Can we solve this issue if we change both the table and the chart to today, so both times are days rather than a mismatch between hours and days? That would work for me. Not sure if now() works like that and if it is possible. Thoughts?


ITWhisperer
SplunkTrust
index="myIndex" earliest=-7d@d latest=now
host="myHostOne*" OR host="myHostTwo*" 
source="mySource" 
ERROR 
NOT WARN
CTJT*
| bin _time span=1d
| stats count by host _time
| eventstats avg(count) as average by host 
| where _time=relative_time(now(),"@d")
| eval red=if(count>average*1.25,count,0)
| eval yellow=if(count<=average*1.25 AND count>average,count,0)
| eval green=if(count<=average,count,0)
| fields - _time count average
| rex field=host mode=sed "s/\..*$//"
| sort host asc
| rename host AS "Servers"

Bleepie
Communicator

Not sure if I can ask this in this topic since a solution has already been marked, but with the exact same query the following works:

For today:

earliest=-7d@d latest=now
| where _time=relative_time(now(),"@d")

 For yesterday:

earliest=-7d@d latest=now
| where _time=relative_time(now(),"-1d@d")

 Even this works:

earliest=-7d@d latest=now
| where _time=relative_time(now(),"-2d@d")

But this does not work:

earliest=-1mon@mon latest=now
| where _time=relative_time(now(),"-7d@d")

The last query returns only a handful of results (even fewer than today's). Any idea why? I am expecting hundreds/thousands of results.


ITWhisperer
SplunkTrust

You may have hit some limit on the number of events or amount of memory used. Do you get any errors / warnings when the job is executed?


Bleepie
Communicator

I got no errors. I tried changing the query like so:

earliest=-2d@d latest=now
| where _time=relative_time(now(),"-2d@d")

It returns 2959 events and the following statistics:

Servers   green   red
01        0       388
02        110     0
03        0       1243
04        0       251
05        177     0
06        189     0
07        193     0

When changing to 3 days:

earliest=-3d@d latest=now
| where _time=relative_time(now(),"-3d@d")

It returns 3024 events.

But these are the resulting statistics:

Servers   green   red
01        11      0
02        26      0
03        13      0
04        15      0


ITWhisperer
SplunkTrust

That looks correct.

There were 2959 events between the beginning of the day before yesterday and now, and 3024 between the beginning of the day before that and now. 3024 - 2959 = 65, which is the number of events on that additional day, and 11 + 26 + 13 + 15 = 65.
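
To see why only a single day's counts end up in the statistics: after | bin _time span=1d every row carries the midnight timestamp of its day, so an equality test against relative_time keeps exactly one daily bucket. A minimal, self-contained sketch of that behaviour (using makeresults instead of the real index):

| makeresults count=5
| streamstats count as n
| eval _time=relative_time(now(), "-" . n . "d@d") ``` five synthetic rows, one per daily bucket ```
| where _time=relative_time(now(), "-3d@d") ``` equality keeps only the bucket that starts exactly 3 days ago ```

Changing the equality to >= would keep every bucket from that point onwards.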


Bleepie
Communicator

Aaah okay, I was under the impression that when using relative_time you fetch everything between the start date and now. I was expecting all the errors counted between now and 3 days ago. How would I edit this query to get all errors counted for the past 7 days?


ITWhisperer
SplunkTrust

relative_time will give you an epoch date time value. You can compare _time (from the events in the pipeline) to that point in time and keep everything equal to or after that time.

| where _time>=relative_time(now(),"-7d@d")
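
If the end goal is a single total of errors per server over the past 7 days (rather than one row per host per day), one possible variation, offered only as an untested sketch on top of the query above, keeps those daily buckets and then sums them per host (the total_errors field name is arbitrary):

index="myIndex" earliest=-7d@d latest=now
host="myHostOne*" OR host="myHostTwo*" 
source="mySource" 
ERROR 
NOT WARN
CTJT*
| bin _time span=1d
| stats count by host _time
| where _time>=relative_time(now(),"-7d@d") ``` keep the last seven daily buckets (plus today so far) ```
| stats sum(count) as total_errors by host ``` one row per server ```
| rex field=host mode=sed "s/\..*$//"
| sort host asc
| rename host AS "Servers"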

Bleepie
Communicator

Much obliged, got it all working now.


Bleepie
Communicator

Thanks a lot 🙂
