Dashboards & Visualizations

How to create a sensitivity table?

maayan
Path Finder

Hi,

I want to create a sensitivity table that shows how many errors happen, on average, in each time interval.

I wrote the following search and it works OK:

| eval time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q")
| bin span=1d time
| stats sum(SumTotalErrors) as sumErrors by time
| eval readable_time = strftime(time, "%Y-%m-%d %H:%M:%S")
| stats avg(sumErrors)


Now I want to:

1. Add a generic loop to calculate the average for spans of 1m, 2m, 3m, 5m, 1h, ... and present them all in one table. I tried replacing 1d with a parameter, but I haven't succeeded yet.

2. Give the user the option to enter their desired span in a dashboard and calculate the average errors for them.

How can I do that?

Thanks,
Maayan

1 Solution

dtburrows3
Builder

Never mind, @ITWhisperer beat me to it!



There are probably a couple of ways of doing this, but the following seemed to work for me on my local instance:

 

<base_search>
| eval time=strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q")
| appendpipe
    [| bucket span=1m time
    | stats sum(SumTotalErrors) as sumErrors by time
    | eval bucket_type="1 minute"]
| appendpipe
    [| bucket span=2m time
    | stats sum(SumTotalErrors) as sumErrors by time
    | eval bucket_type="2 minutes"]
| appendpipe
    [| bucket span=3m time
    | stats sum(SumTotalErrors) as sumErrors by time
    | eval bucket_type="3 minutes"]
| appendpipe
    [| bucket span=5m time
    | stats sum(SumTotalErrors) as sumErrors by time
    | eval bucket_type="5 minutes"]
| appendpipe
    [| bucket span=1h time
    | stats sum(SumTotalErrors) as sumErrors by time
    | eval bucket_type="1 hour"]
| stats count as sample_size, avg(sumErrors) as avg_sumErrors by bucket_type
| eval "Average Error Rate (Human Readable)"=round(avg_sumErrors, 0)." Errors per ".'bucket_type'
| addinfo
| eval search_time_window_sampled=tostring('info_max_time'-'info_min_time', "duration")
| fields - info_*_time, info_sid
| sort 0 +sample_size

 

It's not quite a loop, but I'm curious about this, so I will keep trying different approaches.

The output should look something like this:

[attached screenshot: sample table of average error rates by bucket_type]
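One untested, loop-like alternative to the repeated appendpipe blocks: fan each event out over a list of span lengths (in seconds) with mvexpand, then bucket with modulo arithmetic in eval, since the span argument of bin/bucket cannot come from a field. This is a sketch only; the span list and field names mirror the search above:

```
<base_search>
| eval time=strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q")
| eval span_s=split("60,120,180,300,3600", ",")
| mvexpand span_s
| eval span_s=tonumber(span_s)
| eval bucket=time - (time % span_s)
| stats sum(SumTotalErrors) as sumErrors by span_s, bucket
| stats count as sample_size, avg(sumErrors) as avg_sumErrors by span_s
| eval bucket_type=tostring(span_s, "duration")
| sort 0 +sample_size
```

Adding a new span is then just another number in the split() list. Be aware that mvexpand multiplies the event count by the number of spans, so watch memory limits on large result sets.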

As for the dashboard, you can set up an input token (e.g. a dropdown) that lets the user select a span, reference that token in the search:

| bucket span=$span$ time

and then run your stats command as before.
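For the related question of letting users type an arbitrary span rather than pick from a predefined list, a minimal Simple XML sketch would use a text input instead of a dropdown. The index, sourcetype, and labels here are illustrative placeholders, not from the original post:

```xml
<form>
  <label>Average errors by span</label>
  <fieldset submitButton="false">
    <!-- free-text input: the user can type any valid span, e.g. 90s or 4h -->
    <input type="text" token="span">
      <label>Bucket span (e.g. 1m, 5m, 1h)</label>
      <default>1m</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=my_index sourcetype=my_sourcetype
| eval time=strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q")
| bin span=$span$ time
| stats sum(SumTotalErrors) as sumErrors by time
| stats avg(sumErrors) as avgErrors</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

The search rerun picks up whatever the user typed via the $span$ token; an invalid span string will simply make the bin command error out, so you may want to validate the input.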

maayan
Path Finder

Thanks!! It works!
If you manage to do that with a loop (something like for i in (1h, 5m, 2m, 1m, ...)) that would be great,
because the query is very long 🙂

Regarding the parameter: yes, I can add a dropdown filter to my dashboard, but I wonder if I can let users type in the span themselves rather than choose from a predefined list in the dropdown.


ITWhisperer
SplunkTrust

You could try something like this:

| table _time SumTotalErrors
| appendpipe
    [| bin _time span=1m
    | stats avg(SumTotalErrors) as AverageBySpan by _time
    | eval Span="1m"]
| appendpipe
    [| bin _time span=2m
    | stats avg(SumTotalErrors) as AverageBySpan by _time
    | eval Span="2m"]
| appendpipe
    [| bin _time span=3m
    | stats avg(SumTotalErrors) as AverageBySpan by _time
    | eval Span="3m"]
| appendpipe
    [| bin _time span=5m
    | stats avg(SumTotalErrors) as AverageBySpan by _time
    | eval Span="5m"]
| appendpipe
    [| bin _time span=10m
    | stats avg(SumTotalErrors) as AverageBySpan by _time
    | eval Span="10m"]
| appendpipe
    [| bin _time span=1h
    | stats avg(SumTotalErrors) as AverageBySpan by _time
    | eval Span="1h"]
| where isnotnull(AverageBySpan)
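If you then want a single row per span, closer to the original two-stage search, you could append something like the line below (an untested sketch; OverallAverage is an illustrative field name). Note that this averages the per-bin averages; to get the average of per-bin sums as in the original search, change avg(SumTotalErrors) to sum(SumTotalErrors) in the subsearches above:

```
| stats avg(AverageBySpan) as OverallAverage by Span
```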

maayan
Path Finder

Thanks! Good solution, as always 🙂
