Splunk Search

Why can't timechart with eval recognize fields when I'm trying to get a percentage value per day?

andreacorrie
Explorer

I am trying to get a percentage of failures per day using timechart and eval but keep getting the error:

Error in 'timechart' command: The eval expression has no fields

|streamstats dc(failures) as failed by DeviceType
| streamstats dc(total) as sessions by DeviceType
| timechart span=1d eval(100-(failed/sessions)*100) as Percent_Error by DeviceType

Where am I going wrong?

1 Solution

somesoni2
Revered Legend

Give this a try:

your base search | streamstats dc(failures) as failed dc(total) as sessions by DeviceType | timechart span=1d avg(eval(100-(failed/sessions)*100)) as Percent_Error by DeviceType

andreacorrie
Explorer

Oh, ok, that makes sense. Thank you!

andreacorrie
Explorer

Thanks. This works, but the avg bothers me. What is being averaged? Shouldn't it just be one percentage value for each day? E.g., for Monday: 500 failures / 1000 sessions = 50%.

I tried swapping out avg with max and received different results. I guess I'm just confused about what is being averaged.

aweitzman
Motivator

You have to apply some function in timechart, since its purpose is to aggregate data over time. If you want it to just return a single value, you have to make sure that it's only aggregating one thing per time slice.
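A quick way to check how many things land in each time slice (just a sketch; it reuses the streamstats fields from the accepted answer and assumes your base search is unchanged):

your base search
| streamstats dc(failures) as failed dc(total) as sessions by DeviceType
| bucket _time span=1d
| stats count by _time, DeviceType

If that count is greater than 1 for any day, timechart has multiple values to aggregate for that DeviceType on that day.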

aweitzman
Motivator

Take out the timechart clause and look at your results. The avg or max operator applies to all the table entries within that day boundary associated with each DeviceType. So if there is 1 table entry per DeviceType, you're taking the avg or max of 1, which should just work. If there are multiple entries per DeviceType per day, though, you're going to have to figure out what operator to apply to those entries.

One way to see this more easily is to filter down to a single DeviceType (add a | where DeviceType="SomeDevice" clause after the streamstats clause).
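A sketch of that intermediate search, with the timechart removed ("SomeDevice" is just a placeholder for one of your real DeviceType values):

your base search
| streamstats dc(failures) as failed dc(total) as sessions by DeviceType
| where DeviceType="SomeDevice"

Run that and look at how many rows fall within each day; those rows are what avg or max gets applied to.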

aweitzman
Motivator

You're getting this error because there's no statistical function wrapping the eval expression you're charting.

It seems like what you really want to do is count total and failures on a per-day, per-DeviceType basis, and then have a chart that shows the failure percentage per DeviceType per day. Without seeing your actual data, it's hard to say exactly how to compute this.

However, here's an example you should be able to run locally, and then adapt to your own situation. It's something that computes the percentage of bytes retrieved by GET and POST every minute (so, per method per minute) within Splunk's internal index:

index=_internal
| bucket _time span=1m
| stats sum(bytes) as methodbytes by _time, method
| eventstats sum(methodbytes) as totalbytes by _time
| eval theperc=methodbytes/totalbytes*100
| timechart span=1m max(theperc) by method

How this works: the bucket command snaps each event's _time to the start of its time bucket, so every event in the same minute shares the same _time value. In your case, you'd want this to be span=1d.

The stats clause gives me the number of bytes retrieved by either GET or POST methods on a per-minute basis.

The eventstats clause creates a totalbytes column that has, for each minute, the sum of the methodbytes column for that minute. Unlike stats, eventstats doesn't collapse the rows; it just adds that sum as a new field, so it appears on both the GET and POST rows for each minute.

We then use an eval to compute the percentage for each method.

Finally, a timechart puts this data into the final format we want. The max function works here because there is only one percentage value per method per minute. (In your case, this would be one percentage value per DeviceType per day.)

Hopefully you can use this as a starting point for building what you need. I would recommend building it up clause by clause in your search box, just to see what kind of transformation happens at each step.
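Adapted to your situation, it might look something like the following. This is only a sketch: it assumes the failures, total, and DeviceType fields from your original search, and that a distinct count of failures and total per day is the right way to count them, which only your data can confirm:

your base search
| bucket _time span=1d
| stats dc(failures) as failed dc(total) as sessions by _time, DeviceType
| eval Percent_Error=100-(failed/sessions)*100
| timechart span=1d max(Percent_Error) by DeviceType

Because the stats clause leaves exactly one row per DeviceType per day, the max in the final timechart just passes that single percentage through.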
