Splunk Search

Trendline won't generate

zacksoft
Contributor

I wanted to build a trendline of my hosts' response_time over _time, but it won't generate.

source=my_perf
AND (host=A OR host=B OR host=C OR host=D OR host=E)
| base query
| trendline sma4(response_time) AS resp_time

I just want to show the trendline for at least one host. If it is possible to get all of them in one graph, that would be even better.

1 Solution

DalJeanis
Legend

The part you left out (marked base query) is needed in order to give you good advice.

Presumably, you are using a timechart to calculate the response_time for each unit of time. That might look something like this:

source=my_perf AND (host=A OR host=B OR host=C OR host=D OR host=E)
| fields host response_time
| timechart span=1m avg(response_time) by host

The trick to remember here is that after timechart, the fields are named after the hosts. In this case, the records will each look as if they came out of this command...

| table _time A B C D E 

...so now, to add a trend for the host named A, you need a command like this...

| trendline sma4(A) as A_trend

... and if you want one for each, you need to repeat that for each host name in the query, as in the sketch below.
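
A sketch of what that could look like for the five hosts above (trendline accepts several field specs in one command, so a single line will do):

| trendline sma4(A) as A_trend sma4(B) as B_trend sma4(C) as C_trend sma4(D) as D_trend sma4(E) as E_trend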


However, if you want the trend of the average, then we need to do some other magic.

On the one hand, you could use an untable command after the timechart and before the trendline, then calculate the average for each _time, then use xyseries to put them back together. However esoteric and cool that method might be, it seems a bit clumsy. Do that only if you need the average of the host response time averages, rather than an average of all transactions without regard to which host they were processed on.
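
If you do want that route, a rough sketch might look like this, assuming the same timechart as above (the appendpipe subsearch and the "Average" series name are just illustrative choices, not from the original post):

source=my_perf AND (host=A OR host=B OR host=C OR host=D OR host=E)
| fields host response_time
| timechart span=1m avg(response_time) by host
| untable _time host resp
| appendpipe [ stats avg(resp) as resp by _time | eval host="Average" ]
| xyseries _time host resp
| trendline sma4(Average) as Average_trend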

What I'd do instead is go back BEFORE the timechart and duplicate each record with a host name of "Average". That way, the timechart will create a field that holds the average response time for all transactions across all the hosts.

source=my_perf AND (host=A OR host=B OR host=C OR host=D OR host=E)
| fields host response_time
| eval myfan=mvrange(0,2)
| mvexpand myfan
| eval host=if(myfan=0,host,"Average")
| timechart span=1m avg(response_time) by host
| trendline sma4(Average) as Average_trend

kmaron
Motivator

Are you getting an error message that you can share?
