Timewrap command grouping by field

n3w4z4
Explorer

Hello,

 

I've seen many others on this forum trying to achieve something similar to what I'm trying to do, but I haven't found an answer that completely satisfies me.

This is the use case:

I want to compare the number of requests received by our Web Proxy with the same period last week, and then filter out any increase lower than X percent.

 

This is how I've tried to implement it using the timewrap command, and it's pretty close to what I want to achieve. The only problem is that timewrap only seems to work correctly if I group by _time alone.

 

 

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time span=10m
| timewrap 1w | where _time >= relative_time(now(), "-60m")
| where (event_count_latest_week - event_count_1week_before) > 0
| where (((event_count_latest_week - event_count_1week_before)/event_count_latest_week)*100) >= 40

 

 

This gives me a result like this:

_time    event_count_1week_before_week    event_count_latest_week
XXXX     YYYY                             ZZZZ
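As an aside, the percentage in the search above is computed against the latest week's count. If the X percent threshold is meant to be relative to last week's baseline, the filter would divide by the previous week's count instead; a minimal sketch using the field names from that search:

| where event_count_1week_before > 0
| eval pct_increase=((event_count_latest_week - event_count_1week_before) / event_count_1week_before) * 100
| where pct_increase >= 40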

 

 

If I try to do something similar but group by the name of the web site being accessed in the tstats command, then the timewrap command no longer works for me. It outputs just the latest values for one of the web sites.

 

 

 

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timewrap 1w | where _time >= relative_time(now(), "-60m")
| where (event_count_latest_week - event_count_1week_before) > 0
| where (((event_count_latest_week - event_count_1week_before)/event_count_latest_week)*100) >= 40

 

 

 

That doesn't work. Do you know why that happens and how I can achieve what I want?

 

 

Many thanks.

 

Kind regards.

 


PickleRick
SplunkTrust

Timewrap works on the output of timechart, so you need output from timechart. To get that, use tstats with the prestats=t option.

 

| tstats prestats=t `summariesonly` count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timechart span=10m count as event_count by Web.site

 

n3w4z4
Explorer

Thanks a lot for your answer. You are right; however, I feel this doesn't scale well when there are many values, since it generates ("number of unique Web.site values" * 2) columns, which pretty much breaks the browser.

Additionally, now that I have columns with the pattern WebSiteExample1.event_count_1week_before, etc., how can I write an expression like the one I had, so that I only keep the values where the difference between this week and the previous one is more than X percent?

 

I don't see an easy way to compare chunks of X minutes of data against the same time range one week earlier. Timewrap seemed perfect for that use case, but it looks like it's designed only for graphical representation.
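For what it's worth, one way to line up each 10-minute chunk against the same chunk one week earlier without timewrap is to shift last week's timestamps forward by a week and aggregate the two windows together. A rough sketch, reusing the datamodel and field names from the searches above (the fixed 604800-second shift assumes no DST change between the two windows):

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" earliest=-8d latest=now by _time Web.site span=10m
| eval new_start=relative_time(now(), "-60m"), old_end=relative_time(now(), "-1w"), old_start=relative_time(old_end, "-60m")
| eval window=case(_time >= new_start, "new", _time >= old_start AND _time < old_end, "old")  ``` keep only the last hour and the matching hour one week earlier ```
| where isnotnull(window)
| eval aligned=if(window="old", _time + 604800, _time)  ``` shift last week's buckets forward so both windows share the same keys ```
| stats sum(eval(if(window="old", event_count, 0))) as old, sum(eval(if(window="new", event_count, 0))) as new by aligned Web.site
| where old > 0 AND ((new - old) / old) * 100 >= 40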

 


PickleRick
SplunkTrust

Well, that's how timechart works: if you split by a field, you get several separate time series (and timewrap of course multiplies that by cutting the time range into smaller chunks).

Actually, timewrap might still be what you want; you just need to post-process the results:

 

| tstats prestats=t `summariesonly` count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timechart span=10m count as event_count by Web.site
| foreach *_s0
   [ eval <<MATCHSTR>>_combined=mvappend(<<MATCHSTR>>_s0,<<MATCHSTR>>_s1) ]
| fields _time *_combined
| untable _time Web.series values
| eval old=mvindex(values,0), new=mvindex(values,1)
| fields - values
| where (old-new)/old>0.3

 

Something like that.

There might be a better way to do it but this should work.

And remember that with timechart you might want to tweak the limit and useother parameters.
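For example, to keep every site as its own series instead of the default top ten plus OTHER:

| timechart span=10m limit=0 useother=false count as event_count by Web.site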

EDIT: Hmm... there is something fishy about untable and multivalued fields. I'll have to investigate it further.

PickleRick
SplunkTrust

OK. We might need to be a bit more tricky with those multivalued fields.

 

 

| tstats prestats=t `summariesonly` count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timechart span=10m count as event_count by Web.site
| timewrap 1w series=short  ``` series=short names the wrapped columns <site>_s0, <site>_s1, ... ```
| foreach *_s0
   [ eval <<MATCHSTR>>_combined=<<MATCHSTR>>_s0."|".<<MATCHSTR>>_s1 ]  ``` glue each site's two weekly values into one string so untable keeps them together ```
| fields _time *_combined
| untable _time Web.series values
| eval values=split(values,"|")  ``` split back into a multivalue: index 0 and index 1 ```
| eval old=mvindex(values,0), new=mvindex(values,1)
| fields - values
| where (old-new)/old>0.3  ``` keep rows where the week-over-week change exceeds 30%; adjust to your threshold ```

 

 

n3w4z4
Explorer

Thanks for your answer. Running it exactly as you provided it (plus adding the timewrap I think you forgot), it doesn't produce any output; I removed the last where just to troubleshoot. I also tried replacing your s0 and s1 like this, but again the output is empty.

 

| tstats prestats=t `summariesonly` count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timechart span=10m count as event_count by Web.site useother=false limit=0
| timewrap 1w
| foreach *_
   [ eval <<MATCHSTR>>_combined=<<MATCHSTR>>_latest_week."|".<<MATCHSTR>>_1week_before_week ]
| fields _time *_combined
| untable _time Web.series values
| eval values=split(values,"|")
| eval old=mvindex(values,0), new=mvindex(values,1)
| fields - values

 

It looks like a complex workaround for something that sounds like a pretty standard use case to me.

Do you know any other simpler way of doing this?

 

Thanks again!

 


PickleRick
SplunkTrust

Right. I seem to have skipped the timewrap while copying from my test environment.

But you miscopied the foreach.

The timewrap creates multiple series based on the same name, with suffixes _s0, _s1 and so on (if there are more spans). That's what the foreach relies on. You can't just arbitrarily shorten the pattern to "*_".

 

n3w4z4
Explorer

Yes, you are right, but I had tried it correctly before that. What version of Splunk are you running? I'm on 9.1.1, and that's not how timewrap names the columns for me.

 

_time    event_count_1week_before    event_count_latest_week
XXXX     YYYY                        ZZZZ

 

 

That's how it does it. If I have a span of more than 2 weeks, then it creates another column ending in *_2weeks_before.

 

So I changed it to something like this, but the output is still empty.

 

| tstats prestats=t `summariesonly` count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timechart span=10m count as event_count by Web.site useother=false limit=5
| timewrap 1w
| foreach *_latest_week
   [ eval <<MATCHSTR>>_combined=<<MATCHSTR>>_latest_week."|".<<MATCHSTR>>_1week_before_week ]
| fields _time *_combined
| untable _time Web.series values
| eval values=split(values,"|")
| eval old=mvindex(values,0), new=mvindex(values,1)
| fields - values

 


PickleRick
SplunkTrust

Ahhhh. Right. You need to use series=short with timewrap to get the s0, s1 and so on.
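A quick way to see that naming on synthetic data, as a sketch with two invented sites and 10-minute events over two weeks:

| makeresults count=2016
| streamstats count as n
| eval _time=now() - n*600  ``` spread events back over two weeks in 10-minute steps ```
| eval site=if(n%2==0, "siteA", "siteB")
| timechart span=10m count by site
| timewrap 1w series=short  ``` columns come out as siteA_s0, siteA_s1, siteB_s0, siteB_s1 ```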

n3w4z4
Explorer

Correct, it works now. Can you please edit your answer? I'll mark it as the solution after that.

 

Thanks a lot!


PickleRick
SplunkTrust

Done 🙂
