Compare two Splunk alert searches with the same time span

lamnguyentt1
Explorer

Dear Professor,

I have two alert searches like this:

1. Search 1:

index="abc" sourcetype="abc" service.name=financing request.method="POST" request.uri="*/applications" response.status="200"
|timechart span=2m count as application_today
|eval mytime=strftime(_time,"%Y-%m-%dT%H:%M")
|eval yesterday_time=strftime(_time,"%H:%M")
|fields _time,yesterday_time,application_today

And here is the output:

[screenshot: 1.png]

2. Search 2:

index="xyz" sourcetype="xyz" "Application * sent to xyz success"
|timechart span=2m count as omni_today
|eval mytime=strftime(_time,"%Y-%m-%dT%H:%M")
|eval yesterday_time=strftime(_time,"%H:%M")
|fields _time,yesterday_time,omni_today

And here is the output:

[screenshot: 2.png]

3. I tried to combine the two searches like this and then calculate the spike:

index="abc" sourcetype="abc" service.name=financing request.method="POST" request.uri="*/applications" response.status="200"
|timechart span=2m count as app_today
|eval mytime=strftime(_time,"%Y-%m-%dT%H:%M")
|eval yesterday_time=strftime(_time,"%H:%M")
| append [search index="xyz" sourcetype="xyz" "Application * sent to xyz"
| timechart span=2m count as omni_today]
|fields _time,yesterday_time,app_today,omni_today
|eval spike=if(omni_today < app_today AND _time <= now() - 3*60 AND _time >= relative_time(now(),"@d") + 7.5*3600, 1, 0)

Here is the output:

[screenshot: 3.png]

But it shows two time spans (as in the image).

How can I combine the two searches into a single time span, like this?

[screenshot: 4.PNG]

 

Thank you for your help.

 

 


PickleRick
SplunkTrust

Well. You simply asked Splunk to append the results of one search to the results of another search, so that's what Splunk did: it took the rows of results from one search and "glued" them onto the end of the other.

You could further transform the combined results, but it's better to avoid the append in the first place (subsearches have their own limits and can trigger some tricky behaviour).

Since you're doing mostly the same thing with two sets of result data, you can just get all your events in one search and then calculate separate stats for each kind of event.

(index="abc" sourcetype="abc" service.name=financing request.method="POST" request.uri="*/applications" response.status="200") OR ( index="xyz" sourcetype="xyz" "Application * sent to xyz")
|timechart span=2m count(eval(sourcetype="abc")) as app_today count(eval(sourcetype="xyz")) as omni_today
|eval mytime=strftime(_time,"%Y-%m-%dT%H:%M")
|eval yesterday_time=strftime(_time,"%H:%M")
|fields _time,yesterday_time,app_today,omni_today
|eval spike=if(omni_today < app_today AND _time <= now() - 3*60 AND _time >= relative_time(now(),"@d") + 7.5*3600, 1, 0)

 



lamnguyentt1
Explorer

Thank you for your help.

With my approach, it's really easy: just change "append" to "appendcols".
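
For reference, here is roughly what that change looks like when applied to the combined search above. This is only a sketch: appendcols lines the subsearch results up with the outer results row by row, so it assumes both timecharts cover the same time range with the same 2-minute span.

index="abc" sourcetype="abc" service.name=financing request.method="POST" request.uri="*/applications" response.status="200"
|timechart span=2m count as app_today
|eval mytime=strftime(_time,"%Y-%m-%dT%H:%M")
|eval yesterday_time=strftime(_time,"%H:%M")
| appendcols [search index="xyz" sourcetype="xyz" "Application * sent to xyz"
| timechart span=2m count as omni_today]
|fields _time,yesterday_time,app_today,omni_today
|eval spike=if(omni_today < app_today AND _time <= now() - 3*60 AND _time >= relative_time(now(),"@d") + 7.5*3600, 1, 0)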
