Hi,
I have an urgent issue, please help.
I want to generate a scheduled alert every 30 minutes that reports the following:
count of logs ingested in the current 30 minutes, count of logs ingested in the previous 30 minutes, count of logs in a day, and count of logs in a week.
How can I proceed with this?
Kindly help.
This would probably best be done as part of a summary index, but you can get reasonable performance like this:
| tstats count WHERE index=* OR index=_* earliest=-7d BY _time span=30m
| multireport
[ where _time >= relative_time(now(), "-30m")
| stats sum(count) AS countLast30minutes ]
[ where _time >= relative_time(now(), "-60m") AND _time <= relative_time(now(), "-30m")
| stats sum(count) AS countPrev30minutes ]
[ where _time >= relative_time(now(), "@d")
| stats sum(count) AS countToday ]
[ stats sum(count) AS countLast7days ]
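To run it as a scheduled alert every 30 minutes, a rough savedsearches.conf sketch (the stanza name is a placeholder, the same settings can be made from the UI, and counttype = always simply fires on every run, so adjust it to your real alert condition):
[Ingest counts every 30 minutes]
enableSched = 1
cron_schedule = */30 * * * *
dispatch.earliest_time = -7d
dispatch.latest_time = now
counttype = always
search = <the tstats search above>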
Try something like this
index=foo sourcetype=bar earliest=-1w latest=now
| eval Last30Min=case(_time>=relative_time(now(),"-30m"),1,0)
| eval Previous30Min=case(_time<relative_time(now(),"-30m") AND _time>=relative_time(now(),"-60m"),1,0)
| eval Last1Day=case(_time>=relative_time(now(),"-1d"),1,0)
| eval Last1Week=case(_time>=relative_time(now(),"-1w"),1,0)
| stats sum(Last*) as Last* sum(Previous*) as Previous*
Hi somesoni,
the above search is giving an error: "Error in 'eval' command: The arguments to the 'case' function are invalid."
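I think case() expects condition/value pairs, so maybe it should use if() instead? Something like this (not sure if that's what you intended):
index=foo sourcetype=bar earliest=-1w latest=now
| eval Last30Min=if(_time>=relative_time(now(),"-30m"),1,0)
| eval Previous30Min=if(_time<relative_time(now(),"-30m") AND _time>=relative_time(now(),"-60m"),1,0)
| eval Last1Day=if(_time>=relative_time(now(),"-1d"),1,0)
| eval Last1Week=if(_time>=relative_time(now(),"-1w"),1,0)
| stats sum(Last*) as Last* sum(Previous*) as Previous*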
Hi ajitshukla61116,
let me understand:
do all the described counts have to be in the same search?
Why don't you create an alert for each search?
Also, is there a relation between the searches? e.g. must the count of logs ingested in the current 30 minutes and the count of logs ingested in the previous 30 minutes be related, or are they two separate results?
What's your alert condition?
Maybe for this reason it's better to have different alerts.
Anyway, if you can create a search for each result it's easy and I don't think you need any help (if I'm wrong, tell me!).
If instead you want to have all the results in one search, you could create something like this:
index=my_index earliest=-30m latest=now
| stats count
| eval Time_Period="Current 30 minutes"
| append [ search
index=my_index earliest=-60m latest=-30m
| stats count
| eval Time_Period="Previous 30 minutes"
| fields count Time_Period
]
| append [ search
index=my_index earliest=-d latest=now
| stats count
| eval Time_Period="Last Day"
| fields count Time_Period
]
| append [ search
index=my_index earliest=-w latest=now
| stats count
| eval Time_Period="Last Week"
| fields count Time_Period
]
| table Time_Period count
Bye.
Giuseppe
Hi gcusello,
Thanks for the reply.
I have configured a search in a similar manner to yours using subsearches and append, but I think this search format is taking too long, due to which my searches are getting skipped with the error
"The maximum number of concurrent running jobs for this historical scheduled search on this instance has been reached", concurrency_category="historical_scheduled".
How should I proceed?
Hi ajitshukla61116,
one could easily imagine that the search takes a long time because of the large time range; as for the "concurrent running jobs" error, it means the scheduled runs take longer than the 30-minute interval, so new runs start before the previous ones finish and get skipped.
The solution in my opinion should be built on summary indexes: schedule a search every 30 minutes and store its result in a summary index (with the collect command), then have your alert use the summary index with a search that, at that point, will be very simple and fast.
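For example, a rough sketch along these lines (the index name summary_ingest, the source name, and the field name event_count are only examples, and you will probably need to tune the time boundaries for your environment).
Scheduled search, run every 30 minutes, which stores one pre-aggregated record per run (with _time set to the end of its 30-minute window so the later comparisons line up):
index=my_index earliest=-30m@m latest=@m
| stats count AS event_count
| eval _time=relative_time(now(), "@m")
| collect index=summary_ingest source="ingest_counts_30m"
Alert search, also every 30 minutes, which only reads the small summary records:
index=summary_ingest source="ingest_counts_30m" earliest=-7d latest=now
| multireport
[ where _time >= relative_time(now(), "-30m") | stats sum(event_count) AS countLast30minutes ]
[ where _time >= relative_time(now(), "-60m") AND _time < relative_time(now(), "-30m") | stats sum(event_count) AS countPrev30minutes ]
[ where _time >= relative_time(now(), "-1d") | stats sum(event_count) AS countLastDay ]
[ stats sum(event_count) AS countLast7days ]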
Bye.
Giuseppe