Splunk Dev

How do you get the start and end time of an event count?

siddharthmis
Explorer

Hi,

I have an event such as "DB connection failed" in db_logs sourcetype.

I would like to get the start and end time between which the count of occurrences of "DB connection failed" exceeds 100.

So, something like-

Start Time       End Time         Count
9/10/2018 14:20  9/10/2018 14:58  159
9/10/2018 12:56  9/10/2018 12:58  101
9/10/2018 10:40  9/10/2018 11:10  111
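
For reference, the base search over that sourcetype might look something like this (the sourcetype name comes from the question; add your index as needed):

 sourcetype=db_logs "DB connection failed"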

DalJeanis
Legend

Assuming that you mean "count up all the minutes that have at least one connection failure, stop counting when a minute has no failures, and if a failing stretch totals more than 100, give me those groups of events..."

You could try something like this...

 your search that finds the failed events
| timechart span=1m count as reccount
| streamstats prev(reccount) as prevcount 
| where reccount>0
| streamstats count(eval(case(isnull(prevcount),1,prevcount==0,1))) as groupno
| stats min(_time) as mintime max(_time) as maxtime sum(reccount) as reccount by groupno 
| where reccount >=100

...and then format your times as desired.
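
For that formatting step, something along these lines could work (the field names mintime, maxtime, and reccount come from the search above; the strftime format string is just an example):

 | eval Start_Time=strftime(mintime,"%m/%d/%Y %H:%M")
 | eval End_Time=strftime(maxtime,"%m/%d/%Y %H:%M")
 | rename reccount as Count
 | table Start_Time End_Time Count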


siddharthmis
Explorer

Thanks, but prev(reccount) does not look like a valid streamstats function.


renjith_nair
Legend

@DalJeanis meant "last(reccount)". Replace prev with last
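
For the record, an untested sketch of the full search with that substitution (current=f makes last() read the previous row rather than the current one, and the group counter tests prevcount, the previous minute's count):

 your search that finds the failed events
 | timechart span=1m count as reccount
 | streamstats current=f window=1 last(reccount) as prevcount
 | where reccount>0
 | streamstats count(eval(case(isnull(prevcount),1,prevcount==0,1))) as groupno
 | stats min(_time) as mintime max(_time) as maxtime sum(reccount) as reccount by groupno
 | where reccount>=100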

Happy Splunking!

renjith_nair
Legend

@siddharthmis, how do you split the blocks of events that have these errors? For example, is it based on time, say 100 in 1 hour?

Happy Splunking!

siddharthmis
Explorer

I was using timechart, actually.
