
Grouping Events by Both Time and Customer

DGray
Engager

Hi all,

I want to alert when a customer's usage suddenly drops.

I tried breaking recent usage into two time periods:
- "new" events (the previous 10 minutes)
- "old" events (the 10 minutes before that)

If a customer has 100 more "old" events than "new" events, I want to raise an alert.
I have tried several approaches but found the problem unexpectedly tricky. Please help.

Note: all events have a "customer" field, which takes one of a couple hundred values. Ideally a single query would check all customers and return those that have problems.

sourcetype=web | eval kind = case(_time>now()-600, "new", _time>now()-1200, "old", true(), "out of scope") | stats count by customer, kind | .... something??

sourcetype=web | stats count by customer | eval new_event_count = [search sourcetype=web earliest=-10m | stats count | where customer=customer ??? | return $count] |

1 Solution

somesoni2
Revered Legend

Try like this

sourcetype=web earliest=-20m@m 
| eval period=if(_time>=relative_time(now(),"-10m@m"),"new","old") 
| chart count over customer by period
| where old-new>100
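
One possible edge case: chart only creates the "new" and "old" columns for periods that actually contain events, so if one column is missing entirely the where comparison evaluates to null and drops every customer. A minimal variant of the same search that fills any missing column with zero should guard against that:

sourcetype=web earliest=-20m@m
| eval period=if(_time>=relative_time(now(),"-10m@m"),"new","old")
| chart count over customer by period
| fillnull value=0 new old
| where old-new>100

Scheduled every 10 minutes and triggered when the number of results is greater than zero, this should cover all customers in one search.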



DGray
Engager

Thanks, this works great!
