As with most things in Splunk, there are a million ways to do this. Something basic like the following sums the data volume over a recent window (adjust earliest= to suit your time range) and flags when it drops below a fixed threshold. Note that SPL does not support # comments, so set threshold inline to your minimum expected volume and substitute your actual volume field (bytes_sent, for example):

index=<your_index> sourcetype=<your_sourcetype> source=<your_datasource> earliest=-15m
| stats sum(<your_volume_field>) as data_volume
| eval threshold=1000000
| eval is_reduced_volume=if(data_volume < threshold, "Yes", "No")

You can get a little deeper with machine learning via the Machine Learning Toolkit (MLTK):

index=<your_index> sourcetype=<your_sourcetype> source=<your_datasource> earliest=-15m
| timechart span=1m sum(<your_volume_field>) as data_volume
| fit DensityFunction data_volume threshold=0.01
| eval is_anomaly=if('IsOutlier(data_volume)'=1, "Yes", "No")

(MLTK has no MLTK_AnomalyScore algorithm; DensityFunction is its standard anomaly-detection algorithm. Its threshold= option is the outlier probability, 0.01 here, and it adds an IsOutlier(data_volume) field to the results. A search window longer than 15 minutes will give it a better baseline to fit.)

And Splunk Lantern may have something you can leverage as well: https://lantern.splunk.com/Splunk_Platform/Use_Cases/Use_Cases_Security/Forensics/Crea[…]work_activity/Hosts_logging_more_or_less_data_than_expected
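If you want to prototype the same idea outside Splunk before wiring up an alert, here is a minimal z-score sketch of flagging minutes whose volume falls far below the baseline. Everything here (the function name, the -2.0 cutoff, the sample series) is illustrative only, not any Splunk or MLTK API:

```python
# Hypothetical sketch: flag a per-minute volume series when a value
# sits far below the mean, analogous to the anomaly search above.
from statistics import mean, stdev

def flag_low_volume(volumes, z_cutoff=-2.0):
    """Return a 'Yes'/'No' flag per minute; a minute is anomalous
    when its z-score falls below z_cutoff (volume well under baseline)."""
    mu = mean(volumes)
    sigma = stdev(volumes)
    flags = []
    for v in volumes:
        z = (v - mu) / sigma if sigma else 0.0
        flags.append("Yes" if z < z_cutoff else "No")
    return flags

# 14 normal minutes around 1,000,000 bytes, then one sharp drop.
series = [1_000_000 + 5_000 * (i % 3) for i in range(14)] + [100_000]
print(flag_low_volume(series))
```

The eval in the SPL plays the same role as the z_cutoff comparison here: it turns a continuous score into a simple Yes/No field you can alert on.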