Splunk Search

Discrepancy in events count of metrics index

Poojitha
Communicator

Hi All, 

I need some help. I have created a saved search that writes data to a metrics index.

Timerange: -2m to -1m
Scheduled to run every 2 mins.

index=*test sourcetype=aws:test log_processed.dmc.tenantId!=ALL_TENANTS host IN ("test1", "test2", "test3", "test4", "test5") k8.pod_name="*testpod*"
| eval _raw=json_extract_exact(_raw,"log_processed.dmc") 
| spath 
| eval container_name='k8.container_name',
       container_image='k8.container_image',
       container_hash='k8.container_hash',
       namespace_name='k8.namespace_name',
       pod_name='k8.pod_name',
       pod_id='k8.pod_id',
       docker_id='k8.docker_id',
       MetricName='log_processed.dmc.metricName',
       MetricValue=tonumber('log_processed.dmc.value'),
       _time=strptime('log_processed.timestamp', "%Y-%m-%d %H:%M:%S.%3N"),
       metric_name:{MetricName}=MetricValue,
       tenantId='log_processed.dmc.tenantId'
| table _time host log_type  tenantId namespace_name container_name  container_image container_hash  pod_name  pod_id docker_id  metric_name:* 
| mcollect index=tmetrics

There is also a datamodel with different dataset based on different host filter. Example below :

index=*test sourcetype=aws:test log_processed.dmc.tenantId!=ALL_TENANTS host=test1 k8.pod_name="*testpod*" | eval _raw=json_extract_exact(_raw,"log_processed.mdc")


The issue is that the event count in the new metrics index does not match the count from the actual dataset / original query.

Based on how the saved search actually runs, have I written it properly? Sometimes data gets indexed late; keeping that in mind, and so as not to lose any data, I have assumed that _time is effectively the _indextime on which the scheduled search runs (if my understanding is right here).

Please help me. 

Regards,
PNV


ITWhisperer
SplunkTrust

You might also want to consider idempotency when updating metrics indexes. I did a BSides presentation back in '22 about making summary index reports idempotent to avoid duplicate entries, but you might be able to apply the basic technique to updating your metrics indexes. Essentially, when you potentially have previously uncounted events for a time-period you have already counted, you can re-count the events over the period and deduct the count you previously had, and only add the difference as a new record to the index. Hopefully, this video will help make sense of this.
Summary Index Idempotency - Chris Kaye - YouTube
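A minimal sketch of that delta approach for a 1-minute summary (the summary index name, source name, and split-by fields here are illustrative, not your actual config): re-count recent buckets, subtract what was already written, and collect only the difference:

index=*test sourcetype=aws:test earliest=-10m@m latest=@m
| bin _time span=1m
| stats count as new_count by _time host
| join type=left _time host
    [ search index=summary source=my_rollup earliest=-10m@m latest=@m
      | stats sum(count) as already_counted by _time host ]
| fillnull value=0 already_counted
| eval count = new_count - already_counted
| where count > 0
| fields _time host count
| collect index=summary source=my_rollup

Because each run re-examines a window wider than the schedule interval, late-indexed events get picked up on a later pass, and only the missing difference is added rather than a duplicate.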


ITWhisperer
SplunkTrust

If your time range is -2m to -1m and the search only runs every 2 minutes, you are missing half of the time periods.

Start time   Time covered    Time missed
00:10        00:08 - 00:09
                             00:09 - 00:10
00:12        00:10 - 00:11
                             00:11 - 00:12
00:14        00:12 - 00:13

livehybrid
SplunkTrust

Hi @Poojitha 

If you are running with a time range of -2m to -1m but only scheduled to run every 2 mins, then it's going to run something like this:

Exec Time: 12:10, earliest: 12:08, latest: 12:09

Exec Time: 12:12, earliest: 12:10, latest: 12:11

Exec Time: 12:14, earliest: 12:12, latest: 12:13

because you are looking back from -2m to -1m, each run leaves a one-minute gap. If this really is the way you want to do it, then you should use -2m as the earliest and now as the latest time.
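As a sketch, a gapless window could look like this in savedsearches.conf (the stanza name is made up; snapping both edges to the minute with @m makes consecutive runs tile exactly):

[my_metrics_rollup]
cron_schedule = */2 * * * *
dispatch.earliest_time = -2m@m
dispatch.latest_time = @m

With this, the 12:10 run covers 12:08-12:10 and the 12:12 run covers 12:10-12:12, so no minute is left out, and because the latest boundary is exclusive, boundary events are not double-counted.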

However, I am wondering if this is the best approach. It might be worth looking at using props/transforms: either cloning the sourcetype and using transforms to construct a metric event, or using the ingest-time log-to-metrics approach described at https://help.splunk.com/en/splunk-enterprise/get-data-in/metrics/9.3/convert-log-data-to-metrics/set...
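As a rough illustration of the log-to-metrics route (the transform stanza name is an assumption, and the relevant fields must already be extracted at index time for this to work):

# props.conf
[aws:test]
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_dmc_metrics

# transforms.conf
[metric-schema:extract_dmc_metrics]
METRIC-NAMES = _ALLNUMS_

This converts matching events into metric data points at ingest, so there is no scheduled search to run and no search-time gap to worry about.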

This would improve performance and latency because you won't need to run a search every 2 minutes. If you're able to provide a full event sample I'd be happy to help with this.

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
