Getting Data In

How can I determine my indexing volume by something other than host, source, or sourcetype?

thepocketwade
Path Finder

I'm trying to determine what percentage of my daily indexing volume is made up of a specific group of logs. For example, I want to know the volume of all the logs with EventCode = 4720.

I know how to find total log volume over a given time period, but I haven't been able to get these strategies to work with the EventCode specification. Can I make this work?

Paolo_Prigione
Builder

I don't think Splunk tracks custom indexing statistics like this by default, but you could use a workaround: aggregate the total length of the raw events that match the EventCode. That would yield a good approximation of the intended result.

Something like:

EventCode=4720 | eval size=len(_raw) | stats sum(size) as totSize
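
If you ultimately want the percentage of the total, a variation on the same trick could compare the subset against everything in one pass. This is just a sketch, assuming all the relevant events live in a single index, written here as the placeholder <your_index>:

index=<your_index> | eval size=len(_raw) | stats sum(size) as total_bytes, sum(eval(if(tonumber(EventCode)==4720, size, 0))) as ec4720_bytes | eval pct=round(100*ec4720_bytes/total_bytes, 2)

The eval inside sum() only credits bytes to ec4720_bytes for matching events, so a single search gives you both totals and the percentage.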

Since searches like this might take a long time over large time ranges, you could store the results in a summary index using a scheduled search (the example below runs hourly):

EventCode=4720 earliest=-1h@h latest=@h | eval size=len(_raw) | sistats sum(size) | collect index=<sumindex> marker="EventCode=4720"

Then retrieve the data with:

index=<sumindex> EventCode=4720 | stats sum(size)
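
If you'd rather see a daily trend than a single total, the hourly summaries can be regrouped by day. A sketch, assuming the collected rows keep an hourly _time (which collect normally assigns from the search time range):

index=<sumindex> EventCode=4720 | bin _time span=1d | stats sum(size) as daily_bytes by _time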

Lowell
Super Champion

Agreed, I think this is probably the only option. If you are going to sum up the bytes, I would suggest also storing a count of the events, to give you values more comparable to the built-in metrics. You could do something like:

| stats sum(size) as total_bytes, count as events | eval total_kb=round(total_bytes/1024,2)

From there you could also calculate average rates, since you know the time span.
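
Folding that into the scheduled search above might look like this (again just a sketch; sistats can carry multiple aggregates, and the kilobyte conversion can happen at retrieval time):

EventCode=4720 earliest=-1h@h latest=@h | eval size=len(_raw) | sistats sum(size), count | collect index=<sumindex> marker="EventCode=4720"

and the retrieval becomes:

index=<sumindex> EventCode=4720 | stats sum(size) as total_bytes, count as events | eval total_kb=round(total_bytes/1024,2)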
