Looking to measure heavy sources and track how much is getting indexed per day by source.
The main problem is that our Splunk admin team cannot give us access to the _internal index, so I cannot run the standard _internal metrics searches such as:
index=_internal sourcetype=splunkd source=*metrics.log* group=per_source_thruput
Curious how accurate measuring actual log sizes with Splunk search commands would be compared to the _internal index stats. We don't need 100% accurate results, just a ballpark estimate, e.g. that one source might be indexing 500-600 GB per day and another 1-1.5 TB per day.
Thinking of trying something like:
index=aws-index sourcetype=someSource
source="/some/source/file.log"
| eval raw_len=len(_raw)
| eval raw_len_kb = raw_len/1024
| eval raw_len_mb = raw_len/1024/1024
| eval raw_len_gb = raw_len/1024/1024/1024
| eval raw_len_tb = raw_len/1024/1024/1024/1024
| stats sum(raw_len_mb) as MB sum(raw_len_gb) as GB sum(raw_len_tb) as TB by source


That method is close enough, but it will be slow since you have to read every event to get its size. Note that len(_raw) counts characters rather than raw bytes, so it can undercount multi-byte data, but for a ballpark estimate it's fine.
To improve performance ever so slightly, add up the length of _raw and then convert to MB/GB/TB at the end.
index=aws-index sourcetype=someSource
source="/some/source/file.log"
| eval raw_len=len(_raw)
| stats sum(raw_len) as B by source
| eval MB = B/1024/1024, GB = B/1024/1024/1024, TB = B/1024/1024/1024/1024
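For the per-day tracking you mentioned, the same idea works with a daily bin added before the stats. A sketch, reusing the placeholder index/sourcetype/source from your question:
index=aws-index sourcetype=someSource
source="/some/source/file.log"
| bin _time span=1d
| eval raw_len=len(_raw)
| stats sum(raw_len) as B by _time, source
| eval GB = round(B/1024/1024/1024, 2)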
If this reply helps you, Karma would be appreciated.


Thanks, that will most likely help a bit!
Planning to run this a few times per day so we can populate the results into a .csv lookup table as well.
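Probably something along these lines, appending each run to the lookup with outputlookup (the lookup name daily_source_volume.csv is just a placeholder):
index=aws-index sourcetype=someSource
source="/some/source/file.log"
| eval raw_len=len(_raw)
| stats sum(raw_len) as B by source
| eval GB = round(B/1024/1024/1024, 2)
| eval snapshot=now()
| outputlookup append=true daily_source_volume.csv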

Why not ask your admin team to set up a summary index for the license usage logs and give you access to that summary index? That way you can get at the licensing data without having access to the whole _internal index. Something like this:
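A sketch of the scheduled search the admins would run; the summary index name license_summary and the daily time window are assumptions:
index=_internal source=*license_usage.log* type=Usage earliest=-1d@d latest=@d
| eval MB = round(b/1024/1024, 2)
| stats sum(MB) as MB by idx, st
| collect index=license_summary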
Trying to get our Splunk admin team to do anything here is like pulling teeth 🙂 but summary indexing might work, thanks for that. It will probably take them weeks to get to it, unfortunately.
