Splunk Search

How to get the sourcetype size in an index.

splunkatl
Path Finder

How do I find the size of a sourcetype in an index on a particular day of last month?
We need to know how much the data volume was reduced after we configured log filtering (i.e. data consumed on any day before 11/16 versus any day after 11/16).

I have checked the Splunk deployment monitor but did not see a search defined on sourcetype.

1 Solution

jbsplunk
Splunk Employee

You can find that search here:

http://wiki.splunk.com/Community:TroubleshootingIndexedDataVolume

Counting event sizes over a time range

Roughly, you can run a search that looks at all (or some) data over a range of index times (_indextime), counting up the size of the actual events.

For example, where the endpoints START_TIME and END_TIME are numbers of seconds since the start of the Unix epoch, the search would be:

_indextime>START_TIME _indextime<END_TIME | eval event_size=len(_raw) | stats sum(event_size)
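
To restrict this to a single day of indexing, plug in the epoch boundaries of that day. As a sketch, assuming the 11/16 in the question refers to 2011-11-16 and that UTC day boundaries are acceptable, the endpoints would be 1321401600 and 1321488000:

_indextime>1321401600 _indextime<1321488000 | eval event_size=len(_raw) | stats sum(event_size)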

This is a slow and expensive search, but when you really need to know, it can be valuable. It must be run across a time range that contains all possible events that were indexed in that window -- regardless of how regular their timestamps are. Typically this means it must be run over all time. The stats computation as well as the initial filters can of course be adjusted to look at the problem more closely.
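
Since the question is about per-sourcetype volume before and after the filtering change, one possible adjustment (a sketch, not taken from the linked wiki page) is to split the sum by sourcetype and by the day the events were indexed:

_indextime>START_TIME _indextime<END_TIME | eval event_size=len(_raw) | eval index_day=strftime(_indextime, "%Y-%m-%d") | stats sum(event_size) AS bytes by sourcetype, index_day

Comparing the daily totals for a sourcetype on days before and after 11/16 should show roughly how much volume the log filtering removed.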
