How does one go about calculating daily index volume by sourcetype?
I'm currently capturing all logged data and sending it to the main index, but I'm moving away from that method for performance/scalability reasons. We're going through a data classification exercise that will dictate our future-state indexes, and as part of that exercise we want to get a better idea of the breakdown of data sizes. Sizing by sourcetype seemed like a good starting point.
Thanks for any information!
index=_internal source=*metrics.log splunk_server="*" | eval MB=kb/1024 | search group="per_sourcetype_thruput" | timechart span=1d sum(MB) by series
What if I want to know this for a specific sourcetype in a specific index?
We have over 50 indexes, but for a couple of them, say index=a, index=b, and index=c, I want to know how much data my Windows System/Security event logs are generating on a daily basis (min, max, avg) over a 30-day range.
Can someone write up a quick search for this? Much appreciated.
Try the search below, and select "Last 30 days" in the time picker.
index=_internal source=*metrics.log group="per_sourcetype_thruput" [| metadata type=hosts index=a index=b index=c | table host | format] (series=wineventlog:system OR series=wineventlog:security OR series=wineventlog:application) | eval formatted_time=strftime(_indextime, "%x") | rename series as sourcetypes | chart sum(kb) over sourcetypes by formatted_time | sort - [ makeresults | addinfo | eval time="\"".strftime(info_max_time-1, "%x")."\"" | return $time] | foreach */* [eval "<<FIELD>>"=round('<<FIELD>>'/1024/1024,2)." GB"]
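An alternative worth trying: `license_usage.log` records `type=Usage` events that carry `idx` (index), `st` (sourcetype), and `b` (bytes) fields directly, so you can filter by index without the host subsearch. A sketch for the min/max/avg daily volume asked about above (run over the last 30 days; the index and sourcetype names are taken from the question, and you may need to adjust the `st` values' capitalization to match your environment):

```
index=_internal source=*license_usage.log type=Usage
    (idx=a OR idx=b OR idx=c)
    (st=wineventlog:system OR st=wineventlog:security OR st=wineventlog:application)
| bin _time span=1d
| stats sum(b) AS bytes by _time, st
| eval MB=round(bytes/1024/1024,2)
| stats min(MB) AS min_MB, max(MB) AS max_MB, avg(MB) AS avg_MB by st
```

Note that `license_usage.log` is written on the license master, so run this there or make sure its `_internal` logs are forwarded to where you search.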
In the Search app, hit the "Status" dropdown and choose "Index activity". I was able to get index volume by sourcetype or server, at any time span. It's a great built-in tool.
Running Splunk 4.3.2
This works with Splunk 6.2 for reporting GB by sourcetype...
index=_internal source=*metrics.log | eval GB=kb/(1024*1024) | search group="per_sourcetype_thruput" | timechart span=1d limit=20 sum(GB) by series
I am new to Splunk.
I installed the trial version, which can index 500 MB/day. My question is: how many events can a domain controller index within that limit? Please just give me an estimate.
Try this query; it will give you the size in GB for each day.
index=_internal source=*metrics.log series=wineventlog:security | eval formatted_time=strftime(_time, "%x") | rename series as sourcetypes | chart sum(kb) over sourcetypes by formatted_time | sort - [ makeresults | addinfo | eval time="\"".strftime(info_max_time-1, "%x")."\"" | return $time] | foreach */* [eval "<<FIELD>>"=round('<<FIELD>>'/1024/1024,2)." GB"]
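If you want to narrow this down to a single machine (e.g. to see how much of the 500 MB/day trial limit one domain controller consumes), metrics.log also has a `per_host_thruput` group where `series` is the host name. A sketch, with `MY_DC` as a placeholder for your domain controller's hostname:

```
index=_internal source=*metrics.log group="per_host_thruput" series=MY_DC
| timechart span=1d sum(kb) AS daily_kb
| eval daily_MB=round(daily_kb/1024,2)
| fields _time daily_MB
```

Compare `daily_MB` against the 500 MB/day license limit to see how much headroom you have.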