Splunk Search

Query HDD space, index and data

rene847
Path Finder

Hi,
Despite all my attempts, I have not been able to find a good query, so I need some help please!

Can anyone tell me how I can do the following?
I would like a query that shows the disk space used for each month from December 1, 2014 until today (to see the progression).
Splunk sends me a daily alert email about the status of our indexes (to view and control our license usage). I tried the same alert search with the time range "-198d@d to now" and it doesn't work: I only see one month of data and I don't know why.

Here is my normal query:
index="_internal" source="*metrics.log" per_index_thruput | eval GB=kb/(1024*1024) | stats sum(GB) as total by series date_mday | sort total | fields + date_mday,series,total | reverse

I'm looking for the same query (or similar) for:
- all space
- all the indexes
- all data entries
==> the number of entries per month from December 1, 2014 until today (I sketched an attempt below).
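
For the per-month breakdown, I imagine something along these lines, but this is only a rough attempt and I cannot verify it because of the missing data:

index="_internal" source="*metrics.log" per_index_thruput earliest="12/01/2014:00:00:00" | eval GB=kb/(1024*1024) | bin _time span=1mon | stats sum(GB) as total by _time, series | sort - total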

Is this possible? Do you have any ideas for a query?

Thank you
Regards

1 Solution

MuS
Legend

Hi rene847,

all Splunk internal indexes like _internal and _introspection have a default retention of 30 days. You can check it like this:

splunk cmd btool indexes list --debug _internal | grep frozenTimePeriodInSecs
/opt/splunk/etc/system/default/indexes.conf frozenTimePeriodInSecs = 2592000

Change it in indexes.conf, but be aware of the increased disk space requirements. You may also need to adjust the maxTotalDataSizeMB option, which defaults to 500000.
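
For example, a local override could look something like this (the 180-day retention and the size limit are only illustrations, size them to your environment):

# $SPLUNK_HOME/etc/system/local/indexes.conf
[_internal]
# keep events for roughly 180 days instead of the default 30 (2592000 seconds)
frozenTimePeriodInSecs = 15552000
# allow the index to grow beyond the 500000 MB default
maxTotalDataSizeMB = 750000

Remember that indexes.conf changes require a Splunk restart to take effect.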

Hope that helps ...

cheers, MuS

rene847
Path Finder

Thank you Martin_Mueller and MuS for your answers, I appreciate your support.

martin_mueller
SplunkTrust

As per default settings, Splunk only retains thirty days of data in _internal. You have two options to change that for the future.

The easy way out: Increase the retention time for the index. You'll need a lot more disk space, but it's a simple change and you'll have all the data available.

The efficient way: Set up summary indexing that, for example, runs daily, grabs yesterday's data, calculates a daily report, and stores it in a summary index with a long retention time. When you want to run an overall report, you simply run it against the pre-aggregated data in the summary index, which gives a fast result with minimal space used. It's a little more effort than the easy way out, and you can't add data to your summary retroactively, just like you can't retroactively increase the retention time and regain old data that has already been deleted.
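
As a rough sketch (not tested here), the nightly summary-filling search could look something like the following, scheduled to run each morning over yesterday. The summary index name license_summary is just a placeholder and has to be created first:

index="_internal" source="*metrics.log" per_index_thruput earliest=-1d@d latest=@d | eval GB=kb/(1024*1024) | bin _time span=1d | stats sum(GB) as daily_GB by _time, series | collect index=license_summary

The monthly report then only needs to search index=license_summary and sum daily_GB per month, no matter how short the _internal retention is.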

martin_mueller
SplunkTrust

Your mornings are all wrong 😛

MuS
Legend

No, not at all - I'm in a time machine. Currently I'm writing from your future 🙂

MuS
Legend

.... I'm too slow in the morning 🙂

martin_mueller
SplunkTrust

Did you take a look at the reports included in the Distributed Management Console?

rene847
Path Finder

Yes, that seemed like a possible approach and I tried it.
However, I cannot use a time range greater than 30 days, otherwise I get no data. Do you have an idea?
