Getting Data In

Is there an option to tell Splunk to use round-robin buffers for all incoming log files (e.g., only store data for a certain time or up to a certain size)?

hunterbr
Engager

Hi,

I am a Splunk newbie. I have set up Splunk in a lab environment with limited resources on an ESXi server (max. 100 GB virtual HD). I am wondering if there is an option (if not the default) to tell Splunk to use round-robin buffers for all incoming data (syslog and Sourcefire eStreamer data), e.g., store data for only 30 days and overwrite old data once the buffer size is reached. Is there an option to do that, is it recommended, and does it have any non-obvious side effects? The environment is mainly built to play with Splunk and start learning it. The goal is to make sure the VM does not crash, not to keep all logging.

tia,
Holger

0 Karma
1 Solution

acharlieh
Influencer

After ingesting your logs, Splunk stores these logs in indexes. The default index is main, but people often create their own, since it's at this level that you can define access controls and (applicable here) a retention policy. You might be interested in this doc: http://docs.splunk.com/Documentation/Splunk/6.2.2/Indexer/Setaretirementandarchivingpolicy

For each index, you can tell Splunk to freeze data that is too old or freeze the oldest data when the index contains too much (by default, freezing means deleting, though it can be configured to mean archiving to long-term storage). The defaults are very large (7 years / 500 GB, if I remember correctly), but these are likely the parameters you're looking to set in indexes.conf.
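For illustration, a minimal sketch of what that could look like in indexes.conf, using a hypothetical index name `lab` and assuming the standard `$SPLUNK_DB` volume paths; the exact paths and size limit are assumptions you would adapt to your environment:

```
# indexes.conf -- retention sketch for a hypothetical "lab" index
[lab]
homePath   = $SPLUNK_DB/lab/db
coldPath   = $SPLUNK_DB/lab/colddb
thawedPath = $SPLUNK_DB/lab/thaweddb

# Freeze (delete, unless an archive destination is configured)
# buckets whose newest event is older than 30 days:
# 30 days * 24 h * 60 min * 60 s = 2592000 seconds
frozenTimePeriodInSecs = 2592000

# Also cap the total size of the index at 20 GB; when the cap is
# reached, the oldest buckets are frozen first
maxTotalDataSizeMB = 20480
```

Whichever limit is hit first (age or size) triggers freezing, so together these behave much like the round-robin buffer described in the question. A restart (or a rolling restart on an indexer cluster) is needed for changes to take effect.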


hunterbr
Engager

thx, that was what I was looking for!

0 Karma