Our requirements are to keep readily searchable data for 12 months and 'cold store' data for an additional 18 months (30 months total). Ingest Actions seems like the obvious choice, since it can write to an S3 bucket and compress the data in a format that is easily re-ingested or passed to a third party if needed. However, Ingest Actions rulesets appear to apply only to a single sourcetype at a time. Given that there may be a hundred or more sourcetypes, this is onerous. Is there a way to accomplish this without creating a ruleset for every sourcetype?
Use a coldToFrozenScript for each index you wish to archive to S3. Splunk will invoke the script for each bucket that reaches the end of its searchable lifetime. Details are in the Admin Manual.
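A minimal sketch of that approach, assuming the AWS CLI is installed and credentialed on the indexer; the index name, S3 bucket, and script path here are placeholders, not anything from Splunk's docs:

```
# indexes.conf -- set on each index you want archived
[my_index]
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/coldToFrozenS3.py"
```

```python
# coldToFrozenS3.py -- sketch only; Splunk invokes this with the frozen
# bucket's directory path as the single argument. S3 destination and the
# use of the AWS CLI (Splunk's bundled Python has no boto3) are assumptions.
import os
import subprocess
import sys
import tarfile

S3_DEST = "s3://my-archive-bucket/splunk-frozen"  # hypothetical bucket/prefix

def main():
    if len(sys.argv) != 2:
        sys.exit("usage: coldToFrozenS3.py <bucket_dir>")
    bucket_dir = sys.argv[1]  # path Splunk passes for the bucket being frozen
    if not os.path.isdir(bucket_dir):
        sys.exit("not a directory: " + bucket_dir)

    # Compress the whole bucket directory into a single tarball.
    tar_path = bucket_dir.rstrip("/") + ".tar.gz"
    with tarfile.open(tar_path, "w:gz") as tar:
        tar.add(bucket_dir, arcname=os.path.basename(bucket_dir))

    # Upload via the AWS CLI, then clean up the local tarball.
    subprocess.check_call(["aws", "s3", "cp", tar_path, S3_DEST + "/"])
    os.remove(tar_path)
    # Exiting 0 tells Splunk the archive succeeded, so it deletes the
    # local bucket; a nonzero exit makes Splunk retry later.

if __name__ == "__main__":
    main()
```

Note that this archives the raw bucket format (journal plus index files), which re-ingests cleanly into Splunk but is not as third-party-friendly as the compressed JSON that Ingest Actions writes.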
That script, as I understand it, moves frozen buckets to an S3 storage location. What I want, though, is to use Ingest Actions (Settings > Data > Ingest Actions) to write all raw events in compressed JSON format. Ingest Actions is configured per sourcetype, and with nearly 100 or more sourcetypes, manually creating a ruleset for each is onerous.
Ingest Actions uses RULESET settings in props.conf to define how data is routed to the Remote File System (RFS) destination.
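For reference, the generated configuration looks roughly like the following; the destination and stanza names are placeholders, and the exact settings vary by Splunk version, so treat this as a sketch rather than copy-paste config:

```
# outputs.conf -- the RFS destination Ingest Actions writes to
[rfs:my_s3_archive]
path = s3://my-archive-bucket/ingest-actions

# props.conf -- one RULESET per sourcetype, which is the scaling problem here
[my_sourcetype]
RULESET-archive_to_s3 = <rules generated by the Ingest Actions UI>
```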
Hi
keep in mind that with IA you can use S3 only if you are running that instance on AWS!
Splunk have "secret" wildcard for sourcetype in props.conf. I don't know if it works also for IA RULESET or not and can you use it via GUI or not. You can read more from https://www.splunk.com/en_us/blog/tips-and-tricks/quick-tip-wildcard-sourcetypes-in-props-conf.html. BUT remember that this is not officially supported and it can removed anytime!!
For that reason I too prefer @richgalloway's proposal to use a coldToFrozen script.
r. Ismo