Deployment Architecture

syslog - main db - archive on daily basis

hellou
New Member

Greetings,
My Splunk installation is configured simply to collect syslog messages (udp/514), nothing fancy. I would like to create a copy of every event at a 24-hour interval. How do I do that?

The closest I can figure out to accomplish this is to mark the data as "cold":

[main]
coldPath = /opt/splunk-archive
frozenTimePeriodInSecs = 86400  

(By the way, this isn't working. After 24 hours I don't see my data... and yes, I restarted the service.)

Is there a better way to do this? I am not too comfortable freezing the data like this, but I will if I cannot figure out a better way... which would be to simply look at all the events from the last 24 hours and create a zip file of the data (perhaps via a script).

What is the optimal way to do this?
Thanks in advance,
~Jaga

dwaddle
SplunkTrust

I'm not sure that's what you want to do. frozenTimePeriodInSecs governs when data is moved to FROZEN, not cold. Unless you have also configured a coldToFrozenScript, this means you've told Splunk to DELETE any index bucket whose newest event is more than 24 hours old.
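For illustration, if you really did want to archive-on-freeze, the stanza would need something like this (the script path is hypothetical; Splunk passes each frozen bucket's directory to the script before removing it):

[main]
frozenTimePeriodInSecs = 86400
# hypothetical archive script -- receives the frozen bucket directory as its argument
coldToFrozenScript = "/opt/splunk-scripts/archive_bucket.sh"

Even then, a 24-hour retention window is very aggressive.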

If your goal is to keep the data both inside and outside of Splunk, you might be better off letting rsyslog or syslog-ng listen on udp/514 and then having Splunk read their flat files (which you then keep).
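A minimal sketch with rsyslog, assuming its classic configuration syntax (the log directory and sourcetype are placeholders):

# /etc/rsyslog.conf fragment: listen on udp/514, write one file per day
$ModLoad imudp
$UDPServerRun 514
$template DailyFile,"/var/log/remote/syslog-%$YEAR%-%$MONTH%-%$DAY%.log"
*.* ?DailyFile

# $SPLUNK_HOME/etc/system/local/inputs.conf: have Splunk tail those files
[monitor:///var/log/remote]
sourcetype = syslog
disabled = false

Archiving then just means compressing yesterday's flat file; Splunk indexes the data as it arrives.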

dwaddle
SplunkTrust

Excellent. Be sure to click the "accept answer" checkbox so it'll show as answered. Also, you might find your search works a little more easily if you use "earliest=-1d@d latest=@d" -- this is a relative indicator for "yesterday," so you don't have to work out the dates yourself.
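For example (credentials and output path are placeholders):

./splunk search 'earliest=-1d@d latest=@d' -maxout 0 -auth $username:$password > /some/location/yesterday.txt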


hellou
New Member

Thank you for the response! I see your point and will make the adjustment so my Splunk data isn't deleted prematurely.

In the end my answer was to write a script. My script looks at all the events within a day and pipes them to a file. I then compress that file, move it off the server, and archive it.

The script basically runs this command:
./splunk search 'earliest=7/25/2011 latest=7/26/2011' -maxout 0 -auth $username:$password > /some/location

The trickiest part was figuring out the "-maxout" option, because without it the search would only return 10,000 events.

I tried to do this with Python but could not figure out the "-maxout" equivalent (I think it's maxresults=0, but that didn't work). I also needed to figure out the "earliest" and "latest" equivalents to establish the correct range.

In any case, the command line above works fine, and I'll revisit this and redo it in Python.
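For anyone else who lands here, a rough sketch of the whole job using dwaddle's relative time modifiers (paths, credentials, and the archive host are placeholders):

#!/bin/sh
# Daily export-and-archive job (sketch); run from $SPLUNK_HOME/bin via cron.
OUT=/some/location/syslog-$(date +%Y-%m-%d).txt

# Export all of yesterday's events; -maxout 0 lifts the result cap.
./splunk search 'earliest=-1d@d latest=@d' -maxout 0 \
    -auth "$username:$password" > "$OUT"

# Compress, move the file off the server, then remove the local copy.
gzip "$OUT"
scp "$OUT.gz" archive-host:/archive/syslog/
rm -f "$OUT.gz"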

thanks again,
~Jaga
