Deployment Architecture

Can I schedule rolling warmdb to colddb?

richnavis
Contributor

Recently, I discovered that although the Splunk documentation indicates that colddb can be on slower storage, doing so has a performance impact on normal indexing because of the need to roll data from warm to cold. In my environment, I was able to reduce index blocking by putting colddbs on faster disk. My question, then, is whether rolling from warm to cold can be done on a schedule, say nightly, instead of dynamically. Ideally, instead of constantly rolling data to cold, we would roll the oldest full day's data to cold, in a first-in/first-out manner.

Possible? Anyone doing this?
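
For context, the hot/warm vs. cold split the question refers to is configured per index in indexes.conf; a minimal sketch, with the index name and storage paths as placeholders only:

    # indexes.conf -- hypothetical storage layout
    [my_index]
    homePath   = /fast_disk/splunk/my_index/db         # hot and warm buckets
    coldPath   = /slow_disk/splunk/my_index/colddb     # cold buckets
    thawedPath = /slow_disk/splunk/my_index/thaweddb   # restored archive buckets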


kristian_kolb
Ultra Champion

Well, you could do it like this (and I'm not saying that this is best practice, or even a good idea at all):

  • Set your bucket size to something so large that the hot buckets don't roll to warm during the daytime (see the sketch after this post).

  • Have a cronjob for restarting splunkd at midnight (or whenever it suits you).

  • The restart turns the hot buckets into warm, and the oldest warm buckets get rolled to cold.

Not sure this is helpful at all,

/Kristian
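
A rough sketch of those two pieces, with the index name, size, and install path as placeholders only (maxDataSize is in MB; the auto_high_volume default corresponds to roughly 10 GB):

    # indexes.conf -- oversize the hot buckets so they don't roll on size during the day
    [my_index]
    maxDataSize = 100000        # MB; placeholder value, well above the auto_high_volume default

    # crontab -- restart splunkd at midnight to force hot buckets to roll to warm
    0 0 * * * /opt/splunk/bin/splunk restart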


kristian_kolb
Ultra Champion

Not really that I know of, apart from the backup issue. There is probably a reason why auto_high_volume is set to 10 GB and not more, though it might not be purely related to performance degradation with larger buckets. Then again, I'm just guessing here.

/k


richnavis
Contributor

Interesting idea... any downside to having large HOT buckets that you know of?


araitz
Splunk Employee

There is no supported way of doing this, but you could use a cron script and Splunk's REST API to trigger a bucket roll. Some customers do this so that their buckets are, for the most part, 24 hours each, which can make backup and restore easier.

I wouldn't recommend doing this, though - the small gain in performance, if any, will not outweigh the risk and management complexity involved in orchestrating this process.
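
For anyone who does want to experiment with this, here is a minimal sketch of the cron-driven approach in Python; the host, credentials, and index list are placeholders, and the roll-hot-buckets endpoint should be verified against the REST API reference for your Splunk version:

    # roll_hot_buckets.py -- intended to be run nightly from cron
    import requests

    SPLUNKD = "https://localhost:8089"    # splunkd management port (placeholder)
    AUTH = ("admin", "changeme")          # placeholder credentials
    INDEXES = ["main"]                    # indexes whose hot buckets should roll

    for idx in INDEXES:
        # POST to the per-index roll-hot-buckets endpoint to force hot -> warm
        resp = requests.post(
            f"{SPLUNKD}/services/data/indexes/{idx}/roll-hot-buckets",
            auth=AUTH,
            verify=False,  # splunkd commonly uses a self-signed certificate
        )
        resp.raise_for_status()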


kristian_kolb
Ultra Champion

Agree with you there 🙂

/k


araitz
Splunk Employee

Moving colddb to local disk seems like the solution to the performance issues, not orchestrating bucket rolls 🙂


richnavis
Contributor

We did find that there is quite a lot of blocking in our current setup (around 1,500 blocks per day)... moving colddb local improved this to just 5-10 blocks per day... so there IS quite a gain in performance to be had by addressing this problem.
