We have a Splunk Enterprise installation where everything runs on the same server/install (search head, indexer, etc.).
At the moment we have a script that shuts down the Splunk services, zips the whole /opt/splunk/ folder, and copies it to a NAS.
The problem is that this takes about 1.5 hours, and during that time Splunk is unreachable (since the service is shut down).
Would it be possible to do this "on the fly" instead, i.e. zip the entire folder while the service is still running?
My thinking is that this won't be ideal, since bucket files will still be "open" and may change mid-backup.
What's your take on this? Is there a better solution?
Splunk documents this procedure. See https://docs.splunk.com/Documentation/Splunk/9.1.0/Indexer/Backupindexeddata In a nutshell: hot buckets are rolled to warm, then all data (except any new hot buckets) is backed up while Splunk remains up. Yes, data indexed after the roll is missed by that backup run, but it will be picked up the next time.
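To make that concrete, here is a minimal sketch of the "back up everything except hot buckets" step. It assumes default bucket naming (hot buckets live in directories named `hot_*`, warm/cold in `db_*`/`colddb`), and the paths and the NAS mount point are placeholders for your environment; in production you would more likely use `rsync --exclude 'hot_*'`, this just shows the idea with portable tools:

```shell
#!/bin/sh
set -eu

# Copy every file under SRC to DEST, skipping hot bucket directories
# (named hot_*). Warm/cold buckets are safe to copy while Splunk runs,
# because Splunk no longer writes into them.
backup_warm_cold() {
    src="$1"
    dest="$2"
    mkdir -p "$dest"
    # -prune stops find from descending into hot_* directories.
    find "$src" -type d -name 'hot_*' -prune -o -type f -print |
    while read -r f; do
        rel="${f#"$src"/}"
        mkdir -p "$dest/$(dirname "$rel")"
        cp -p "$f" "$dest/$rel"
    done
}

# Example invocation (paths are assumptions for a default install):
#   Optionally roll hot buckets to warm first via the REST API, e.g.
#   /opt/splunk/bin/splunk _internal call \
#       /data/indexes/main/roll-hot-buckets -method POST
#   backup_warm_cold /opt/splunk/var/lib/splunk /mnt/nas/splunk-backup
```

Splunk stays up the whole time; only the freshly rolled hot buckets are missing from this run, and they get caught on the next one.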
There's also a good discussion of the topic at https://community.splunk.com/t5/Deployment-Architecture/How-to-back-up-hot-buckets/m-p/104780