There has been previous discussion about archiving to S3, but that discussion mostly involves Hunk, and I need a solution without Hunk.
Regarding the path setting in indexes.conf:
coldToFrozenDir = path/to/frozen/archive
As I understand it, we cannot specify an AWS S3 bucket path directly here. Hence, a Python script is provided in the documentation:
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/myColdToFrozen.py"
My question: has anyone succeeded in archiving to S3 or Glacier by editing this script?
If yes, how do you monitor the archival size/trend to the AWS S3 bucket (per 'Index archive bucket')?
Thanks in advance.
This was finally achieved by editing indexes.conf to specify a custom script, plus the necessary edits to the Python script:
coldToFrozenScript = "/opt/splunk/bin/python" "/opt/SPLUNK_ARCHIVE/coldtofrozens3.py"
The Archival to AWS S3 is successful. Thanks.
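For anyone looking to do the same, here is a minimal sketch of what such a script might look like. This is illustrative, not the actual coldtofrozens3.py above: the S3 destination is a placeholder, and the "keep only the rawdata journal" step mirrors the stock coldToFrozenExample.py shipped with Splunk; the upload is assumed to go through the AWS CLI.

```python
#!/usr/bin/env python
"""Hypothetical coldToFrozen-to-S3 sketch. Splunk invokes the script
with the frozen bucket's directory path as the first argument."""
import os
import shutil
import subprocess
import sys

S3_DEST = "s3://my-archive-bucket/splunk-frozen/"  # placeholder bucket path

def strip_to_rawdata(bucket_dir):
    """Delete everything except the rawdata directory, mirroring the
    stock coldToFrozenExample.py behaviour of keeping only the journal."""
    for name in os.listdir(bucket_dir):
        full = os.path.join(bucket_dir, name)
        if name == "rawdata":
            continue
        if os.path.isfile(full):
            os.remove(full)
        else:
            shutil.rmtree(full)

def archive_bucket(bucket_dir):
    """Strip the bucket to its rawdata, then push it to S3 via the AWS CLI."""
    strip_to_rawdata(bucket_dir)
    dest = S3_DEST + os.path.basename(bucket_dir)
    subprocess.check_call(["aws", "s3", "cp", bucket_dir, dest, "--recursive"])

if __name__ == "__main__" and len(sys.argv) == 2:
    archive_bucket(sys.argv[1])
```

Splunk only considers the bucket frozen once the script exits 0, so any upload failure (non-zero exit from `aws`) makes `check_call` raise and Splunk retries the bucket later rather than deleting it.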
When I manually run coldtofrozens3.py, it works. But when Splunk runs it, it doesn't copy the files to S3. I figured out that Splunk has some issues executing the AWS CLI:
File "/usr/local/bin/aws", line 19, in <module>
ImportError: No module named awscli.clidriver
By any chance, do you know how this can be resolved? The AWS CLI was installed as root, but Splunk runs as the user "splunk".
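That ImportError typically happens because Splunk exports its own PYTHONPATH and LD_LIBRARY_PATH (pointing at its bundled Python), so the system `aws` entry point resolves imports against Splunk's libraries and can't find awscli. A common workaround is to scrub those variables from the environment before invoking the CLI from the script. A sketch, with placeholder paths:

```python
"""Invoke the AWS CLI from a Splunk-launched script with a cleaned
environment, so the system Python (not Splunk's) resolves awscli."""
import os
import subprocess

def clean_env():
    """Copy the current environment minus Splunk's Python-related vars."""
    env = dict(os.environ)
    for var in ("PYTHONPATH", "LD_LIBRARY_PATH", "PYTHONHOME"):
        env.pop(var, None)
    return env

def s3_copy(src, dest):
    """Run `aws s3 cp` outside Splunk's Python environment."""
    subprocess.check_call(["aws", "s3", "cp", src, dest, "--recursive"],
                          env=clean_env())
```

Separately, since the CLI was installed and configured as root, make sure the "splunk" user can actually reach credentials: config under /root/.aws is not readable by splunk, so the splunk user needs its own ~/.aws/credentials (or an instance profile).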
Take a look at the indexes.conf documentation for Splunk 7.0. There's a new feature (unsupported; hopefully official in 7.1?) using remotePath and storageType (see the very end of the spec file for an example). It automatically handles S3, and caches data back locally for searching.
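From memory, the example at the end of the 7.0 indexes.conf spec looks roughly like the following (bucket name and volume name here are placeholders; check the shipped spec file for the exact syntax):

```ini
[default]
remotePath = volume:remote_store/$_index_name

[volume:remote_store]
storageType = remote
path = s3://my-remote-bucket/some/path
```

With this in place, warm buckets are uploaded to the remote store and fetched back into a local cache on demand at search time, so a separate coldToFrozenScript is no longer needed just to get data into S3.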