I have a Splunk forwarder on a Linux WebSphere machine. An archival script triggers daily at a designated time; it rotates the Splunk log and archives it (.gz). Since I have set up real-time forwarding, the Splunk process is reading the log file continuously, so when the archival script fires and tries to gzip the file, it creates a race condition that causes the Splunk process to stop abruptly. I then need to start the Splunk process manually.
Please find the log snippet below:
06-18-2015 01:00:55.194 -0700 INFO WatchedFile - Will begin reading at offset=35869164 for file='/logs/spswbsvc/spssplunklog.log.201506180100'.
06-18-2015 01:00:56.154 -0700 INFO WatchedFile - Logfile truncated while open, original pathname file='/logs/spswbsvc/spssplunklog.log', will begin reading from start.
06-18-2015 01:00:57.172 -0700 WARN FileClassifierManager - Unable to open '/logs/spswbsvc/spssplunklog.log.201506180100'.
06-18-2015 01:00:57.172 -0700 WARN FileClassifierManager - The file '/logs/spswbsvc/spssplunklog.log.201506180100' is invalid. Reason: cannot_read
06-18-2015 01:00:57.192 -0700 INFO TailingProcessor - Ignoring file '/logs/spswbsvc/spssplunklog.log.201506180100' due to: cannot_read
06-18-2015 01:00:57.193 -0700 ERROR WatchedFile - About to assert due to: destroying state while still cached: state=0x0x7f9b71f4d0c0 wtf=0x0x7f9b71c7fc00 off=0 initcrc=0xb8098a8b758746ea scrc=0x0 fallbackcrc=0x0 last_eof_time=1434614455 reschedule_target=0 is_cached=343536 fd_valid=true exists=true last_char_newline=true on_block_boundary=true only_notified_once=false was_replaced=true eof_seconds=3 unowned=false always_read=false was_too_new=false is_batch=true name="/logs/spswbsvc/spssplunklog.log.201506180100"
Is there any solution whereby we can have Splunk keep reading the log file while it is being gzipped?
Can you change the archival script to put the zipped files in a directory Splunk is not monitoring? Then Splunk won't try to read them, which you wouldn't want anyway, since that data should already be indexed.
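For example, the archival script could use a copy-then-truncate rotation and gzip the copy in a separate, unmonitored directory. This is only a minimal sketch: the `rotate_log` function name and both paths are hypothetical and would need to match your environment, and note that any events written between the `cp` and the truncate can be lost, which is the usual trade-off of copy-truncate rotation.

```shell
#!/bin/sh
# Sketch of a copy-truncate rotation (illustrative, not the original script).
# rotate_log LIVE_LOG ARCHIVE_DIR
rotate_log() {
    log="$1"
    archive_dir="$2"          # choose a directory Splunk is NOT monitoring
    stamp="$(date +%Y%m%d%H%M%S)"

    mkdir -p "$archive_dir"

    # Copy the live file aside, then truncate it in place. The tailing
    # reader keeps the same open file descriptor and simply sees the
    # truncation, instead of losing the file to a rename + gzip mid-read.
    cp "$log" "$archive_dir/$(basename "$log").$stamp"
    : > "$log"

    # Compress the copy outside the monitored directory, so the forwarder
    # never tries to open the archive.
    gzip -f "$archive_dir/$(basename "$log").$stamp"
}
```

Usage would be something like `rotate_log /logs/spswbsvc/spssplunklog.log /logs/archive` from cron, with `/logs/archive` excluded from (or simply absent from) the monitored paths in inputs.conf.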
That's right, but it will be difficult to move the zipped file to a different directory, since we don't want to miss any data. It might also mess up the inputs.conf file we have on the forwarder.