I have a folder of .evtx files from another machine that I need to get forwarded and indexed into Splunk. The logs were created on a Windows 7 machine, and the machine I am importing from now is a Windows 10 machine. The original machine is currently running Splunk forwarder 6.6.5, though while trying to get this import to work I have tried everything from 6.4.10 (the last version that explicitly claimed to support Windows 7) through 7.0.3 (which is what the Windows 10 machine is running). I put the files in a folder called Reimport and added a monitor stanza to my inputs.conf file. The problem is that when Splunk tries to process them, I get the following error in the splunkd log:
ERROR WinEventLogChannel - saveCheckpointStr: Failed to rename checkpoint file 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\persistentstorage\WinEventLog\C__Users_BLA_Desktop_Reimport_Archive-Security-2018-03-06-14-19-40-210_evtx_checkpoint.tmp' -> 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\persistentstorage\WinEventLog\C__Users_BLA_Desktop_Reimport_Archive-Security-2018-03-06-14-19-40-210_evtx_checkpoint': Access is denied.
The forwarder is running as a service, and SYSTEM has full rights to everything involved. The files are not in use by Windows (i.e., not open anywhere else), since I copied them over from the other machine manually and dropped them on the desktop. The inputs.conf stanza looks like this:
[monitor://C:\Users\BLA\Desktop\Reimport]
index = MyIndex
disabled = 0
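Since this is a one-time import of archived files rather than a live log that keeps growing, a [batch] input may fit better than [monitor]. A sketch, reusing the path and index from the question; note that Splunk's documented behavior for batch with move_policy = sinkhole is to delete each file after indexing it, so work from copies:

```ini
# One-shot import: each file is indexed and then DELETED by Splunk,
# so point this at copies of the .evtx files, not the originals.
[batch://C:\Users\BLA\Desktop\Reimport]
move_policy = sinkhole
index = MyIndex
disabled = 0
```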
Any ideas what may be causing this? Or alternative methods for getting the logs into Splunk and associated with the original machine?
We had the same issue. It turned out we had two inputs.conf files, one in the forwarder app and one in a TA, both trying to monitor the same path. One instance would lock the checkpoint file from the other. We commented out one of the stanzas, the errors stopped, and the data started flowing.
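One way to check for this kind of duplicate input is btool, which prints every effective stanza along with the app-level file it came from. A sketch, assuming the default forwarder install path from the error message above:

```
cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk btool inputs list --debug
```

Any path that appears in two monitor stanzas sourced from different apps is a candidate for the conflict described here.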
Perhaps this is a bug? The error resolved itself after I changed the monitor stanza to [MonitorNoHandle] (though I switched to [batch] after I learned that MonitorNoHandle only reads new data written to a file, not the data already present) and then went back to [monitor]. After returning to monitor, I received no more errors like this (it may have been the restarts of the Splunk service between changes rather than the actual config changes). I originally thought it was still broken, but our Splunk admins later found the data in Splunk under the FQDN of the host computer rather than the host name defined in inputs.conf (i.e., the data was under computername.domain instead of just computername, as defined in the host setting and where the rest of the data lives). I'm just going to chalk this up to a bug for my own sanity 🙂
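For anyone hitting the same host-attribution issue, a per-stanza host override is one way to pin the imported events to the original machine rather than the importing one. A sketch, where ORIGINAL-PC is a placeholder for the original Windows 7 machine's short hostname (not anything from the thread above):

```ini
[monitor://C:\Users\BLA\Desktop\Reimport]
index = MyIndex
disabled = 0
# Placeholder: replace with the short hostname of the original machine
# so the events are searchable under that host, not the FQDN of this one.
host = ORIGINAL-PC
```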