In the past, one of my coworkers was working on a whitelist/blacklist solution for our Windows logs (dropping certain EventCodes and keeping others, etc.). Now that task has fallen to me.
I'd like to test this on a distilled version of our log data so it's easier to verify the results, but I'm not sure how to go about that. I've got a file with copies of our Windows logs; would it be enough for me to point a Splunk instance at them for indexing? Or do I need to push them through a Windows instance?
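(For reference, the filtering I need to test is roughly the kind of thing shown below in inputs.conf on a Windows forwarder; the EventCodes here are placeholders, not our actual list.)

[WinEventLog://Security]
disabled = 0
# drop these EventCodes at collection time (placeholder values)
blacklist = 4662,5156
# a whitelist key works the same way for EventCodes we want to keep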
You can create a non-production Splunk environment on another PC using the free/demo license to test the new configurations. Once it's working, you can apply the changes to your primary or production Splunk server.
With this non-production Splunk instance, you're free to stop, start, and clean eventdata at any time, and as many times as you want, without affecting the production server.
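For example, a full reset cycle on the test instance looks like this (the -f flag skips the confirmation prompt, and you can add -index <name> to limit the clean to a single index):

splunk stop
# wipe the indexed test data so the same file can be re-indexed cleanly
splunk clean eventdata -f
splunk start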
Hope this helps.
You mentioned that you have "a file with copies of [your] Windows Logs." Can't you just import that file into the new Splunk instance to test your white/black lists? If you make a mistake, you can use the "splunk stop; splunk clean eventdata; splunk start" combination to re-index and re-test the same log file. Since this is on a separate server, you can run the stop/clean/start combination as many times as you want until you have perfected the white/black list.
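One quick way to pull that file in for each test pass is a one-shot import, for example (the path, sourcetype, and index are placeholders for whatever you use on the test instance):

splunk add oneshot /path/to/windows_log_copy.log -sourcetype my_winlogs -index main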
Any supported Windows computer can generate Windows event logs and collect them for import into Splunk. You can't, however, use a Splunk instance on any flavor of Unix for this.
That doesn't address the question of how to handle the Windows logs, though. Is it possible to generate Windows logs? Or is it possible to take the existing file of logs and push it into Splunk for the same processing that Windows logs get?