I am a newbie Splunk user running 4.3.2 on Windows (Azure).
My setup is an indexer/search head on a VM role (in AWS terms, a vanilla WS2008R2 AMI with the full Splunk distro installed), and a universal forwarder on each of my web/worker role instances (in AWS terms, each role instance is a VM). An Azure startup script runs the Splunk UF MSI with elevated (local admin) permissions on each web/worker role as it gets deployed, and I pass it the forwarding server information (indexer-vm-role-name:9997). This all works fine.
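For reference, the startup task runs something along these lines (a sketch, not my exact script; the MSI filename is a placeholder, while RECEIVING_INDEXER and AGREETOLICENSE are documented UF MSI properties):

    msiexec /i splunkuniversalforwarder.msi RECEIVING_INDEXER="indexer-vm-role-name:9997" AGREETOLICENSE=Yes /quiet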
When an Azure role comes up, it creates a new local resource directory (a directory on the C drive named with a newly assigned GUID, so I don't know the path ahead of time). I shell-exec (using Process.Start()) "splunk add monitor <the new directory>" so Splunk picks that directory up.
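Roughly like this (a sketch of what my startup code does; the install path, the admin credentials, and GetLocalResourceDirectory() are placeholders, not my real values):

    using System.Diagnostics;

    class MonitorSetup
    {
        static void AddMonitor()
        {
            // Default UF install path; adjust to wherever the MSI put it.
            string splunkExe = @"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe";
            string resourceDir = GetLocalResourceDirectory();

            var psi = new ProcessStartInfo
            {
                FileName = splunkExe,
                Arguments = "add monitor \"" + resourceDir + "\" -auth admin:changeme",
                UseShellExecute = false,
                CreateNoWindow = true
            };
            using (var p = Process.Start(psi))
            {
                p.WaitForExit();
            }
        }

        static string GetLocalResourceDirectory()
        {
            // Hypothetical helper: in a real role this comes from the Azure
            // runtime (e.g. the GUID-named local resource path).
            return @"C:\Resources\<guid>";
        }
    }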
My issue is that once I start creating additional files in that directory (which contain JSON-formatted trace information), they never show up on the indexer/search head.
Do I need to shell-exec a "splunk add monitor" for each new file I create in this directory? If so, what's the point of monitoring a directory at all? I worry that if I really do have to do this, the overhead of my tracing mechanism will be prohibitive at my usage density (I log EVERYTHING from my app).
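For context, the stanza that "add monitor" writes into inputs.conf under $SPLUNK_HOME/etc (the exact app location may vary) looks roughly like this; the GUID path and the whitelist are illustrative:

    [monitor://C:\Resources\<guid>]
    disabled = false
    whitelist = \.json$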
Are there any other solutions? I started reading about writing files to the spool directory / sinkhole, but the web/worker role processes that generate the JSON traces run under a restricted user account (which is also dynamically created), so they can't write into any directory that Splunk owns.
It turns out I had multiple issues, but the main one was that my process hadn't closed the JSON log files yet. Splunk did actually see them and add them to the monitored inputs, but they were write-locked, so Splunk couldn't read them. That's what a *nix guy gets for running on Windows 😉
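In case it helps anyone else: the fix on the writing side is to open the trace files with a share mode that allows concurrent readers, and to flush promptly. A minimal sketch (the path handling is simplified):

    using System.IO;

    class TraceWriter
    {
        // Append JSON trace lines without holding an exclusive lock,
        // so the forwarder can read the file while it's still open.
        static void WriteTrace(string path, string json)
        {
            using (var fs = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.Read))
            using (var writer = new StreamWriter(fs))
            {
                writer.WriteLine(json);
                writer.Flush(); // flush promptly so complete events hit disk
            }
        }
    }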