Lengthy, but I like to give context/background when asking a question. 🙂
Okay... first... I have Splunk running in LightForwarder mode, with the web server disabled, on my Linux box. I forward to a Mac OS X install that I actually use for my indexing and searching. The Linux server is under a reasonable amount of load already, and it wouldn't be good to add much more... thus lightweight mode for Splunk.
So, I have it monitoring various logs that actually have significant volume... no problem. Very low impact on the system.
I add the Unix app, and suddenly my server is hurting badly. My load average goes up to 10 or 20, and the system just ISN'T happy. It's primarily a web server, and I have it tuned not to allocate too many processes (i.e., to avoid filling up memory), and that seemed to be exactly what was happening when I added the Unix app to Splunk.

Looking at top... splunkd goes from 25 MB or so to about 95-100 MB. That pushes it outside my threshold. This isn't that beefy a server (only 1 GB of RAM), but it would have no problem if that memory weren't just being taken up.
Now, mind you... NOTHING is enabled for the Unix app except the app itself and the index (which doesn't even get used, because it's forwarding). When I enable some of the scripts, their data does get forwarded. But even with all scripts disabled, that thing is sucking up 10% of my system memory!!!
Is there something I can do to keep the *nix app running in a bit more "LIGHTWEIGHT" mode?? I'm not sure why so much memory is needed when none of the scripts are even running... nor why it would be, even if scripts were running. Is it loading all the search stuff into memory?? Should I disable those (they're enabled by default when installed)?
I'd like to have the *nix app running in as lightweight a mode as possible on the Linux system. I have it running in full mode on the Mac OS X system (where I use more of the "app" portion, not just the scripts for input gathering).
To reduce the load, you can also increase the interval of the stats polling.
Run `cp $SPLUNK_HOME/etc/apps/unix/default/inputs.conf $SPLUNK_HOME/etc/apps/unix/local/inputs.conf` to get a local copy of the list of default inputs.

Then edit the intervals in `$SPLUNK_HOME/etc/apps/unix/local/inputs.conf`. For example, under `[script://./bin/top.sh]`, set `interval = 240` instead of the default 60 seconds.
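As a sketch, the local copy might end up looking something like this after editing (the stanza names come from the stock Unix app; the interval values here are just examples, tune them to your box):

```
# $SPLUNK_HOME/etc/apps/unix/local/inputs.conf
# Raise the polling intervals to reduce load on the forwarder.
[script://./bin/top.sh]
interval = 240

[script://./bin/ps.sh]
interval = 240
```

Putting the overrides in `local/` rather than editing `default/` directly means they survive app upgrades.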
It isn't the intervals that are the issue; it's the memory consumption of splunkd when the Unix app is loaded (whether in LightForwarder mode or not). The frequency of script execution isn't the problem. As I've noted in my own semi-answer, I can run the scripts separately by loading them up myself, and just not load the Unix app on the forwarder.
Okay. I'm not sure if this is the cleanest way to do it, but I came up with the following solution to my problem, and so far it seems to be quite acceptable.
I left the Unix app installed, but DISABLED.
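(For anyone following along: I believe you can do the disabling from the CLI as well, assuming the app's directory is named `unix` — check yours under `etc/apps/`:

```
$SPLUNK_HOME/bin/splunk disable app unix
$SPLUNK_HOME/bin/splunk restart
```

Disabling it through Manager in the web UI on a full instance accomplishes the same thing.)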
I copied the contents of "etc/apps/unix/default/inputs.conf" into my "etc/system/local/inputs.conf", removed the stanzas not defining scripts (I am only interested in the scripts for now), and edited the paths of the *.sh scripts to point to the Unix app's "bin" directory. I could have just copied the scripts somewhere else, but it made sense to me to use them where they were rather than maintain copies.
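To sketch what I mean (the path and attribute values here are illustrative, based on the stock Unix app layout — adjust to taste), a stanza in "etc/system/local/inputs.conf" ends up looking like:

```
# $SPLUNK_HOME/etc/system/local/inputs.conf
# Script path points at the (disabled) Unix app's bin directory.
[script://$SPLUNK_HOME/etc/apps/unix/bin/top.sh]
interval = 60
sourcetype = top
source = top
index = os
disabled = false
```

The key change from the app's own inputs.conf is the script path: `./bin/top.sh` is relative to the app, so once the stanza lives in etc/system/local it needs the full `$SPLUNK_HOME/etc/apps/unix/bin/...` path.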
I then created an "os" index (while not actually used on the forwarder, Splunk still wants it to be defined).
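For the record, defining the index can be done with a minimal indexes.conf stanza like the following (paths shown are the conventional defaults; `splunk add index os` from the CLI should work too):

```
# $SPLUNK_HOME/etc/system/local/indexes.conf
# Minimal "os" index definition so the script inputs validate;
# on a forwarder the data never actually lands here.
[os]
homePath   = $SPLUNK_DB/os/db
coldPath   = $SPLUNK_DB/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb
```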
I then enabled just the scripts I wanted (which were now assigned to "system").
Voila!!! When running this way (no web server and no Unix app enabled on the forwarder), splunkd is running at a mere 26 MB. Interesting that adding the 18 script definitions boosted the footprint by 2 MB... but much better than the 65-70 MB the whole Unix app added.
There really ought to be a way for apps to run in a "lightweight" mode, where, when running on a lightweight forwarder, an app stays as lean as absolutely possible.
My server is now capturing the additional data, and not incurring much additional load due to it.
I hope this helps someone with the same issue, or anyone who wants to capture this data in a lightweight process when running as a SplunkLightForwarder.
If anyone has a cleaner way for this, I would love to hear it. 🙂