Hi guys,
Just a few quick questions about getting Splunk server data into Splunk!
Our Splunk environment collects a large amount of security data from thousands of sources, yet we don't collect any security data from the Splunk servers themselves (they run on Red Hat Linux). I was thinking of adding all of our servers (cluster master, license master, deployer, etc.) to our deployment server and creating a server class with the *nix TA to ingest the relevant host data we want. Is this the best solution, or does anyone have any better ideas on how to do it?
Also, can the deployment server be a client of itself? How do we get data from it to our indexer cluster if not?
Is the indexer cluster okay with forwarding data to itself?
Any help would be appreciated.
Cheers!
The indexers (and heavy forwarders) won't need to forward data to themselves.
Any application you install on an indexer/HF will be picked up without having to specify anything in outputs.conf (unless you want to send the data to a specific indexer, for example). Just make sure you set your indexes in inputs.conf.
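For instance, a minimal inputs.conf stanza in a local app on the indexer/HF might look like this (the path, index name, and sourcetype here are illustrative — substitute whatever your environment uses):

```
# inputs.conf — monitor a local security log and route it to a specific index
[monitor:///var/log/secure]
index = os_security
sourcetype = linux_secure
disabled = 0
```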
Deployment servers, search heads, and cluster masters can all be configured to forward events just like any other UF.
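A minimal outputs.conf sketch for forwarding from a DS/SH/CM to the indexer tier (the group name and indexer hostnames below are illustrative):

```
# outputs.conf — send all events to the indexer cluster
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```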
(But you can't use a deployment server to manage itself, so you'll have to configure that one locally 🙂.) Otherwise, yes, you can push out a configured *nix TA app to collect the data you're interested in.
(Beware of enabling ALL the scripted inputs, as they can be a bit intensive.)
Yeah that's more or less what I was thinking! Thanks for the reassurance.
A deployment server Splunk instance cannot be a deployment client of itself, but you should be able to install a UF (i.e. a separate Splunk instance) on your deployment server host to collect local log files and manage that UF with the DS.
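That UF can then be pointed back at the DS running on the same host with a deploymentclient.conf along these lines (8089 is the default management port; adjust the URI for your environment):

```
# deploymentclient.conf on the UF installed alongside the DS
[deployment-client]

[target-broker:deploymentServer]
targetUri = localhost:8089
```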
@ssievert - what is your opinion of having the needed apps deployed to the deployment server without it actually being a deployment client (rather than installing a UF in parallel)? This might be implemented with symlinks, rsync, or just a duplicate copy of the app from the deployment-apps folder to the deployment server's apps folder.
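The symlink variant could be sketched like this (a demo using a temp directory in place of /opt/splunk/etc, so the paths here are stand-ins for the real Splunk layout):

```shell
#!/bin/sh
SPLUNK_ETC=$(mktemp -d)    # stands in for /opt/splunk/etc in this demo

# Mock up the usual layout: the TA lives in deployment-apps.
mkdir -p "$SPLUNK_ETC/deployment-apps/Splunk_TA_nix" "$SPLUNK_ETC/apps"

# Link the TA out of deployment-apps into the DS's own apps directory,
# so the DS runs the same copy it pushes to its clients.
ln -s "$SPLUNK_ETC/deployment-apps/Splunk_TA_nix" "$SPLUNK_ETC/apps/Splunk_TA_nix"

ls -l "$SPLUNK_ETC/apps"
```

One caveat with the symlink approach: the DS only re-reads its local apps directory on restart or a config reload, so edits to the shared copy take effect on the DS on a different schedule than on the deployment clients.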
Thoughts? Any technical challenges? Or is it just easier to maintain by having the UF installed, hence that suggestion.
That should be fine.
I wonder how common of an approach this is?
As it happens, I came across just this very scenario today, but I can't see a significant advantage (other than the one you note above), as it adds complexity (speaking as someone picking apart an undocumented environment) and increases the number of WTFs per hour before I realised that /opt/splunkforwarder was also on the box. 🙂