I am working on a proof of concept, but I am failing to see where security comes in regarding forwarders and receivers. I installed and configured a universal forwarder on a Windows host, and I configured a receiver on an on-premises Splunk Enterprise server.
I can't see any security on this at all.
If a receiver is open, can any host that can route to the Splunk Enterprise server just stream junk at it? How is this traffic filtered or authenticated? Control of which index the data lands in seems to live in the forwarder configuration, so the server appears to have no control over how that data gets routed.
Does this mean any user who can reach my Splunk Enterprise server can spam any index they want without any form of authentication?
How is this protected under best practices?
I am confused.
There are two ways to do that (possibly more that I can't think of ATM).
The first is to enable SSL. This way, only forwarders with the right certificate can send data. Do it using a `[splunktcp-ssl:9997]` stanza in inputs.conf on the receiving indexer.
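A minimal inputs.conf sketch on the receiving indexer (the certificate path and password placeholder are assumptions; `requireClientCert` is what actually turns away forwarders without a valid certificate):

```
# inputs.conf on the receiving indexer -- TLS-only receiving port
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
sslPassword = <certificate_password>
# reject any forwarder that does not present a valid client certificate
requireClientCert = true
```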
The second is to specify IP addresses from which to accept data. That's done using the acceptFrom setting in inputs.conf.
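For example (the subnet is an assumption; rules are evaluated in order, so the trailing `!*` rejects anything that didn't match an earlier rule):

```
# inputs.conf on the receiving indexer -- accept data only from one subnet
[splunktcp://9997]
acceptFrom = 10.1.2.0/24, !*
```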
See inputs.conf.spec for more information on these settings.
Depends on your design pattern.
1. From my experience, we started with wildcards in the serverclass configuration for inputs, but this proved to be a pain in the long run because "people" just install UFs on test machines, insecure systems, etc. across a large estate. The easiest way to prevent this is to ensure your serverclass lists hosts explicitly (see the serverclass.conf sketch after this list). I know it is hard, but you can automate this from your CMDB or your "systems scope", thus ensuring ONLY relevant servers are collected and the rest are discarded (let me know if you need the automation script to create a serverclass from a given list of hosts/CMDB).
2. The second port of call is to ensure TLS certificates are verified. Remember, you will get tons of error messages in your _internal index for the non-verified forwarders, so always make sure the serverclass fix is done before making TLS strict. After that, you may need to work with the SME to reinstall the agent.
3. Package apps into modular apps for the universal forwarder. As the Splunk administrator, you should provide an automation package that installs the Splunk UF with one base app (a deployment_server app; see the deploymentclient.conf sketch after this list). This ensures that when the UF is installed, it calls the deployment server and downloads all the "relevant apps" meant for that client. Among those relevant apps, include a certs app (e.g. `my_cert_app`) so the relevant certs are pushed to the "in-scope" clients. This way, the servers running the UF are strictly controlled and always receive the latest certs app.
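As a sketch of point 1 (plus the certs app from point 3), an explicit serverclass.conf on the deployment server could look like this; the class, host, and app names are assumptions:

```
# serverclass.conf on the deployment server -- no wildcards, hosts listed explicitly
[serverClass:prod_windows]
whitelist.0 = web01.example.com
whitelist.1 = web02.example.com

# push the certificates app only to hosts in this class
[serverClass:prod_windows:app:my_cert_app]
restartSplunkd = true
```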
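And for point 3, the single base app baked into the UF install package only needs to point the client at the deployment server; the hostname and management port are assumptions:

```
# deploymentclient.conf inside the base app shipped with the UF installer
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy.example.com:8089
```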
Hi
As @richgalloway said, there are a couple of ways to make a distributed Splunk environment more secure. You should read https://docs.splunk.com/Documentation/Splunk/8.2.0/Security/WhatyoucansecurewithSplunk
It has dedicated chapters on SSL certificates and on how to secure and authenticate the connections between forwarders and indexers.
Also, you can force indexes based on tokens when using HEC, and you can do the same with props.conf and transforms.conf for normal traffic from forwarders, if you really think it's needed. Sketches of both follow.
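For HEC, the token definition itself can pin and restrict the target index. A minimal inputs.conf sketch (the token name and index names are assumptions):

```
# inputs.conf on the HEC-receiving instance -- token locked to specific indexes
[http://my_app_token]
token = <generated-token-value>
index = main
# this token may only write to these indexes; requests for others are rejected
indexes = main, security
```

For normal forwarder traffic, the indexer can override whatever index the forwarder asked for at parse time. A minimal sketch (the host pattern and the quarantine index are assumptions, and that index must already exist):

```
# props.conf on the indexer
[host::untrusted-*]
TRANSFORMS-force_index = force_to_quarantine

# transforms.conf on the indexer
[force_to_quarantine]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = quarantine
```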
r. Ismo