I have an infrastructure where everyone is an administrator and can therefore change the universal forwarder configuration. We are not using a deployment server. Is there a way to prevent ingestion into the wrong index at the forwarder level?
Schema:
UF (untrusted conf here) -> Forwarder -> load balancing across indexers
Has anyone already implemented such a control?
Unless there is some way to do it with Edge Processor (whose full capabilities I'm not yet familiar with), there's no reliable way to do so.
1. Splunk on its own doesn't store metadata about the connection, so you can't reliably tell which forwarder the data came from.
2. Splunk doesn't differentiate between data ingested locally and data received on network inputs.
So you can't do something like "these UFs can only send to a specific set of indexes".
You can set up a Heavy Forwarder receiving data from those untrusted UFs and create rulesets (plain transforms won't catch indexed extractions) that globally limit processing to a given set of indexes, sending everything else to nullQueue. I think that's the only limit you can impose with such a setup.
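As a rough sketch of what such a rule could look like in hand-written form (the stanza names and index list here are made up, so adapt them to your environment), on the heavy forwarder:

# props.conf -- catch-all host stanza so the rule applies to all traffic passing through
[host::*]
TRANSFORMS-enforce_index_allowlist = enforce_index_allowlist

# transforms.conf -- anything not bound for an approved index is discarded
[enforce_index_allowlist]
INGEST_EVAL = queue=if(match(index, "^(security|netfw|oslogs)$"), "indexQueue", "nullQueue")

Anything destined for an index outside the allowlist is routed to nullQueue and dropped before indexing.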
There's not much you can do to prevent users from putting arbitrary index=foo lines into inputs.conf files without locking down the files or overwriting them from the DS. There are, however, some things you can do that might help.
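To illustrate the problem: any local admin can drop a stanza like this (the path and index name are hypothetical) into $SPLUNK_HOME/etc/system/local/inputs.conf on the UF, and nothing on the UF side validates it:

[monitor:///var/log/whatever.log]
# the UF will happily send this to whatever index the user types here
index = foo
sourcetype = anything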
Thank you for your reply. Ingest Actions seems to be the only solution for us, especially because we use the oneshot command, so the sourcetype is not always the same, and neither are the host and the source. That makes it almost impossible to create a rule that cannot be bypassed. I am now considering creating a new forwarder that would verify that the destination index is indeed audit-XXXX (the format we expect from our UFs).
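For context, the ingestion pattern in question is the CLI oneshot upload, where the operator chooses the index and sourcetype per invocation (the values below are just examples):

splunk add oneshot /path/to/audit.log -index audit-1234 -sourcetype custom_audit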
@richgalloway is spot on.
I have seen that with Splunk 10, if you are using standalone indexers, you can do post-hoc triage: you can move logs from one index to another. This is not a solution, just triage. I am sure we don't need to tell you that in an ideal world you would have more control over the process, but since you don't, just be aware that if data was sent to the wrong index, you can move it to another index with this new feature in Splunk 10. At this time it does not work on indexer clusters, but a standalone instance will allow you to do this. I am attaching a video of the process; maybe it will save a little bit of the headache after logs have been sent to the wrong place. The YouTube video shows how to move logs from one index to another in Splunk 10.
https://youtu.be/QXhStVC-nlc
Hi @Sagittis,
you could configure index overriding on the indexers or (if present) on the heavy forwarders to set the correct index value.
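A minimal sketch of that kind of index override, assuming the untrusted data can be identified by something like its source path (all names here are illustrative):

# props.conf on the indexer or heavy forwarder
[source::/opt/audit/...]
TRANSFORMS-force_audit_index = force_audit_index

# transforms.conf -- rewrite the index metadata on every matching event
[force_audit_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = audit-default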
But if everyone can modify the UF conf files, you have a big problem!
I suggest using a Deployment Server (or another deployment system) to make sure that the correct configuration is deployed to the UFs.
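If you go the Deployment Server route, a serverclass.conf along these lines (the class and app names are placeholders) pushes a controlled inputs app to the UFs and re-deploys it whenever the app changes:

# serverclass.conf on the deployment server
[serverClass:audit_ufs]
whitelist.0 = uf-*.example.com

[serverClass:audit_ufs:app:audit_inputs]
restartSplunkd = true
stateOnClient = enabled

Keep in mind that local admins can still override the pushed settings in system/local, which is the root problem here.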
Ciao.
Giuseppe
Hello,
We want to allow people to use the oneshot command to ingest logs. Sadly, I have already considered overwriting the index, but it doesn't work for us since we have one index per "audit". However, the destination index always follows the same pattern (for example, audit-XXXX). I haven't found a way to restrict everything coming in on port 9997 to the audit-XXXX indexes. To be honest, from a security point of view it seems crazy that it's impossible to set such restrictions. The only solution I can see now is to deploy a new forwarder on another host that only allows audit indexes as destinations.
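For what it's worth, a pattern-based variant of the nullQueue approach sketched earlier could enforce exactly that audit-XXXX convention on the new intermediate forwarder (the stanza names are made up):

# transforms.conf -- drop any event whose index does not start with audit-
[drop_non_audit]
SOURCE_KEY = _MetaData:Index
REGEX = ^(?!audit-)
DEST_KEY = queue
FORMAT = nullQueue

# props.conf -- apply to everything passing through the forwarder
[host::*]
TRANSFORMS-drop_non_audit = drop_non_audit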
Regards,
Sagittis