Getting Data In

UF: Prevent ingestion into the wrong index

Sagittis
Explorer

I have an infrastructure where everyone is an administrator and can therefore change the universal forwarder configuration. We are not using a deployment server. Is there a way to prevent ingestion into the wrong index at the forwarder level?

Diagram:

UF (untrusted configuration here) -> intermediate forwarder -> load balancing between indexers

Has anyone already implemented such a control?


PickleRick
SplunkTrust

Unless there is some way to do it with Edge Processor (whose full capabilities I'm not yet aware of), there's no reliable way to do so.

1. Splunk on its own doesn't store metadata about the connection, so you can't reliably tell which forwarder the data came from.

2. Splunk doesn't differentiate between data ingested locally and data received on network inputs.

So you can't do something like "those UFs can only send to a specific set of indexes".

You can set up a heavy forwarder that receives data from those untrusted UFs and create rulesets (transforms won't catch indexed extractions) that globally limit processing to a given set of indexes, sending everything else to nullQueue. I think that's the only limit you can impose with such a setup.
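A minimal sketch of that ruleset idea, assuming the heavy forwarder parses the incoming UF traffic; the stanza name and the allowed index names (main, firewall, app_logs) are placeholders, not from this thread:

# transforms.conf (on the heavy forwarder)
[drop_unapproved_index]
# The destination index of an event is stored in the _MetaData:Index key
SOURCE_KEY = _MetaData:Index
# Match any index name that is NOT in the allowlist
REGEX = ^(?!(main|firewall|app_logs)$)
# Route matching events to nullQueue, i.e. discard them
DEST_KEY = queue
FORMAT = nullQueue

# props.conf (on the heavy forwarder)
[default]
TRANSFORMS-enforce_index = drop_unapproved_index

Any event whose destination index falls outside the allowlist is dropped before it reaches the indexers.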


richgalloway
SplunkTrust

There's not much you can do to prevent users from entering random index=foo lines into inputs.conf files without locking down the files or overwriting them from the DS.  There are some things you can do that might help.

  1. Define a "last chance" index.  Inputs that specify an index that doesn't exist will send their data there.  You can set up alerts to notify you when data arrives so the input can be corrected (see the sketch after this list).
  2. Use Ingest Actions in the intermediate forwarders or indexers to redirect data to the correct index, if it can be determined (by sourcetype, perhaps).
  3. Use data from the last chance index to create a "Hall of Shame" calling out those who do not follow the proper onboarding procedures.
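For point 1, a minimal sketch of a last-chance index, assuming a Splunk version recent enough to support the lastChanceIndex setting (9.0+, if I recall correctly); the index name last_chance is a placeholder:

# indexes.conf (on the indexers)
[default]
# Events addressed to a nonexistent index land here instead of being dropped
lastChanceIndex = last_chance

[last_chance]
homePath = $SPLUNK_DB/last_chance/db
coldPath = $SPLUNK_DB/last_chance/colddb
thawedPath = $SPLUNK_DB/last_chance/thaweddb

The alert for point 1 could then be as simple as a scheduled search like:

index=last_chance | stats count by host, source, sourcetype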
---
If this reply helps you, Karma would be appreciated.

Sagittis
Explorer

Thank you for your reply. Ingest Actions seems to be the only solution for us, because since we use the oneshot command, the sourcetype is not always the same, and neither are the host and the source. So it's almost impossible to create a rule that cannot be bypassed. I am now considering creating a new forwarder that would verify that the destination index is indeed audit-XXXX (the format we expect from our UFs).


LAME-Creations
SplunkTrust

@richgalloway is spot on.  

I have seen that with Splunk 10, if you are using standalone indexers, you can do post-hoc triage. What I mean by that is you can move logs from one index to another. This is not a solution, just triage. I am sure we don't need to tell you that in an ideal world you would have more control over the process, but since you don't, just be aware that if you know data was sent to the wrong index, you can move it to another index with the new feature in Splunk 10. At this time it does not work on indexer clusters, but a standalone instance will allow you to do this. I am attaching a video of the process; maybe it will save a little bit of the headache after logs have been sent to the wrong place. The YouTube video shows how you can move logs from one index to another in Splunk 10.

https://youtu.be/QXhStVC-nlc


gcusello
SplunkTrust

Hi @Sagittis ,

you could configure index overriding on the indexers or (if present) on the heavy forwarders to set the correct index value.
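For illustration, a hedged sketch of such an index override; the stanza name, the sourcetype, and the target index are all placeholders:

# transforms.conf (on the indexer or heavy forwarder)
[override_index]
# Default SOURCE_KEY is _raw; match any event
REGEX = .
# Rewrite the destination index
DEST_KEY = _MetaData:Index
FORMAT = audit-1234

# props.conf
[my_sourcetype]
TRANSFORMS-override_index = override_index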

But if everyone can modify the UF conf files, you have a big problem!

I suggest using a Deployment Server (or another deployment system) to be sure that the correct configuration is deployed to the UFs.
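A minimal serverclass.conf sketch for that, assuming a deployment server; the server class, whitelist pattern, and app name are placeholders:

# serverclass.conf (on the deployment server)
[serverClass:audit_forwarders]
# Match the UFs that should receive the managed configuration
whitelist.0 = uf-*

[serverClass:audit_forwarders:app:audit_inputs]
# Push the vetted inputs app and restart the UF after deployment
restartSplunkd = true
stateOnClient = enabled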

Ciao.

Giuseppe

Sagittis
Explorer

Hello,

We want to allow people to use the oneshot command to ingest logs. Sadly, I have already considered overwriting the index, but it doesn't work for us since we have one index per "audit". However, the destination index always has the same pattern (for example, audit-XXXX). I haven't found a way to restrict everything coming in on port 9997 to the audit-XXXX indexes. To be honest, from a security point of view, it seems crazy that it's impossible to set restrictions. The only solution I can see now is to deploy a new forwarder on another host that only allows audit indexes as destinations; a sketch of such a ruleset follows below.

Regards,

Sagittis
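Adapting the nullQueue ruleset from earlier in the thread to this audit-XXXX pattern, a sketch of what that dedicated forwarder could enforce, assuming it is a heavy forwarder that parses the data; the stanza names are placeholders:

# transforms.conf (on the dedicated heavy forwarder)
[drop_non_audit_index]
SOURCE_KEY = _MetaData:Index
# Match any destination index that does NOT start with "audit-"
REGEX = ^(?!audit-)
DEST_KEY = queue
FORMAT = nullQueue

# props.conf (on the dedicated heavy forwarder)
[default]
TRANSFORMS-enforce_audit = drop_non_audit_index

Events bound for any index not matching the audit- prefix would be discarded at this forwarder before they reach the indexers.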
