I have an indexer getting data from 24 hosts. We were well within our quota
until two hosts were added that, for whatever reason (misconfiguration, an
unusually busy host, etc.), are sending many gigabytes. I have no control over these
forwarders; I have to wait for their admins to fix or reconfigure them. To keep from
going over quota, I've disabled port 9997 on the indexer until I can touch base with
those admins. But is there a way to stop accepting data from just those two offenders
without shutting off the other 24 forwarders? I'm on version 4.3, if that matters.
Splunk should have a configurable way to do this at the indexer level. A rogue forwarder can easily take out an indexer by sending crap data or large volumes of data.
Need a bit more information: are these hosts writing to a specific index? Is there a specific file source that's causing the issues?
If you know the host name, you can do something like this on the indexer side:
transforms.conf:

[block_transform]
REGEX = DEBUG\s\[
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[host::yourservername]
TRANSFORMS-bad_log = block_transform
This was taken from http://splunk-base.splunk.com/answers/11617/route-unwanted-logs-to-a-null-queue
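If the goal is to drop everything those two hosts send (not just DEBUG lines), the same nullQueue routing can match every event. A sketch, placed on the indexer; the host names badhost1 and badhost2 are placeholders for your two offenders:

```
# props.conf -- one stanza per offending host
[host::badhost1]
TRANSFORMS-null = setnull

[host::badhost2]
TRANSFORMS-null = setnull

# transforms.conf -- route every matching event to the nullQueue
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

A restart of the indexer is needed for props.conf/transforms.conf changes to take effect. Events routed to the nullQueue are discarded before indexing, so they shouldn't count against your license quota, and the other forwarders keep flowing normally.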