To stop indexing, you could change minFreeSpace in server.conf, but it may also stop searches.
* Specified in megabytes.
* The default setting is 2000 (approx 2GB)
* Specifies a safe amount of space that must exist for splunkd to continue operating.
* Note that this affects search and indexing.
* For search:
* Before attempting to launch a search, splunk will require this
amount of free space on the filesystem where the dispatch
directory is stored, $SPLUNK_HOME/var/run/splunk/dispatch
* Applied similarly to the search quota values in
authorize.conf and limits.conf.
* For indexing:
* Periodically, the indexer will check space on all partitions
that contain splunk indexes as specified by indexes.conf. Indexing
will be paused and a ui banner + splunkd warning posted to indicate
need to clear more disk space.
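Per the spec quoted above, this would go in server.conf on the indexer, under the [diskUsage] stanza (the value here is just the documented default, for illustration):

```ini
# server.conf on the indexer
[diskUsage]
# Raise this to make splunkd pause indexing (and refuse searches)
# sooner, while the partition still has headroom. Value in MB.
minFreeSpace = 2000
```

Note that, as the spec says, this gates searches too, so it is a blunt instrument if the goal is only to stop indexing.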
What about having each forwarder deployment send to a different splunktcp port? Then you can disable one specific port at a time to block just those forwarders.
They will queue, then pause, and resume once the port reopens.
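The per-deployment-port idea might look like this in inputs.conf on the indexer (the port numbers are made up for illustration; the thread uses SSL on 9998, so splunktcp-ssl stanzas are assumed here):

```ini
# inputs.conf on the indexer: one SSL listener per forwarder group
[splunktcp-ssl:9998]
disabled = 0

[splunktcp-ssl:9999]
disabled = 0
# Flip a stanza to disabled = 1 (and restart/reload splunkd)
# to block only that group's forwarders; the rest keep flowing.
```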
Yes, I do understand the hypocritical problems with my customers... 😞
I can get away with an indexer reboot (sometimes). The search heads are different boxes.
Lose data?! HAHAHAHAHAHA. We have forwarders on all systems and never lose any data. Splunk is really good about catching up the next day when we turn the indexer back on.
Ok so you have a tricky problem. You have some customers that need access to their data - and they cannot accept downtime (for searches?). But they have no problems with you throwing away their logs once the license is 95% full!?!
Sounds like you need a bigger license. The customers should be happy to pay for it, given the circumstances...or am I missing something.
Having iptables drop anything on 9998 does not affect established sessions.
I wound up editing inputs.conf and restarting the indexer. Only a few customers noticed.
There has got to be a clean way of doing this, without using iptables or restarting Splunk.
sorry, are you saying that the forwarder->indexer traffic (i.e. logs) changes ports from what you've defined in inputs.conf (on the indexer) and outputs.conf (on the forwarder)??? I didn't think that was possible.
I still don't.
You should be able to either block it in the local firewall on the indexer, or edit inputs.conf on the indexer and restart it (as yannk says).
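On the firewall side, if the concern is that a plain port block leaves already-established sessions alive, one sketch (rule position and tooling depend on your setup; this assumes the conntrack CLI is installed) is to drop inbound traffic to the splunktcp port regardless of state, then flush the existing tracking entries:

```shell
# Drop ALL inbound TCP to 9998 on the indexer, new and established alike
iptables -I INPUT 1 -p tcp --dport 9998 -j DROP

# Flush existing conntrack entries for that port so tracked
# sessions stop matching any earlier ESTABLISHED accept rule
conntrack -D -p tcp --dport 9998
```

Remove the rule with `iptables -D INPUT -p tcp --dport 9998 -j DROP` when you want the forwarders to resume.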
Also, as for restarting a production environment: you lose 2 minutes of searchability for the restart, as opposed to losing a day waiting for a license reset key....
Can't block the port with iptables.
"All traffic is sent from forwarders on port 9998 (SSL). Once a connection is established, it moves to some random high level port. If it just stayed put, I would kill it with IPTables."
I am trying to build the automated approach (not the manager). But for now, that will have to do.
[root@splk01 bin]# ./splunk set minfreemb 200000
You need to restart the Splunk Server (splunkd) for your changes to take effect.
Restarting the production environment is still not an option 😞
Disable the inputs or shutdown splunk.
An alternative is to setup a nullQueue filtering rule and turn it on to trash all your events.
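A nullQueue filtering rule like that might look like the following (a sketch; the stanza names are illustrative, and scoping it under [default] trashes events from every sourcetype, which is the point here but worth flagging):

```ini
# props.conf on the indexer
[default]
TRANSFORMS-trash_all = drop_everything
```

```ini
# transforms.conf on the indexer
[drop_everything]
# Match every event and route it to nullQueue (i.e., discard it)
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

To turn indexing back on, remove (or comment out) the TRANSFORMS line and restart.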
Splunk does not throttle indexing when the license is exceeded; it keeps indexing but disables search.
If you are an enterprise customer, you can go over, and once you have 5 days of warnings, ask for a reset key.
To stop a listening port, just edit inputs.conf, disable the input, and restart to reload.
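That is, something like this in inputs.conf on the indexer (assuming the listener is the SSL splunktcp input on 9998 described elsewhere in the thread):

```ini
[splunktcp-ssl:9998]
disabled = 1
```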
I have filters in place sending garbage data to nullQueue. Can't shut down Splunk, customers will get upset.
Can't seem to disable the input from the command line:
[root@splk01 bin]# ./splunk disable listen -port 9998
In handler 'cooked': Could not find config id for port 9998
All traffic is sent from forwarders on port 9998 (SSL). Once a connection is established, it moves to some random high level port. If it just stayed put, I would kill it with IPTables.
I know there is a clean way to do this. Splunk does it when it runs low on disk space.