Hi @isoutamo,

You may be aware that Splunk has its panel that records license warnings and breaches, but once the number of warnings/breaches (I believe the limit is 5) is exceeded within the 30-day window, Splunk cuts the data intake and the panels become unusable. To make sure data isn't completely cut off, we built an app at our company that keeps track of whenever we hit 3 breaches in a 30-day rolling period.

Upon hitting that mark, the port flip comes into action: it flips the default receiving port from 9997 to XXXX. (XXXX is just a placeholder, because indexer discovery will determine the new port anyway once the indexers are restarted.) This strategy was initially implemented as a plain port switch from 9997 to 9998, with outputs.conf configured in the usual static way, listing targets in <server>:<port> format, but it was later reworked to use the indexer discovery technique. Rough sketches of the relevant search and configs are included below.

What was strange about this is that we never had network issues on the search head with the classic forwarding technique, but we do see them with indexer discovery. To confirm the problem exists only with indexer discovery, I simulated the same setup in a test environment and saw the same worse network usage when the indexers are not reachable, although the search head there remained usable. The only differences between the two environments are that production has a lot of incoming data to the indexers, and its SH also acts as the license master for a lot of other sources, whereas the test environment doesn't. The data flow begins again once we switch the ports back to 9997 after midnight, when the new license day starts and the SH is back to its normal state.
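For illustration, a minimal sketch of the kind of scheduled search that could drive the 3-breaches-in-30-days check. The b and poolsz fields come from the daily RolloverSummary events in license_usage.log; the exact logic in our app differs, so treat this as a sketch rather than the real thing:

index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d@d
| eval over_quota=if(b > poolsz, 1, 0)
| stats sum(over_quota) AS breaches_30d
| where breaches_30d >= 3

If the search returns a result, the alert action kicks off the port flip.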
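The flip itself boils down to changing the splunktcp stanza in inputs.conf on each indexer. In this sketch 9998 stands in for the real replacement port, and an indexer restart is needed before indexer discovery advertises the new port to the forwarders:

# inputs.conf on each indexer, before the flip
[splunktcp://9997]
disabled = 0

# inputs.conf after the flip (9998 is a stand-in for the replacement port)
[splunktcp://9998]
disabled = 0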
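For anyone comparing the two setups, this is roughly what changed on the forwarder side. Host names, group names, and the key below are placeholders, not our real values:

# outputs.conf, classic static forwarding
[tcpout:static_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# outputs.conf, after moving to indexer discovery
[indexer_discovery:cm]
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <key>

[tcpout:discovered_indexers]
indexerDiscovery = cm

[tcpout]
defaultGroup = discovered_indexers

With discovery, the forwarders poll the cluster manager for the current peer list and ports, which is why the flipped port is picked up automatically once the indexers restart.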
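The midnight switch-back can be as simple as a scheduled job on each indexer. This is an illustrative cron sketch, not our exact implementation; the install path and credentials are placeholders, and splunk enable/disable listen are the standard CLI commands for managing receiving ports:

# crontab on each indexer: revert to 9997 once the new license day starts
5 0 * * * /opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme
10 0 * * * /opt/splunk/bin/splunk disable listen 9998 -auth admin:changeme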