Hi @PrewinThomas,

Thanks for your response. We used to connect the forwarders to the peers the classic way, with the same port-flip technique, and we never saw the SH slowness. This makes me wonder whether I am missing something in the indexer discovery configuration.

Also, to come back to the same issue: I always checked whether the SH responds, but on the flip side, I never checked the indexers' UI. Should it be the case that all of the UIs fail because of the cluster instability, and not just the SH's? When this happened, I tried restarting the SH, but to no avail.

On addressing license overages at the source: reducing ingestion or increasing the license is not possible at this stage, because overages are a rare, one-off scenario, but working with Splunk's license enforcement is something I can do. Is there a way to cut off incoming data when I am approaching a license breach? (I have added a rough sketch of the kind of check I mean at the end of this post.)

One more important thing to note is how Splunk license metering works. Splunk logs license usage through the _internal index, and the meter is driven by license_usage.log; but if license_usage.log is indexed late, the usage still gets attributed to the _time of the events, not their index time. (There is a second sketch below for measuring that lag.) Any thoughts on this process, @livehybrid @PrewinThomas?

Thanks,
Pravin
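P.S. To make the two questions above concrete, here are the rough sketches I mentioned. Neither is a working configuration; everything named in them (hosts, the SPLUNK_TOKEN environment variable, the 90% threshold) is my own assumption for illustration.

For the cutoff question, a minimal sketch of a watchdog that polls the license master's licenser/pools REST endpoint and flags pools nearing quota; used_bytes and effective_quota are the fields I understand that endpoint to return:

```python
# Rough sketch only: poll the license master over REST and warn when a
# pool is close to its quota. Hypothetical host and threshold; a real
# setup would then disable inputs or cut the noisiest feeds at source.
import os
import requests

LICENSE_MASTER = "https://license-master:8089"   # hypothetical host:port
THRESHOLD = 0.90                                 # assumed 90% cutoff point

resp = requests.get(
    f"{LICENSE_MASTER}/services/licenser/pools",
    params={"output_mode": "json"},
    headers={"Authorization": f"Bearer {os.environ['SPLUNK_TOKEN']}"},
    verify=False,  # lab shortcut; verify certificates in production
)
resp.raise_for_status()

for entry in resp.json()["entry"]:
    content = entry["content"]
    used = int(content.get("used_bytes", 0) or 0)
    quota = int(content.get("effective_quota", 0) or 0)
    if quota and used / quota >= THRESHOLD:
        print(f"pool {entry['name']}: {used / quota:.0%} of quota used")
        # here one could, for example, POST to an input's /disable
        # endpoint on the forwarders to stop the noisiest inputs
```

And for the metering question, a one-shot search over license_usage.log comparing _time with _indextime, to see how late the metering events actually arrive:

```python
# Rough sketch: one-shot search comparing event time with index time on
# license_usage.log events. Same hypothetical host and token as above.
import os
import requests

SEARCH_HEAD = "https://search-head:8089"  # hypothetical host:port
spl = (
    "search index=_internal source=*license_usage.log type=Usage "
    "earliest=-24h "
    "| eval lag_sec=_indextime-_time "
    "| stats avg(lag_sec) AS avg_lag, max(lag_sec) AS max_lag"
)

resp = requests.post(
    f"{SEARCH_HEAD}/services/search/jobs",
    data={"search": spl, "exec_mode": "oneshot", "output_mode": "json"},
    headers={"Authorization": f"Bearer {os.environ['SPLUNK_TOKEN']}"},
    verify=False,
)
resp.raise_for_status()
print(resp.json()["results"])
```

If the lag shows up there but the meter still charges the usage to _time, that would confirm what I am seeing.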