
How to reset the error count after multiple network issues (multisite indexer cluster)?

matthieu_araman
Communicator

Hello,

I've got clustered indexers (2 sites) running Splunk 6.3.

Since today, the following kind of message has been appearing in the console:
Search peer <search peer> has the following message: Too many streaming errors to target=<target
peer>. Not rolling hot buckets on further errors to this target. (This condition might exist with
other targets too. Please check the logs.)

I've followed this:
http://docs.splunk.com/Documentation/Splunk/latest/Indexer/Bucketreplicationissues
and searched for the CMStreaming errors in the _internal logs.
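
For reference, a search along these lines should surface those events (a sketch only; the CMStreaming* wildcard is an assumption based on the message text, and the exact component name may differ by version):

    index=_internal source=*splunkd.log* CMStreaming*
    | stats count by host, component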

It looks like there was a network issue during the night (and maybe at two other times, so the maximum number of errors was reached).
Right now it seems fixed (I can connect to the Splunk replication port from one indexer to another) and the cluster says my replication factor is met.
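
For completeness, this is roughly how such a check can be done (a sketch; 9887 is just an example replication port, and /opt/splunk is an assumed install path):

    # from one indexer, test the other peer's replication port
    nc -vz <target-peer> 9887

    # on the cluster master, check that replication and search factors are met
    /opt/splunk/bin/splunk show cluster-status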

So I want to reset the counter to zero (not change the max value).

I've tried putting the cluster in maintenance mode and doing a rolling restart, but that doesn't seem to be effective.
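
For reference, those two steps correspond roughly to these cluster master CLI commands (a sketch, assuming a default /opt/splunk install):

    /opt/splunk/bin/splunk enable maintenance-mode
    /opt/splunk/bin/splunk rolling-restart cluster-peers
    /opt/splunk/bin/splunk disable maintenance-mode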

Any idea how I can reset this?

thanks


matthieu_araman
Communicator

Answering my own question:

I've manually restarted the Splunk service on each indexer.
The error messages have now disappeared, but it took a while even after the restart.
It looks like restarting the service is what makes the counter revert to zero, but I'm not completely sure how this works...
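
For anyone repeating this, a per-peer restart can be done along these lines (a sketch; /opt/splunk is an assumed install path, and splunk offline is the gentler way to take a clustered peer down before starting it again):

    # on each indexer, one at a time
    /opt/splunk/bin/splunk offline
    /opt/splunk/bin/splunk start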
