Deployment Architecture

How to reset error count after multiple network issues (multisite indexer cluster)?

matthieu_araman
Communicator

Hello,

I've got clustered indexers (two sites) running Splunk 6.3.

Since today, the following kind of message has been appearing in the console:
    Search peer <search peer> has the following message: Too many streaming errors to target=<target peer>. Not rolling hot buckets on further errors to this target. (This condition might exist with other targets too. Please check the logs.)

I've followed this:
http://docs.splunk.com/Documentation/Splunk/latest/Indexer/Bucketreplicationissues
by searching for the CMStreaming errors in the _internal logs.
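
For reference, the search was roughly like the following (the exact string to match may differ; check the docs page above):

    index=_internal source=*splunkd.log* CMStreamingError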

It looks like there was a network issue during the night (and maybe at two other times, so the maximum number of errors was reached).
Right now it seems fixed: I can connect to the Splunk replication port from one indexer to another, and the cluster says my replication factor is met.
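
The checks were nothing fancy, roughly like this (9887 is just the replication port commonly used in examples; yours is whatever server.conf sets):

    # from one indexer, test the replication port on a peer
    nc -vz <target peer> 9887

    # on the cluster master, confirm the replication factor is met
    splunk show cluster-status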

So I want to reset the counter to zero (not change the max value).

I've tried putting the cluster in maintenance mode and doing a rolling restart.

That doesn't seem effective.
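
What I ran on the cluster master was roughly:

    splunk enable maintenance-mode
    splunk rolling-restart cluster-peers
    splunk disable maintenance-mode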

Any idea how I can reset this?

thanks


matthieu_araman
Communicator

Answering my own question:

I've manually restarted the Splunk service on each indexer.
The error messages have now disappeared, but it took a while even after the restart.
It looks like restarting the service is what makes the counter revert to zero, but I'm not completely sure how this works...
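
In case it helps someone, this was just the standard restart on each peer, one at a time:

    $SPLUNK_HOME/bin/splunk restart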
