Deployment Architecture

How to reset the error count after multiple network issues (multisite indexer cluster)?

matthieu_araman
Communicator

Hello,

I've got clustered indexers (2 sites) running 6.3.

Since today, the following kind of message has been appearing in the console:

Search peer <search peer> has the following message: Too many streaming errors to target=<target peer>. Not rolling hot buckets on further errors to this target. (This condition might exist with other targets too. Please check the logs.)

I've followed this:
http://docs.splunk.com/Documentation/Splunk/latest/Indexer/Bucketreplicationissues
and searched for the CMStreaming errors in the _internal logs.
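
In case it helps anyone else, the search I ran was something along these lines (the CMStreamingError component name is my guess from the message text; adjust it to whatever actually appears in your splunkd.log):

    splunk search 'index=_internal source=*splunkd.log* (CMStreamingError OR "Too many streaming errors") earliest=-24h'

The same SPL can be run from the search UI instead of the CLI.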

It looks like there was some network issue during the night (and maybe at two other times, so the maximum number of errors was reached).
Right now it seems fixed (I can connect to the Splunk replication port from one indexer to another), and the cluster says my replication factor is met.
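
For completeness, this is roughly how I checked connectivity (the replication port is whatever [replication_port://<port>] is set to in server.conf on each peer; 9887 and the hostname below are just examples):

    # from one indexer, test the TCP connection to another peer's replication port
    nc -zv indexer2.example.com 9887

    # from the cluster master, confirm the overall cluster view
    splunk show cluster-status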

So I want to reset the counter to zero (not change the max value).

I've tried putting the cluster in maintenance mode and doing a rolling restart.

That doesn't seem to be effective.
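
For reference, what I tried was something like this, run from the cluster master (syntax as I understand it for 6.3):

    # pause bucket fixup while peers restart
    splunk enable maintenance-mode

    # restart all peers in a rolling fashion
    splunk rolling-restart cluster-peers

    # leave maintenance mode once the peers are back up
    splunk disable maintenance-mode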

Any idea how I can reset this?

thanks


matthieu_araman
Communicator

Answering my own question:

I've manually restarted the Splunk service on each indexer.
The error messages have disappeared now, but it took a while even after the restart.
It looks like restarting the service is what makes the counter revert to zero, but I'm not completely sure how this works...
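
In case it helps, the sequence was roughly the following (maintenance mode is optional here, just my way of avoiding unnecessary bucket fixup while peers are down; behaviour may differ on your version):

    # on the cluster master
    splunk enable maintenance-mode

    # on each indexer, one at a time
    splunk restart

    # back on the cluster master, once all peers have rejoined
    splunk disable maintenance-mode
    splunk show cluster-status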
