Hello everyone. I'm getting the error "Forced bundle replication failed. Reverting to old behavior - using most recent bundles on all" on a search head, and I'm not sure how to fix this. I excluded heavy files from the bundle and restarted the search head, but nothing changed. Where should I dig? I couldn't find this error message in the Splunk documentation or anywhere else on the internet. The closest topic on Splunk Answers was about search head clustering, but since I'm not setting up SH clustering, I don't think it applies.
Additional info: before the issue occurred, I noticed that disk usage on the indexers hit 100%. I resolved that by deleting data from /opt/splunk/var/run/searchpeers (keeping only the latest files).
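For anyone curious, the cleanup logic I used was "keep the newest file, delete the rest." Here's a self-contained sketch of that logic; it deliberately runs against a throwaway temp directory rather than the real /opt/splunk/var/run/searchpeers path, so adapt the path (and double-check what you're deleting) before using it for real:

```shell
#!/bin/sh
# Demo of "keep only the newest bundle, delete the rest".
# Uses a throwaway directory, NOT the real
# /opt/splunk/var/run/searchpeers path, so it is safe to run anywhere.
demo=$(mktemp -d)

# Simulate three replicated bundle files with increasing mtimes.
for name in oldest-bundle older-bundle newest-bundle; do
    touch "$demo/$name.bundle"
    sleep 1   # ensure distinct modification times
done

# List newest first, skip the first (newest) entry, remove the rest.
cd "$demo"
ls -t | tail -n +2 | xargs -r rm --

remaining=$(ls "$demo")
echo "$remaining"   # newest-bundle.bundle
```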
- 4 indexer VMs.
- 2 search head VMs (not clustered; just testing Splunk 7 and Splunk 8 in parallel). All 4 indexers are connected as distributed search peers to each of those search heads.
- No deployment server in use.
The network connection between the indexers and the search heads is sometimes unreliable, so that may be a contributing factor.
Any suggestions and ideas appreciated.
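In case it helps others diagnose: the heavy-file exclusion mentioned above is done in distsearch.conf on the search head. A sketch of what that looks like — the `[replicationBlacklist]` stanza is real Splunk configuration, but the entry name and app path below are made-up examples you would adapt:

```ini
# $SPLUNK_HOME/etc/system/local/distsearch.conf on the search head.
# Entry name ("huge_lookups") and the app name ("myapp") are examples.
[replicationBlacklist]
# Skip large lookup CSVs so they are never shipped in the knowledge bundle.
huge_lookups = apps/myapp/lookups/*.csv
```

Excluded lookups won't be available to searches on the peers, so only blacklist files that searches don't actually need remotely.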
Try restarting the cluster master, then do a rolling restart of the indexer cluster. Once the cluster is back up, re-validate the new bundle via the master.
If that doesn't work, make a small change somewhere in your bundle, e.g. add or modify a README text file.
That's enough for the master to see it as a new bundle and re-validate it.
Using the GUI on the master is actually the easiest/best way to do the restart, cycling, and bundle validation/push, in my opinion. Just FYI.
Thanks for your reply. What should I do if it isn't an indexer cluster? These indexers are standalone, so there's no replication going on and no cluster master is present.
Anyway, restarting the indexers sounds like a good idea, so I'll try that.