We had the same issue. We searched _internal and found that an oversized lookup was the cause:
index=_internal sourcetype=splunkd "is having problems pushing configurations to the search head cluster captain"
ERROR ConfReplicationThread [5001 ConfReplicationThread] - Error pushing configurations to captain=https://xxx.xxx.xx.xx:8089, consecutiveErrors=1678 msg="Error in acceptPush, uploading lookup_table_file="/apps/splunk/etc/apps/search/lookups/xxx.csv": Non-200 status_code=413: Content-Length of 5452600466 too large (maximum is 5000000000)": Search head cluster member (https://xxx.xxx.xx.xx:8089) is having problems pushing configurations to the search head cluster captain (https://xxx.xxx.xx.xx:8089). Changes on this member are not replicating to other members.
The lookup was about 5.4 GB, above the 5,000,000,000-byte limit shown in the error. Once we reduced it below that limit, the error no longer appeared in the monitoring console or in _internal.
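If you want to check for other lookups close to the cap before they start failing, a small script like the one below can scan an install for oversized lookup CSVs. This is just a sketch: the 5,000,000,000-byte threshold comes from the error message above, and the install path (e.g. /opt/splunk) is an assumption you should adjust for your environment.

```python
import os

# Threshold taken from the splunkd error: the captain rejects bundle
# uploads whose Content-Length exceeds 5,000,000,000 bytes (HTTP 413).
MAX_REPLICATED_FILE = 5_000_000_000

def oversized_lookups(splunk_home, limit=MAX_REPLICATED_FILE):
    """Yield (path, size) for lookup CSVs at or above the given limit.

    splunk_home is an assumed install path such as /opt/splunk.
    """
    apps = os.path.join(splunk_home, "etc", "apps")
    for root, _dirs, files in os.walk(apps):
        # Only look inside each app's lookups directory.
        if os.path.basename(root) != "lookups":
            continue
        for name in files:
            if not name.endswith(".csv"):
                continue
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            if size >= limit:
                yield path, size

if __name__ == "__main__":
    for path, size in oversized_lookups("/opt/splunk"):
        print(f"{size:>14,}  {path}")
```

You could also run this periodically and alert before a lookup grows past the limit, rather than waiting for replication to break.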
That could also happen because the member trying to notify the captain of its changes is out of sync with the other members.
Try a rolling restart of the search head cluster (run "splunk rolling-restart shc-members" from the captain).