@earlhelms ,
I'm one of the Support Engineers for Splunk, and this is something I just dealt with on another case a day or two ago.
If you're seeing refresh queue job errors, you can hit the endpoint below directly to see whether any jobs are actually in the queue:
https://<yourhostname>:<mgmtport>/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_refresh_queue/
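A quick way to check from the command line is an authenticated GET against that endpoint. This is just a sketch: the hostname, port (8089 is the default Splunk management port), and admin:changeme credentials are placeholders you'd replace with your own, and the `|| echo` guard is only there so the command fails gracefully when run outside an ITSI environment.

```shell
# Hypothetical values -- substitute your own host, port, and credentials.
HOST="yourhostname"
MGMT_PORT=8089   # default Splunk management port
BASE="https://${HOST}:${MGMT_PORT}/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_refresh_queue"

# An empty JSON array ("[]") in the response means no jobs are stuck
# in the refresh queue.
curl -k -s --max-time 5 -u admin:changeme "${BASE}/" \
  || echo "request failed -- check host/port/credentials"
```

Each object in the returned array includes a `_key` field, which is the job ID you'd use in a targeted delete.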
If a job is sitting in there, you can delete it with a curl command against the specific job ID:
curl -k -u admin:changeme -X DELETE https://<yourhostname>:<mgmtport>/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_refresh_queue/<jobID>
Alternatively, if that doesn't work, you can drop the job ID from the curl command and clear the whole refresh queue. Clearing the whole queue has a caveat: if any jobs had been submitted to the queue but not yet processed, you may have to redo those changes, because the job that would have committed them to the proper KV store collection(s) has now been removed.
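The clear-everything variant is the same DELETE with no job ID on the end. Again a sketch with placeholder host, port, and credentials, and a `|| echo` guard so it fails gracefully outside a real ITSI environment:

```shell
# Hypothetical values -- substitute your own host, port, and credentials.
HOST="yourhostname"
MGMT_PORT=8089   # default Splunk management port
BASE="https://${HOST}:${MGMT_PORT}/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_refresh_queue"

# DELETE with no job ID wipes every job in the refresh queue -- any
# pending-but-uncommitted changes will need to be redone afterwards.
curl -k -s --max-time 5 -u admin:changeme -X DELETE "${BASE}/" \
  || echo "request failed -- check host/port/credentials"
```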
On the other hand, if the job(s) are no longer in the queue, then they were removed naturally, and the messages you're seeing in the UI can be disregarded. I'm looking into whether we have a setting to reduce how many of these messages you receive for that specific error, or whether this is something we may need to file an enhancement request for. I'll update if I find anything. Cheers!
EDIT: I managed to get a dev's ear about the frequency of the errors. It looks like we run a search in the background once every ~30 minutes specifically to check for any refresh queue job issues. According to the devs, we removed this message in either 4.2 or 4.3 so that the UI didn't get so bogged down with erroneous messages.