Can someone please tell me what this means, and where I can look to fix this? Thanks!
I had a similar issue. Essentially, if you use a saved search behind a dashboard, the search is tied to your username when you create it. When this goes to production it will still be tied to your username and could count against that user's search limits (quota, concurrent searches, etc.) without you realizing it. So a best practice is to remove your ownership of the search in the app's local.meta or default.meta before promoting your changes. You typically only have to do this once after creating the search, and it will stay owned by "nobody".
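For reference, here's a minimal sketch of what that might look like in the app's metadata file, assuming a saved search named "My Prod Search" (the stanza name, app path, and ACL values are illustrative, not from this thread):
# $SPLUNK_HOME/etc/apps/&lt;your_app&gt;/metadata/local.meta
# Spaces in the object name are URL-encoded in .meta stanzas
[savedsearches/My%20Prod%20Search]
owner = nobody
access = read : [ * ], write : [ admin ]
export = system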
I tried the solution of deleting the bundles and reattaching the indexer to the search head. It worked, but the error seems to be happening again. Is there a permanent solution to this problem?
Poorna
Had this on one search head and one of the indexers attached on 6.0.2.
Solution:
Thanks, my2ndhead!
We had this issue, and this answer led us in the right direction. We found specific issues between one of our search heads (SH) and one of our indexers:
Determine which SH is causing the issue (e.g. mySearchHead2) and which indexer, from the error message (e.g. myIndexer3)
Shut down SH "mySearchHead2"
Go to the $SPLUNK_HOME/var/run/searchpeers directory on myIndexer3
On the indexer, run: rm -rf mySearchHead2*
Start SH mySearchHead2 back up
Problem solved (a whole new bundle will be sent from mySearchHead2 to myIndexer3, and any bad deltas are gone)
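For anyone who wants these steps in one place, here's a rough sketch assuming the example host names above and a default $SPLUNK_HOME (adjust paths and names to your environment):
# On the search head (mySearchHead2): stop Splunk
$SPLUNK_HOME/bin/splunk stop
# On the indexer (myIndexer3): remove the replicated bundles for that search head
cd $SPLUNK_HOME/var/run/searchpeers
rm -rf mySearchHead2*
# Back on the search head: start Splunk again; a fresh full bundle will be pushed to the indexer
$SPLUNK_HOME/bin/splunk start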
I'm having this same message running 6.0.1
[Messaging Indexer] Streamed search execute failed because: User 'nobody' could not act as:
2 search heads with search head pooling, 3 clustered indexers, and I'm getting this message for only 1 of my indexers.
Any thoughts on whether this could be a symptom of other issues with that indexer, such as hardware or I/O constraints when accessing indexed data? For me the issue only pops up after some time.
Thanks
I too am looking for the solution to this, and have already logged a couple of tickets with Splunk support.
I have just got confirmation that it is fixed in 6.0.1.
Going to upgrade now.
BUT, if you cannot upgrade, as a workaround please add the lines below to the following file on your search head:
/opt/splunk/etc/system/local/distsearch.conf
[replicationSettings]
allowDeltaUpload = false
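As far as I know, the change only takes effect after you restart Splunk on the search head, e.g.:
/opt/splunk/bin/splunk restart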
Making this change in distsearch.conf is the "Big Giant Hammer" solution to this... I suggest trying my2ndhead's steps below, since they address just the hiccup that caused the problem. That way you get rid of the bad bundle data that was created instead of hiding the problem behind a setting change.
If you have large bundles, turning off the delta functionality adds unnecessary overhead when the root cause was just a minor hiccup in the bundle delta replication process.
I am using 6.0.2.2
I tried this workaround on an ad-hoc search head with multiple indexers and it worked for me.
I have not tried this on a search head pool, yet.
I am using 6.0.2.
We have 3 Search Heads and 4 Indexers with Search head pooling and mounted bundles.
I'm getting the same error.
Any ideas on this?
This is fixed in 5.0.6 and 6.0.1 (not 6.0.0 obviously).
The distsearch.conf change is a workaround that disables the whole delta-replication feature.