Splunk Search

[indexer] Streamed search execute failed because: User 'nobody' could not act as:


Can someone please tell me what this means, and where I can look to fix this? Thanks!



I had a similar issue. Essentially, if you use a saved search behind a dashboard, the search is tied to your username when you create it. When it goes to prod it will still be tied to your username and may count against your user's search limits (search size, concurrency, etc.) without you realizing it. So a best practice is to delete your ownership of the search in the app's local.meta or default.meta before promoting your changes. You typically only have to do this once, after search creation, and it will stay owned by "nobody".
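For illustration, an ownership stanza in the app's metadata/local.meta might look like the sketch below. The stanza name and access list here are hypothetical examples; the point is the owner line, which you either remove or set to nobody before promoting. Verify the exact syntax against the .meta documentation for your Splunk version.

```ini
[savedsearches/My%20Dashboard%20Search]
owner = nobody
access = read : [ * ], write : [ admin ]
export = system
```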


I tried the solution of deleting the bundles and re-attaching the indexer to the search head. It worked, but the error seems to be happening again. Is there a permanent solution to this problem?




Had this on one search head and one of the indexers attached on 6.0.2.


  • Remove the indexer from distributed search
  • Go to $SPLUNK_HOME/var/run/searchpeers and delete all bundle artifacts for the search head in question
  • Re-attach the indexer to the search head.

Path Finder

Thanks, my2ndhead!

We had this issue, and this answer led us in the right direction. We found specific issues between one of our SH and one of our Indexers:

  1. Determined which SH was causing the issue (e.g. mySearchHead2) and which Indexer from the error message (e.g. myIndexer3)

  2. Shutdown SH "mySearchHead2"

  3. Go to the $SPLUNK_HOME/var/run/searchpeers directory on myIndexer3

  4. On the indexer, run: rm -rf mySearchHead2*

  5. Start SH mySearchHead2 back up

  6. Problem solved (a whole new bundle will get sent from mySearchHead2 to myIndexer3, and any bad deltas are gone)
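As a sketch of steps 3–4, the cleanup on the indexer amounts to removing only that search head's bundle directories. The snippet below simulates this against a temporary directory so it is safe to run anywhere; on a real indexer the path is $SPLUNK_HOME/var/run/searchpeers, Splunk should be stopped first, and the host names here are the hypothetical ones from the steps above.

```shell
# Simulate an indexer's searchpeers directory in a temp location;
# on a real indexer this would be $SPLUNK_HOME/var/run/searchpeers.
SEARCHPEERS="$(mktemp -d)"

# Fake bundle artifacts from two search heads (names and timestamps
# are illustrative).
mkdir -p "$SEARCHPEERS/mySearchHead2-1398372284" \
         "$SEARCHPEERS/mySearchHead2-1398372999" \
         "$SEARCHPEERS/mySearchHead1-1398370000"

# Step 4: remove only the problem search head's artifacts.
rm -rf "$SEARCHPEERS"/mySearchHead2*

# mySearchHead1's bundle is untouched; mySearchHead2's are gone.
ls "$SEARCHPEERS"
```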


I'm seeing this same message running 6.0.1:

[Messaging Indexer] Streamed search execute failed because: User 'nobody' could not act as:

Two search heads with search head pooling and three clustered indexers, and I'm getting this message for only one of my indexers.

Any thoughts on whether this could be symptomatic of other issues with that indexer, e.g. hardware or I/O constraints when accessing indexed data? For me the issue only pops up after some time.



Path Finder

I too am looking for the solution to this, and have already logged a couple of tickets with Splunk support.

I have just got confirmation that it is fixed in 6.0.1.

Going to upgrade now.

BUT if you cannot upgrade, as a workaround, add the line below on your search head:



allowDeltaUpload = false
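For context, this setting goes in distsearch.conf on the search head; as best I can tell it belongs under the [replicationSettings] stanza, but verify the stanza name against the distsearch.conf spec for your version, and restart Splunk after the change.

```ini
[replicationSettings]
allowDeltaUpload = false
```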

Path Finder

Making this change in distsearch.conf is the "Big Giant Hammer" solution to this. I suggest trying my2ndhead's steps below, since those address just the hiccup that caused the problem. That way you get rid of the bad artifacts that were created, instead of hiding the problem behind a setting change.

If you have big bundles, turning off the delta functionality will cause unnecessary overhead if the root cause was a minor hiccup in the bundle delta replication process.


Path Finder


I tried this workaround on an ad-hoc search head with multiple indexers and it worked for me.

I have not tried this on a search head pool, yet.



I am using 6.0.2.
We have 3 Search Heads and 4 Indexers with Search head pooling and mounted bundles.
I'm getting the same error.

Any idea on this?


Splunk Employee

This is fixed in 5.0.6 and 6.0.1 (not 6.0.0 obviously).

The distsearch.conf change is a workaround that disables the whole delta-upload feature.
