Splunk Enterprise Security

Unable to distribute to peer from search head

raghu_vedic
Path Finder

Hi,

We are facing this issue frequently on our Splunk search head. Please help.

Unable to distribute to peer named XXXXXX at uri https://XX.XX.XX.XX:8089 because replication was unsuccessful. replicationStatus Failed failure info: Dispatch Command: Search bundle throttling is occurring because the limit for number of bundles with pending lookups for indexing has been exceeded. This could be the result of large lookup files updating faster than Splunk software can index them. Throttling ends when this instance has caught up with indexing of lookups. If you see this often, contact your Splunk administrator about tuning lookup sizes and max_memtable_bytes. Please verify connectivity to the search peer, that the search peer is up, and an adequate level of system resources are available. See the Troubleshooting Manual for more information.


ryhluc01
Communicator

Hey, can you accept @maciep's answer? It's thoughtfully detailed.


maciep
Champion

This is my understanding of how things work, what the problem could be, and what some possible solutions are...

Problem
When you run a search in Splunk, a zipped copy of the search head's configuration is sent down to the indexers ($SPLUNK_HOME/var/run/searchpeers on each indexer). This is the bundle the message is referring to. The more often things change on your search head, the more often you'll send an updated bundle (or deltas to that bundle).
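If you want to see what that looks like on an indexer, you can list the replicated bundles directly (assuming the default install path; adjust SPLUNK_HOME for your environment):

```shell
# Inspect the replicated knowledge bundles on an indexer.
# SPLUNK_HOME defaults to /opt/splunk here; change it for your install.
ls -lh "${SPLUNK_HOME:-/opt/splunk}/var/run/searchpeers"
```

Each entry there corresponds to a bundle (or delta) received from a search head, so the sizes and timestamps give you a quick sense of how large and how frequent the bundles are.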

That bundle includes the lookup files from your search head. However, if a lookup is over a certain size, Splunk will create an indexed version of it. It sounds like this could be happening a lot on your indexers: each time one downloads a bundle, it has to index the oversized lookups, which can be time-consuming. And apparently, when an indexer has too many bundles with lookups still pending indexing, you get that throttling message.

Possible Solutions
First, try to identify any lookups greater than 20MB (I believe that's the default lookup size limit before indexing kicks in).
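One quick way to do that is from the search head's filesystem (a sketch, assuming the standard apps/lookups layout and default install path):

```shell
# Find lookup files larger than 20MB under the apps directory.
# Path layout is the usual $SPLUNK_HOME/etc/apps/<app>/lookups/ convention;
# adjust SPLUNK_HOME for your install.
find "${SPLUNK_HOME:-/opt/splunk}/etc/apps" -type f -path '*/lookups/*' -size +20M -exec du -h {} +
```

Anything that shows up here is a candidate for the steps below.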

One, determine if you can exclude it from the bundle. Meaning, do any of your searches actually need the lookup? For example, in our environment, we have large lookups for assets. However, since ES merges all of those lists into one big list that it uses, we don't need to include our lookups in the search bundle. If you don't need a lookup, you can exclude it in distsearch.conf on your search heads, I believe via the replication blacklist setting.
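A sketch of what that exclusion could look like (the stanza is the replicationBlacklist setting mentioned above; the app and file names are hypothetical, and paths are relative to $SPLUNK_HOME/etc, so verify against your version's distsearch.conf docs):

```ini
# distsearch.conf on the search head(s)
[replicationBlacklist]
# "exclude_big_assets" is just a label; the value is the file to keep out of the bundle.
exclude_big_assets = apps/myapp/lookups/big_assets_lookup.csv
```

After a change like this, the named lookup will no longer be shipped to the indexers, so searches that reference it will only work locally on the search head.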

Two, if you do need it in the bundle, how often are you updating it, and can you update it less often? If you update it less often, bundles/deltas will be sent less often as well.
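If the lookup is built by a scheduled search, its cadence lives in savedsearches.conf. A hedged sketch (the stanza name is hypothetical; only the schedule line matters here):

```ini
# savedsearches.conf on the search head
# [update_big_assets_lookup] is a hypothetical saved search that rebuilds the lookup.
[update_big_assets_lookup]
# Run once nightly at 02:00 instead of, say, every 5 minutes,
# so fresh bundle deltas are pushed far less often.
cron_schedule = 0 2 * * *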

Three, increase that lookup-indexing limit. That's the max_memtable_bytes setting the message is referring to, which is in limits.conf. However, I believe that needs to be done on your indexers, not your search head, since it is the indexer that is busy indexing the lookups coming from the bundles. Keep in mind that lookups are indexed in order to improve performance, so increasing the limit could affect lookup performance as well.
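The setting sits in the [lookup] stanza of limits.conf; a sketch (the 50MB value is just an example, not a recommendation):

```ini
# limits.conf on each indexer
[lookup]
# Raise the size threshold (in bytes) above which Splunk builds an
# indexed version of a lookup. 52428800 = 50MB, chosen purely for illustration.
max_memtable_bytes = 52428800
```

Lookups under the threshold are instead loaded into memory, so raising it trades indexing work for memory use at search time.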
