Splunk Search

What does the following "red bar" message mean: "Reached end-of-stream while waiting for more data from peer [INDEXER HOST NAME]. Search results might be incomplete"?

kbecker
Communicator

This error has started showing up when searching back across larger data sets. We have several indexers, and only one shows up in the warning. Any ideas on what this means?

Thanks.

1 Solution

kbecker
Communicator

Determined that mounted storage was not functioning properly. Thanks.


gcoles
Communicator

It seems that my first comment (after deeboh) got truncated somehow. It is probably worth showing what my knowledge bundle tree structure looks like to complement the distsearch.conf on the peers, etc:

/opt/splunk_shared_config/etc
├── apps
├── system -> ../lh001/system
└── users
/opt/splunk_shared_config/lh001
├── apps -> ../etc/apps
├── system
└── users -> ../etc/users
/opt/splunk_shared_config/lh002
├── apps -> ../etc/apps
├── system
└── users -> ../etc/users

The first parent folder is used for search head pooling; it holds the apps and users directories shared by the pool. The next two parent folders are the knowledge bundle folders, one per search head, referenced in distsearch.conf on the peers/indexers. The system directory in each bundle folder is a copy of that head's system folder. The apps and users entries in each bundle folder are symbolic links back to the live versions used for pooling.
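In case it helps anyone reproduce this, the tree above is just plain directories and relative symlinks. A minimal sketch (the base is /opt/splunk_shared_config in my setup; it's a variable here so the sketch is easy to try out elsewhere):

```shell
# Recreate the shared-config layout shown above.
# BASE is /opt/splunk_shared_config in this thread.
BASE=${BASE:-/tmp/splunk_shared_config}

# Pooled config used by both search heads
mkdir -p "$BASE/etc/apps" "$BASE/etc/users"

# One bundle folder per search head: a real copy of that head's
# etc/system, with apps/users symlinked back to the pooled versions
for sh in lh001 lh002; do
  mkdir -p "$BASE/$sh/system"
  ln -sfn ../etc/apps  "$BASE/$sh/apps"
  ln -sfn ../etc/users "$BASE/$sh/users"
done

# etc/system points at one head's system copy
ln -sfn ../lh001/system "$BASE/etc/system"
```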

From a networking perspective, my peers/indexers have a single NIC on VLAN X. My heads have two NICs: one on X and one on Y. Users connect to the heads via a load balancer on VLAN Y, and the heads use VLAN X to connect to the peers. I have used packet tracing (i.e. tcpdump) to verify that the packets are flowing over the correct VLANs. The traffic is SSL-encrypted but appears normal in terms of TCP. I also verified that the keys for the two search heads are present on the peers in $SPLUNK_HOME/etc/auth/distServerKeys/{SEARCHHEAD}.

I am searching on index=_internal to avoid any differences in the indexing config between the heads and peers. I reduced the peer list to a single host for testing. The logs on the peer show API requests coming in from the heads and normal 200/201 responses, no auth failures, etc. This issue is present when searching on either head.
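For anyone following along, the key check mentioned above looks roughly like this. A sketch only: the setup lines exist purely so it runs standalone (on a real peer the trusted.pem files are written when the search head is added as a distributed peer), and SPLUNK_HOME would normally be something like /opt/splunk:

```shell
SPLUNK_HOME=${SPLUNK_HOME:-/tmp/splunk}

# Standalone setup only; do not do this on a real peer.
for sh in lh001 lh002; do
  mkdir -p "$SPLUNK_HOME/etc/auth/distServerKeys/$sh"
  touch "$SPLUNK_HOME/etc/auth/distServerKeys/$sh/trusted.pem"
done

# The actual check: each search head's public key must be present on the peer.
for sh in lh001 lh002; do
  if [ -e "$SPLUNK_HOME/etc/auth/distServerKeys/$sh/trusted.pem" ]; then
    echo "$sh: key present"
  else
    echo "$sh: key MISSING"
  fi
done
```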


gcoles
Communicator

Thanks, gekoner. Yeah, this is a weird one. I/O hopefully isn't the cause, as it's a dedicated filer for this purpose, and we don't have any users on the new cluster yet (or data coming in).

Interestingly, before I had configured mounted bundles, I had added these indexers as normal search peers, and everything worked properly. So this is definitely something to do with mounted bundles, the filer, directory paths/permissions, etc. I wish the docs were more explicit in this area.

I've opened a support ticket with Splunk and will post an answer here after we've worked it out.


gekoner
Communicator

gcoles, here are my two guesses at what your issue might be. Since this is such a generic error and you have a somewhat complex setup, I can only take a stab at it.

1) Your search head is having communication issues with the indexed data, either because of the network setup or a network timeout, or because your Sun NFS filer has errors or has maxed out its I/O.
2) There is some bug in Splunk that is specific to your config. You'll have to get Splunk support to give any further details here.

Hope that gives you something to go on. It's all I can come up with.


gcoles
Communicator

To continue -- our search heads are set to look at the /opt/splunk_shared_config/etc folder. The indexers are configured with this distsearch.conf:

[searchhead:lh001]
mounted_bundles = true
bundles_location = /opt/splunk_shared_config/lh001

[searchhead:lh002]
mounted_bundles = true
bundles_location = /opt/splunk_shared_config/lh002

This was the best I could glean about the correct setup from here:

http://splunk-base.splunk.com/answers/30643/help-with-search-head-pooling-mounted-knowledge-bundle


gcoles
Communicator

Just tried out my above-mentioned theory; it didn't pan out.


gcoles
Communicator

Thanks for the quick reply, gekoner.
Yes, I can read/write to any of the directories on the /opt/splunk_shared_config mount point from either the search heads or the peers. I am using a Sun NFS filer to share the mount point with the hosts. I have the $SPLUNK_HOME/etc/system folder from each search head copied onto the filer. Do you think a live copy of the heads' system directories is required? If so, I could move system onto the filer for each head and symlink it back to the $SPLUNK_HOME/etc/system path. The docs are very unclear about this setup, unfortunately.
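That "move and symlink back" idea, sketched out per search head (lh001 shown). Hypothetical only: SPLUNK_HOME is normally something like /opt/splunk, and the setup lines exist purely so this runs standalone:

```shell
SPLUNK_HOME=${SPLUNK_HOME:-/tmp/splunk-lh001}   # /opt/splunk on a real head
SHARED=${SHARED:-/tmp/splunk_shared_config}     # /opt/splunk_shared_config in this thread

mkdir -p "$SPLUNK_HOME/etc/system" "$SHARED/lh001"  # standalone setup only
rm -rf "$SHARED/lh001/system"                       # standalone setup only

# Move the head's live system dir onto the filer, then link it back
mv "$SPLUNK_HOME/etc/system" "$SHARED/lh001/system"
ln -sfn "$SHARED/lh001/system" "$SPLUNK_HOME/etc/system"
```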


gekoner
Communicator

Did you check reads/writes from your Splunk indexer servers to these locations?
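A quick way to answer this from each indexer: write, read back, and remove a probe file on the mounted bundle path. The bundle path below matches the distsearch.conf in this thread (moved under /tmp so the sketch is standalone), and the probe filename is arbitrary:

```shell
# Run as the splunk user on each peer/indexer.
BUNDLE=${BUNDLE:-/tmp/splunk_shared_config/lh001}   # /opt/... in this thread
mkdir -p "$BUNDLE"                                  # standalone setup only

# Write, read back, and clean up a probe file on the mount
echo probe > "$BUNDLE/.rw_probe"
test "$(cat "$BUNDLE/.rw_probe")" = "probe" && echo "read/write OK"
rm -f "$BUNDLE/.rw_probe"
```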


deeboh
Path Finder

Hey kbecker. Can you elaborate on what storage issue you solved? Do you have a particular type of monitor on your storage that helped you solve this issue? Are you using local storage, SAN or NAS?

Thanks


gekoner
Communicator

It means that one of your indexers is not communicating all of the indexed data to your search head. This could be because an index is no longer available or no longer exists, or because the data doesn't have the right EOF set. It could also mean you have network or communication issues with that indexer.
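A reasonable first step when chasing this down is to look at the named peer's splunkd log around the time of the failed search. A sketch: the log normally lives at $SPLUNK_HOME/var/log/splunk/splunkd.log, the sample line below is fabricated (quoting the red-bar message) purely so this runs standalone, and idx01 is a hypothetical peer name:

```shell
LOG=${LOG:-/tmp/splunkd.log}   # normally $SPLUNK_HOME/var/log/splunk/splunkd.log

# Fabricated sample entry, standalone setup only:
echo 'Reached end-of-stream while waiting for more data from peer idx01' > "$LOG"

# Look for stream/search errors around the time of the failed search
grep -iE 'end-of-stream|distributedsearch' "$LOG" | tail -n 20
```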
