Deployment Architecture

Why are we seeing slow performance of the Search app and the error "Failed to fetch REST endpoint..." in our search head cluster?

banderson7
Communicator

Three Splunk 6.3 search heads + one 6.3 deployer (also deployment server and license server) + two 6.3 indexers. The environment has been up for four months.

About a month ago, the Search & Reporting app started taking up to a minute to open on all three search heads (both through the VIP address and when going to each server directly). Once in the Search app, it also takes up to a minute to open the Dashboards link. All other apps load fine.
I'm not sure whether this is related, but I'm consistently getting the following error in the distributed management console for the search head cluster. On the captain status panel (which shows the instance, URI, and time elected captain), there's an error triangle, and clicking it reveals:

Failed to fetch REST endpoint uri=https://127.0.0.1/services/shcluster/captain/info?count=0 from server=https://127.0.0.1:8089

On the three members' status there's another triangle; clicking it reveals:

Failed to fetch REST endpoint uri=https://127.0.0.1:8089/services/shcluster/member/members?count=0 from server=https://127.0.0.1:8089
Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/services/shcluster/member/members?count=0 from server=https://127.0.0.1:8089 - Service Unavailable

I'd appreciate any suggestions. I've already deleted the navigation views on all three search heads, but that didn't help.
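
One way to narrow down where the Service Unavailable comes from is to query the same endpoints the DMC is polling directly on each instance; the "admin" user below is a placeholder and curl will prompt for the password:

    # Run on each search head and on the deployer.
    curl -k -u admin "https://localhost:8089/services/shcluster/captain/info?count=0"
    curl -k -u admin "https://localhost:8089/services/shcluster/member/members?count=0"
    # A 503 from an instance that is not actually a cluster member suggests the
    # DMC is polling the wrong box; a 503 from a real member points at the cluster itself.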

0 Karma
1 Solution

banderson7
Communicator

The slowness turned out to be a bad connection to an LDAP auth provider. Authentication would still succeed against that server, but every time Splunk search was used it appeared to poll LDAP first, and that one slow server made life miserable.
The odd REST messages turned out to be a bug in the DMC, #SPL-108633:CLONE .
Thanks for the reply 🙂
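
For anyone chasing similar symptoms, a rough way to confirm a slow LDAP provider from the search head itself is to time a bind and lookup against each configured server; the hostnames, bind DN, and base DN below are placeholders for your own environment:

    # Time a simple lookup against each LDAP server Splunk is configured to use.
    time ldapsearch -x -H ldaps://ldap1.example.com:636 \
        -D "cn=splunk-bind,ou=service,dc=example,dc=com" -W \
        -b "dc=example,dc=com" "(sAMAccountName=someuser)" dn

    # List the LDAP strategies and any timeout settings Splunk has picked up.
    $SPLUNK_HOME/bin/splunk btool authentication list --debug | egrep -i 'host|timeout'

If one provider is consistently slow, tightening the LDAP timeouts in authentication.conf (or dropping the bad host from the strategy) limits how long each search waits, though the exact setting names depend on your Splunk version.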


jplumsdaine22
Influencer

Glad you got sorted out!

0 Karma

jplumsdaine22
Influencer

Not sure about the slowness, but the error message might be caused by incorrect labelling of the SHC deployer. In the DMC, under Settings > General Setup (i.e. the inventory screen), make sure there is no search head cluster label attached to the server that has the SHC Deployer role.
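
One way to see that distinction (the credentials here are placeholders) is that search head cluster commands only succeed on actual members, not on the deployer:

    # On a real SHC member this prints captain/member status; on the deployer,
    # which is not a member, it errors out -- which is presumably why giving the
    # deployer a cluster label leads the DMC to poll shcluster endpoints that fail.
    $SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:changeme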

dhawal_sanghvi
New Member

I am getting the same error. Should I remove the search head cluster label attached to the server that has the SHC Deployer role? If I do so and apply the changes, I get the following warning: "At least one of your instances is a search head deployer without a search head cluster label. We recommend you edit these instances to set their search head cluster labels." How should I fix this error: Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/services/shcluster/member/members?count=0 from server=https://127.0.0.1:8089 - Unauthorized

0 Karma

cborgal
Explorer

This fixed the same REST error messages for me.

0 Karma