Building for the Splunk Platform

replication was unsuccessful. replicationStatus Failed failure info: failed_because_NONE

rusty009
Path Finder

Hi,

I'm seeing a weird issue in Splunk. I am getting the errors below on my search head:

• Unable to distribute to peer named pdc1secsplkidx3 at uri https://10.100.50.13:8089 because replication was unsuccessful. replicationStatus Failed failure info: failed_because_NONE
• Unable to distribute to peer named pdc1secsplkidx2 at uri https://10.100.50.12:8089 because replication was unsuccessful. replicationStatus Failed failure info: failed_because_NONE
• Unable to distribute to peer named pdc1secsplkidx1 at uri https://10.100.50.11:8089 because replication was unsuccessful. replicationStatus Failed failure info: failed_because_NONE

Those are my three indexers. When I restart the search head it works for about 5 minutes, and then it doesn't seem to be able to search any data.

I can search historic data (no earlier than yesterday) fine on the CM, so I think there is an issue with the search head.

Where can I look further?

1 Solution

muebel
SplunkTrust
SplunkTrust

Hi rusty009, this seems like an issue with the bundle size, or otherwise some issue with the search head transmitting the bundle to the indexer. This is a tarball of config files, which tends to grow over time, and usually gets tipped over the edge by large lookup files.

My recommendation is to look at the internal logs from the indexers with a search like:

index=_internal method=POST status!=200 source="*splunkd_access.log"
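
Before digging into the logs, it can also help to measure the bundle inputs directly on the search head. This is a sketch, assuming a default install under /opt/splunk (set SPLUNK_HOME to match your environment); the 10 MB threshold is an arbitrary starting point, not a Splunk limit:

```shell
# Use the environment's SPLUNK_HOME if set, else the default install path.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"

# List lookup CSVs over 10 MB -- the usual suspects for bundle bloat.
find "$SPLUNK_HOME/etc/apps" -name '*.csv' -size +10M 2>/dev/null | sort
```

If this turns up multi-hundred-megabyte lookups, that is a strong hint the bundle push is timing out or being rejected by the peers.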

If you find events indicating issues, you can configure a [replicationBlacklist] stanza in distsearch.conf to stop certain files or directories from being included in the bundle, or otherwise remove unneeded files entirely.
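
For reference, a minimal sketch of such a stanza. Note that the [replicationBlacklist] stanza lives in distsearch.conf on the search head (not server.conf); the app name `myapp` and lookup file `huge_lookup.csv` below are hypothetical placeholders, and the exact wildcard syntax is documented in distsearch.conf.spec:

```
# $SPLUNK_HOME/etc/system/local/distsearch.conf on the search head
[replicationBlacklist]
# Each entry is <name> = <path pattern> relative to $SPLUNK_HOME/etc;
# matching files are excluded from the knowledge bundle sent to peers.
exclude_huge_lookup = apps[/\\]myapp[/\\]lookups[/\\]huge_lookup.csv
```

Be aware that a blacklisted lookup will no longer be available to searches running on the indexers, so only exclude files that searches do not actually need at search time.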

View solution in original post

