Splunk Dev

replication was unsuccessful. replicationStatus Failed failure info: failed_because_NONE

rusty009
Path Finder

Hi ,

I'm seeing a weird issue in Splunk. I am getting the errors below on my search head:

• Unable to distribute to peer named pdc1secsplkidx3 at uri https://10.100.50.13:8089 because replication was unsuccessful. replicationStatus Failed failure info: failed_because_NONE
• Unable to distribute to peer named pdc1secsplkidx2 at uri https://10.100.50.12:8089 because replication was unsuccessful. replicationStatus Failed failure info: failed_because_NONE
• Unable to distribute to peer named pdc1secsplkidx1 at uri https://10.100.50.11:8089 because replication was unsuccessful. replicationStatus Failed failure info: failed_because_NONE

Those are my three indexers. When I restart the search head it works for about 5 minutes, and then it doesn't seem to be able to search any data.

I can search historic data (no earlier than yesterday) fine on the CM, so I think there is an issue with the search head.

Where can I look further?

1 Solution

muebel
SplunkTrust

Hi rusty009, this looks like an issue with the bundle size, or more generally with the search head transmitting the knowledge bundle to the indexers. The bundle is a tarball of configuration files that tends to grow over time, and it usually gets tipped over the edge by large lookup files.

My recommendation is to look at the internal logs for the indexers with a search like:

index=_internal method=POST status!=200 source="*splunkd_access.log"

If you find events indicating issues, you can configure replicationBlacklist in distsearch.conf to stop certain files or directories from being included in the bundle, or otherwise remove unneeded files entirely.
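As a minimal sketch, a replicationBlacklist stanza goes in distsearch.conf on the search head; the stanza key names below and the lookup path are illustrative, not from the original post (newer Splunk versions also accept the replicationDenylist spelling):

```ini
# distsearch.conf on the search head -- illustrative example, adjust
# the pattern to whatever large file the splunkd_access.log search surfaced
[replicationBlacklist]
# Each entry is <arbitrary_name> = <wildcard pattern>, matched against
# paths relative to $SPLUNK_HOME/etc; matching files are excluded
# from the knowledge bundle sent to the indexers.
huge_lookup = apps/search/lookups/very_large_lookup.csv
```

A restart (or a debug/refresh of distributed search) is needed before the exclusion takes effect, and anything you exclude will no longer be available to searches running on the indexers.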

