<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Search peer and search process errors in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Search-peer-and-search-process-errors/m-p/410490#M118441</link>
    <description>Splunk Community topic: after a forced restart of the physical servers, indexer cluster peers fail to re-register with the cluster master (HTTP 500, non-zero pending job count) and searches may return partial results.</description>
    <pubDate>Tue, 29 Sep 2020 23:27:14 GMT</pubDate>
    <dc:creator>willsy</dc:creator>
    <dc:date>2020-09-29T23:27:14Z</dc:date>
    <item>
      <title>Search peer and search process errors</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Search-peer-and-search-process-errors/m-p/410490#M118441</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;

&lt;P&gt;Our physical servers had to restart, and as a result the Splunk servers went down.&lt;/P&gt;

&lt;P&gt;We are now having issues on our cluster master and our indexers.&lt;/P&gt;

&lt;P&gt;Our deployment looks like this:&lt;/P&gt;

&lt;P&gt;DCAXXXG013 CM and LM&lt;BR /&gt;
DCAXXXG014 IDX&lt;BR /&gt;
DCAXXXG015 IDX&lt;BR /&gt;
DCAXXXG016 IDX&lt;BR /&gt;
DCAXXXG017 SH&lt;/P&gt;

&lt;P&gt;DCPXXXG013 DS&lt;BR /&gt;
DCPXXXG014 IDX&lt;BR /&gt;
DCPXXXG015 IDX&lt;BR /&gt;
DCPXXXG016 IDX&lt;BR /&gt;
DCPXXXG017 SH&lt;/P&gt;

&lt;P&gt;The indexers on Site A and Site P are both clustered. I'm just wondering if anyone can shed some light on where to go and how to progress from here, if possible.&lt;/P&gt;

&lt;P&gt;Search peer DCAOVSG016 has the following message: Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=dcaovsg013:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500 expected_response_code=2xx status_line="Internal Server Error" socket_error="No error" remote_error=Cannot add peer=172.26.10.49 mgmtport=8089 (reason: non-zero pending job count=1, guid=ADA4AE8A-B93F-48E2-88CC-F47CDDCB9AE4). [ event=addPeer status=retrying AddPeerRequest: { _id= active_bundle_id=EDA5C78B2096F563800873D7CBD2A6DF add_type=ReAdd-As-Is base_generation_id=2073 batch_serialno=1 batch_size=3 forwarderdata_rcv_port=9197 forwarderdata_use_ssl=0 last_complete_generation_id=2077 latest_bundle_id=EDA5C78B2096F563800873D7CBD2A6DF mgmt_port=8089 name=ADA4AE8A-B93F-48E2-88CC-F47CDDCB9AE4 register_forwarder_address= register_replication_address= register_search_address= replication_port=9100 replication_use_ssl=0 replications= server_name=DCAOVSG016 site=site1 splunk_version=7.2.0 splunkd_build_number=8c86330ac18 status=Up } ].&lt;/P&gt;
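
&lt;P&gt;For reference, this is roughly how I have been checking what the cluster master itself reports for its peers, by calling the same REST endpoint the peers are failing to register against (the one in the message above). It is only a minimal sketch: the credentials are placeholders and verify=False assumes the default self-signed certificates.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;import requests

# GET the endpoint the peers are POSTing to (from the error above) to see
# which peers the cluster master currently has registered and their status.
# The host comes from the message (master=dcaovsg013:8089); the credentials
# are placeholders for this sketch.
CM = "https://dcaovsg013:8089"

resp = requests.get(
    CM + "/services/cluster/master/peers",
    params={"output_mode": "json"},
    auth=("admin", "changeme"),  # placeholder credentials
    verify=False,                # default self-signed certificate
)
resp.raise_for_status()

for entry in resp.json().get("entry", []):
    content = entry.get("content", {})
    print(entry.get("name"), content.get("label"), content.get("status"))&lt;/CODE&gt;&lt;/PRE&gt;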

&lt;P&gt;Indexer Clustering: The search process with sid=rt_scheduler_&lt;EM&gt;admin_QkNOX1RBX1dpbmRvd3MtU2VydmVycw&lt;/EM&gt;_RMD5d0958093cdddf4f3_at_1551270120_1818 on peer=DCAOVSG014 may have returned partial results due to a reading error while waiting for the peer. This can occur if the peer unexpectedly closes or resets the connection during a planned restart. Try running the search again. Learn more.&lt;BR /&gt;
2/27/2019, 12:22:34 PM&lt;/P&gt;

&lt;P&gt;Search peer DCAOVSG014 has the following message: Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=dcaovsg013:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500 expected_response_code=2xx status_line="Internal Server Error" socket_error="No error" remote_error=Cannot add peer=172.26.10.47 mgmtport=8089 (reason: non-zero pending job count=2, guid=3724715E-6BAC-46F9-AFE7-06917EF3FD3C). [ event=addPeer status=retrying AddPeerRequest: { _id= active_bundle_id=EDA5C78B2096F563800873D7CBD2A6DF add_type=ReAdd-As-Is base_generation_id=2086 batch_serialno=1 batch_size=2 forwarderdata_rcv_port=9197 forwarderdata_use_ssl=0 last_complete_generation_id=2093 latest_bundle_id=EDA5C78B2096F563800873D7CBD2A6DF mgmt_port=8089 name=3724715E-6BAC-46F9-AFE7-06917EF3FD3C register_forwarder_address= register_replication_address= register_search_address= replication_port=9100 replication_use_ssl=0 replications= server_name=DCAOVSG014 site=site1 splunk_version=7.2.0 splunkd_build_number=8c86330ac18 status=Up } ].&lt;/P&gt;
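
&lt;P&gt;I have also been pulling the matching lines straight out of splunkd.log on the cluster master. The sketch below assumes a default /opt/splunk install path, so adjust it for your environment.&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# Scan the cluster master's splunkd.log for the registration failures quoted
# above ("Cannot add peer", "pending job"). The log path assumes a default
# /opt/splunk installation.
LOG = "/opt/splunk/var/log/splunk/splunkd.log"
MARKERS = ("Cannot add peer", "pending job", "addPeer")

with open(LOG, errors="replace") as f:
    for line in f:
        if any(marker in line for marker in MARKERS):
            print(line.rstrip())&lt;/CODE&gt;&lt;/PRE&gt;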

&lt;P&gt;Any help is greatly appreciated. &lt;/P&gt;

&lt;P&gt;Cheers&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 23:27:14 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Search-peer-and-search-process-errors/m-p/410490#M118441</guid>
      <dc:creator>willsy</dc:creator>
      <dc:date>2020-09-29T23:27:14Z</dc:date>
    </item>
    <item>
      <title>Re: Search peer and search process errors</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Search-peer-and-search-process-errors/m-p/410491#M118442</link>
      <description>&lt;P&gt;You'll want to check the logs on &lt;CODE&gt;dcaovsg013&lt;/CODE&gt;: it's returning 500 errors ( &lt;CODE&gt;actual_response_code=500&lt;/CODE&gt; ) with &lt;CODE&gt;reason: non-zero pending job&lt;/CODE&gt;, so there's probably some outstanding issue or load on that machine.&lt;/P&gt;</description>
      <pubDate>Wed, 27 Feb 2019 23:17:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Search-peer-and-search-process-errors/m-p/410491#M118442</guid>
      <dc:creator>terminaloutcome</dc:creator>
      <dc:date>2019-02-27T23:17:17Z</dc:date>
    </item>
    <item>
      <title>Re: Search peer and search process errors</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Search-peer-and-search-process-errors/m-p/581800#M202679</link>
      <description>&lt;P&gt;Were you able to fix this issue? If so, please share the solution. Thanks.&lt;/P&gt;</description>
      <pubDate>Thu, 20 Jan 2022 02:39:49 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Search-peer-and-search-process-errors/m-p/581800#M202679</guid>
      <dc:creator>dm1</dc:creator>
      <dc:date>2022-01-20T02:39:49Z</dc:date>
    </item>
  </channel>
</rss>

