Have 1 indexer and 1 search head, on separate VMs. When trying to view indexed data from the search head UI we receive the error "Problem replicating config (bundle) to search peer 'xxxxx.yyyy.com:8089', got http response code 400 HTTP/1.1 400 Unparsable URI-encoded request data." and get "No results found" in the search window. We know there is indexed data because we can search it using the UI on the indexer itself.
Search Head (SH) splunkd.log shows:
ERROR DistributedBundleReplicationManager - got non-200 response from peer. uri=https://xxxxx.yyyy.com:8089, reply="HTTP/1.1 400 Unparsable URI-encoded request data" response_code=400
ERROR DistributedBundleReplicationManager - Unable to upload bundle to peer named with uri=https://xxxxx.yyyy.com:8089
Indexer splunkd_access.log shows the 400 error:
- - [13/Mar/2017:14:31:59.835 -0400] "POST /services/receivers/bundle/ HTTP/1.0" 400 153 - - - 0ms
There are no ERRORS in the splunkd.log on the Indexer.
There is physical connectivity to the host/port from the SH to the Indexer (we would not see the log entry in the indexer if there was no connectivity).
What is going on, and how can we correct it?
Here was the resolution, we think. We did not have the Search Head configured as a license slave, so it was not working with the same license that was installed on the Indexer (which we have designated as the license master). Once we set the search head to be a license slave, the bundle distribution worked and we were able to search data in the index(es). It seems a more accurate error could be produced in these situations (without having to set any logging to the Debug level).
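One way to confirm whether the two instances actually point at the same license master is to query the licenser configuration on both boxes and compare what each reports. A minimal sketch, assuming the default management port 8089 and admin credentials (adjust hosts and credentials for your environment; exact CLI object names can vary slightly by Splunk version):
# Run on both the search head and the indexer, then compare which license master each reports
$SPLUNK_HOME/bin/splunk list licenser-localslave
# Equivalent REST query against the local management port
curl -k -u admin:changeme https://localhost:8089/services/licenser/localslave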
This can happen when the two systems point at different license masters/licenses.
You should point the license slave at the same master_uri / the same license pool.
On the License-Slave:
splunk edit licenser-localslave -master_uri https://<license-master>:8089
Or via server.conf:
[license]
master_uri = https://<licensemaster>:8089
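Either way, splunkd on the search head typically needs a restart for the new license master setting to take effect, for example:
# Restart splunkd on the search head after changing the license configuration
$SPLUNK_HOME/bin/splunk restart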
However, an HTTP 400 response is a very ambiguous message. Splunk should implement a proper error message for this case.
Even if the license configuration is correct, you could have a bad route or a firewall block. To test for this, you can do the following:
[splunk@MyDeploymentServer]$ /usr/bin/echo > /dev/tcp/License.Master.IP.Here/8089 && /usr/bin/echo "master is reachable" || /usr/bin/echo "master is unreachable: $(/usr/bin/date)"
-bash: connect: No route to host
-bash: /dev/tcp/License.Master.IP.Here/8089: No route to host master is unreachable: Wed Dec 22 11:18:19 EST 2021
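The same quick reachability test applies to the path this thread is actually about: the search head talking to the indexer's management port. A sketch, assuming bash (the /dev/tcp redirection is a bash feature) and substituting the indexer host name from the error message:
# Run on the search head: checks raw TCP reachability of the indexer's management port
/usr/bin/echo > /dev/tcp/xxxxx.yyyy.com/8089 && /usr/bin/echo "indexer is reachable" || /usr/bin/echo "indexer is unreachable: $(/usr/bin/date)"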
If the problem is resolved, please accept an answer.
I would log into the search head, remove the indexer as a search peer, then re-add it. It sounds like something has become confused on the back end.
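If you prefer the CLI over the UI (Settings > Distributed search > Search peers), something along these lines should work on the search head; treat it as a sketch, since the exact flags differ between Splunk versions, and substitute your own host name and credentials:
# Run on the search head: remove the indexer as a search peer, then add it back
$SPLUNK_HOME/bin/splunk remove search-server https://xxxxx.yyyy.com:8089 -auth admin:changeme
$SPLUNK_HOME/bin/splunk add search-server https://xxxxx.yyyy.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme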