Getting Data In

Why am I getting error "Unexpected character while looking for value: '<'" using Indexer Discovery in a master node of an indexer cluster?


hi guys

I am just having a go with the new feature of Indexer Discovery at the master node of my 6.3 cluster.

I configured the following things:

In the master's server.conf I added:


In my Heavy Forwarder, I created the following outputs.conf under $SPLUNK_HOME/etc/apps/SplunkForwarder/local/

master_uri = https://<my_ip>:8089

autoLBFrequency = 30
forceTimebasedAutoLB = true
indexerDiscovery = master1

defaultGroup = group1
forwardedindex.filter.disable = true
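
For reference, in standard outputs.conf syntax those settings fall under the following stanzas (the stanza headers are inferred here from the `master1` and `group1` names used above, since they were not shown):

```ini
# outputs.conf on the heavy forwarder (stanza layout inferred)
[indexer_discovery:master1]
master_uri = https://<my_ip>:8089

[tcpout:group1]
autoLBFrequency = 30
forceTimebasedAutoLB = true
indexerDiscovery = master1

[tcpout]
defaultGroup = group1
forwardedindex.filter.disable = true
```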

I was trying to forward my _internal data, however, I am getting the following errors on splunkd.log

09-30-2015 16:45:50.639 +0100 ERROR HttpClientRequest - Caught exception while parsing HTTP reply: Unexpected character while looking for value: '<'
09-30-2015 16:45:50.639 +0100 ERROR IndexerDiscoveryHeartbeatThread - failed heartbeat for group=group1 uri=https://<my_ip>:8089/services/indexer_discovery http_response=Unauthorized

So, it's quite clear that there is a problem when the forwarder needs to contact the Indexer Discovery feature on the master.

When I try to browse https://<my_ip>:8089/services/indexer_discovery I just get an error page.

Can you help me? What am I missing??

Splunk Employee

You need to set pass4SymmKey in the [indexer_discovery] stanza of the forwarder's outputs.conf so that it matches the cluster master's pass4SymmKey: either the one set in its [indexer_discovery] stanza or, if that is not set, the one in its [general] stanza.

On the other hand, your curl request is not valid. It should look something like this (it's a POST and needs authentication):

curl -k -u admin:changeme -d "site=default" -d "guid=xxxx" https://localhost:8090/services/indexer_discovery


I did not set any pass4SymmKey in [indexer_discovery]. By [general], do you mean the cluster's pass4SymmKey?


Splunk Employee

You must set the pass4SymmKey on the forwarders when using indexer discovery. The docs were incorrect on this issue, but have now been updated.


Splunk Employee

Also, if you do not explicitly set pass4SymmKey in the cluster master's [indexer_discovery] stanza, the master will use the value in its [general] stanza - either a value that you have explicitly set there or the default value.

In either case, the forwarder's value in [indexer_discovery] must match that value.

Therefore, the simplest way to deal with this is to set pass4SymmKey in the [indexer_discovery] stanza on the master, as well as on all the forwarders.
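
A minimal sketch of that matching pair (the key shown is a placeholder; use your own secret):

```ini
# server.conf on the cluster master
[indexer_discovery]
pass4SymmKey = <your_secret>    # placeholder value

# outputs.conf on each forwarder
[indexer_discovery:master1]
pass4SymmKey = <your_secret>    # must match the master's value above
master_uri = https://<my_ip>:8089
```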



hum... thanks for your help, but I think something else is going on.
I copied the exact config from the doc example onto my CM and my HWF. As soon as I restart the HWF with the new config, it even stops indexing its internal data, so I have to go look at splunkd.log by hand, and this is the ERROR I get:

10-02-2015 10:52:43.908 +0100 ERROR IndexerDiscoveryHeartbeatThread - failed to parse response payload for group=group1, err=failed to extract FwdTarget from json node={"hostport":"?","ssl":false,"indexing_disk_space":23184490496}http_response=OK

Any ideas? It seems something funky is going on with IndexerDiscovery, but at the same time I don't get why the HWF stops indexing its own internal data.


Super Champion

I had this same error, but in my case it was caused by something weird with one of the indexers. I ran the curl command noted above and saw that one of the 4 indexers was missing (its entry was replaced with hostport="?"). So I went to the "missing" indexer and ran splunk offline; after that the UF started working correctly and the curl command returned a valid list of indexers. Very weird.


Super Champion

For anyone who cares (or next time I run into this issue), ... I found and resolved my issue.

I missed setting up a TCP Input on one of the 4 peer nodes. (This is why automation rocks, and doing stuff by hand is evil).

I found that if I hit the /services/cluster/config endpoint on all the peer nodes, the one causing issues was returning ? for forwarderdata_rcv_port. Apparently the cluster master passes this bogus value on to the UFs via the /services/indexer_discovery endpoint. Whoops.

Find it in splunk like this:

| rest services/cluster/config | search mode=slave forwarderdata_rcv_port="?"
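
The same check can be scripted against the raw endpoint response. This is a sketch, assuming the payload is JSON with peer entries shaped like the one in the error message above, under a hypothetical "peers" key (the actual top-level structure of the response may differ):

```python
import json

def find_bad_peers(payload):
    """Return peer entries whose hostport is the bogus "?" value.

    `payload` is assumed to be a JSON body from the master's
    /services/indexer_discovery endpoint; the "peers" key is an
    assumption -- adjust it to the real response structure.
    """
    doc = json.loads(payload)
    return [p for p in doc.get("peers", []) if p.get("hostport") == "?"]

# Example with entries shaped like the one in the error message above:
sample = json.dumps({"peers": [
    {"hostport": "10.0.0.1:9997", "ssl": False, "indexing_disk_space": 23184490496},
    {"hostport": "?", "ssl": False, "indexing_disk_space": 23184490496},
]})
print(len(find_bad_peers(sample)))  # -> 1: one peer never opened its receiving port
```

Any peer flagged this way is one whose TCP input (receiving port) was never configured.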


I got a similar issue. In my case, though, I was using indexer discovery on the master itself, to give it a list of peers to send its internal logs to, rather than on a heavy forwarder.

In this case I HAD to use the password from the [general] stanza and NOT the [indexer_discovery] one. Even though both can be set, the indexer discovery one fails even when set in plaintext.

I was getting the exact same error and have just resolved it in the past few minutes.



thanks for your input Lucas, I ended up doing the exact same thing, so I guess either the documentation is wrong or this is a bug with a possible workaround?
