Deployment Architecture

Indexer discovery where the client is the master itself

Lucas_K
Motivator

As per best practice, we store all internal Splunk logs across the indexing tier. This gives you all the good things like distributed search, replication, etc. for your internal Splunk logs.

With newer versions of Splunk we can use indexer discovery against the indexing cluster master to find which indexers are available, so that the data is spread across all available peers.

Example:

[tcpout]
defaultGroup = prod
disabled = false
forwardedindex.filter.disable = true


[tcpout:prod]
indexerDiscovery = prod
autoLB = true
forceTimebasedAutoLB = true
maxQueueSize = 100MB

[indexer_discovery:prod]
master_uri = https://cluster-master:8089
pass4SymmKey = blah

This works fine on every instance of Splunk (search heads, forwarders, deployer, deployment server, etc.) except for masters.

If used on the cluster master itself, it will not work.

Can the master be made to use its own list of available cluster members to send its internal logs to?

mathiask
Communicator

It is possible to set up the Cluster Master using indexer discovery:

  1. Set up the Cluster Master as the indexer discovery master as usual
  2. Configure the Cluster Master itself as an indexer discovery client

server.conf

[indexer_discovery]
pass4SymmKey = <indexer discovery key>

outputs.conf

# Turn off indexing on the master
[indexAndForward]
index = false

# TCP output global
[tcpout]
defaultGroup = cert-cluster
forwardedindex.filter.disable = true
indexAndForward = false

# TCP output cluster group
[tcpout:cert-cluster]
indexerDiscovery = cluster-master
forceTimebasedAutoLB = true
useACK = true

# indexer discovery group
[indexer_discovery:cluster-master]
master_uri = https://<Cluster Master IP/hostname>:8089
pass4SymmKey = <indexer discovery key>

There is one problem, though, that I have with every client that uses indexer discovery and useACK = true: when indexers are restarted, the forwarding clients start producing duplicates until the forwarding clients themselves are restarted... any solution welcome.

guilmxm
SplunkTrust

@mathiask

This can easily be solved:

  • Create small configuration apps, each managing specific settings, and push the apps where you need them

For instance:

  • indexer_discovery_control --> contains the configuration necessary to get the list of indexers through discovery, e.g.:

outputs.conf

[indexer_discovery:master1]
pass4SymmKey = this_is_the_secret_indexer_discovery_key
master_uri = https://master_ip:8089

[tcpout:group1]
autoLBFrequency = 30
forceTimebasedAutoLB = true
indexerDiscovery = master1

[tcpout]
defaultGroup = group1

  • forwarder_Ack_control --> activates ACK where it is deployed, e.g.:

outputs.conf

[tcpout:group1]
useACK = true

Deploy the indexer_discovery_control app everywhere but the indexers.
Deploy the forwarder_Ack_control app everywhere you want ACK to be activated.
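
On a deployment server, that targeting could be sketched in serverclass.conf roughly as follows. The class names and the idx*/hf* host naming conventions are assumptions for illustration:

serverclass.conf

# Push indexer_discovery_control to everything except the indexers
# (indexers assumed to be named idx*)
[serverClass:discovery_clients]
whitelist.0 = *
blacklist.0 = idx*

[serverClass:discovery_clients:app:indexer_discovery_control]
restartSplunkd = true

# Push forwarder_Ack_control only where ACK should be active
# (here: heavy forwarders, assumed to be named hf*)
[serverClass:ack_clients]
whitelist.0 = hf*

[serverClass:ack_clients:app:forwarder_Ack_control]
restartSplunkd = true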

As such, if you have a multi-site cluster, create a small app for each site containing a server.conf with the site definition, and deploy these apps depending on the site of each instance.

Note that defining a site is mandatory in multi-site, or indexer discovery won't work. (This is not the case for the master node or for search heads in an SHC.)
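
Such a site-definition app can be minimal; for a site named site1 it only needs this stanza (the site name is an assumption, adjust per site):

server.conf

[general]
site = site1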

Hope this helps,

Guilhem


mathiask
Communicator

To clarify:
Setting up the config is not the problem. Splitting it into different apps etc. was also not the problem.
I don't know the underlying cause, but the moment I used useACK = true together with the indexer discovery feature, after some time I received lots of duplicate events. The forwarder sometimes sent an event more than 10 times. This creates false statistics and eats up the license.
This did not happen when the indexers were "hard" configured.

Since 6.5, or maybe even a minor update before that, I have not encountered the problem anymore.


tom8h
Explorer

I encountered the same problem: indexer discovery and useACK = true cause the duplication. I tried all possible means, but the problem was not solved, so I hope this can be added as a known issue, or that someone can give me the solution.


mathiask
Communicator

In my case the issue "vanished".

After I rolled out 6.5 and performed a rolling restart of the cluster, it did not show up any more.


craigv_splunk
Splunk Employee

I think this might be because the configuration is not supported. Quoting from the docs:

A master node cannot do double duty as a peer node or a search node. The Splunk Enterprise instance that you enable as master node must perform only that single indexer cluster role. In addition, the master cannot share a machine with a peer. Under certain limited circumstances, however, the master instance can handle a few other lightweight functions. See "Additional roles for the master node".

http://docs.splunk.com/Documentation/Splunk/6.4.1/Indexer/Enablethemasternode


Jeremiah
Motivator

It still seems like this would be a common-sense thing to have. You've got this great solution where the peers are auto-discovered everywhere except on the server that is doing the discovery. Now, if you add or remove a peer, you still have to either update the cluster master's outputs.conf or modify a DNS entry.
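
That fallback on the master is the classic static peer list in outputs.conf, which has to be edited by hand whenever the set of peers changes (hostnames and ports here are placeholders):

outputs.conf

[tcpout]
defaultGroup = prod

# Static list of peers - must be kept in sync manually
[tcpout:prod]
server = idx1.example.com:9997, idx2.example.com:9997
autoLB = true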


benlc
Path Finder

I had exactly the same problem today. Indexer discovery from the master to itself does not work.


breddupuis
Explorer

So? No answer?


Lucas_K
Motivator

Unfortunately not.
