As per best practice, we store all internal Splunk logs across the indexing tier. This gives you distributed search, replication, etc. for your internal Splunk logs.
With newer versions of Splunk we can use indexer discovery against the indexing cluster master to find which indexers are available, so that the data is spread across all available peers.
[tcpout]
defaultGroup = prod
disabled = false
forwardedindex.filter.disable = true

[tcpout:prod]
indexerDiscovery = prod
autoLB = true
forceTimebasedAutoLB = true
maxQueueSize = 100MB

[indexer_discovery:prod]
master_uri = https://cluster-master:8089
pass4SymmKey = blah
This works fine on every instance of Splunk (search heads, forwarders, deployer, deployment server, etc.) except for masters: if used on the cluster master itself, it will not work.
Can the master itself be configured to use its own list of available cluster members to send its internal logs to?
It is possible to set up the Cluster Master using Indexer Discovery. In server.conf on the master:
[indexer_discovery]
pass4SymmKey = <indexer discovery key>
# Turn off indexing on the master
[indexAndForward]
index = false

# TCP output global
[tcpout]
defaultGroup = cert-cluster
forwardedindex.filter.disable = true
indexAndForward = false

# TCP output cluster group
[tcpout:cert-cluster]
indexerDiscovery = cluster-master
forceTimebasedAutoLB = true
useACK = true

# indexer discovery group
[indexer_discovery:cluster-master]
master_uri = https://<Cluster Master IP/hostname>:8089
pass4SymmKey = <indexer discovery key>
There is one problem, though, that I have with every client using Indexer Discovery and useACK = true: when indexers are restarted, the forwarding clients start producing duplicates until they themselves are restarted... any solution welcome.
This can be easily solved:
[indexer_discovery:master1]
pass4SymmKey = this_is_the_secret_indexer_discovery_key
master_uri = https://master_ip:8089

[tcpout:group1]
autoLBFrequency = 30
forceTimebasedAutoLB = true
indexerDiscovery = master1

[tcpout]
defaultGroup = group1
Deploy the indexer_discovery_control app everywhere but the indexers.
Deploy the forwarders_Ack_control app everywhere you want ACK to be activated.
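Assuming the outputs.conf stanzas above live in the indexer_discovery_control app, the ACK app only needs to layer the useACK setting on top of the same output group (a sketch; the app and group names are the ones used above):

```ini
# forwarders_Ack_control/local/outputs.conf
[tcpout:group1]
useACK = true
```

Because Splunk merges .conf files by stanza across apps, the two apps combine into one effective [tcpout:group1] configuration on each forwarder.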
As such, if you have a multi-site cluster, create a small app for each site containing a server.conf with the site definition, and deploy these apps according to the site of each instance.
Note that defining a site is mandatory in a multi-site cluster, or indexer discovery won't work (this does not apply to the master node or to search heads in a SHC).
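Such a per-site app can contain nothing more than the site setting in server.conf (the app and site names here are illustrative):

```ini
# site1_base/local/server.conf
[general]
site = site1
```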
Hope this helps,
Setting up the config is not the problem. Splitting it into different apps, etc. was also not the problem.
I don't know the underlying cause, but the moment I used useACK = true together with the indexer discovery feature, after some time I received lots of duplicate events. The forwarder sometimes sent an event more than 10 times. This creates false statistics and eats up the license.
This did not happen when the indexers were "hard" configured.
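For comparison, a "hard" configured output lists the receiving indexers statically instead of discovering them through the master (hostnames here are illustrative):

```ini
# outputs.conf with a static server list instead of indexerDiscovery
[tcpout:prod]
server = idx1.example.com:9997,idx2.example.com:9997
useACK = true
```

Adding or removing a peer then requires editing this list on every forwarder, which is exactly what indexer discovery is meant to avoid.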
Since 6.5, or maybe even a minor update before that, I have not encountered the problem anymore.
I encountered the same problem: IndexerDiscovery and useACK = true cause the duplication. I tried everything I could, but the problem was not solved, so I hope this can be added as a known issue, or that someone can give me a solution.
I think this might be because the configuration is not supported. Quoting from the docs:
A master node cannot do double duty as a peer node or a search node. The Splunk Enterprise instance that you enable as master node must perform only that single indexer cluster role. In addition, the master cannot share a machine with a peer. Under certain limited circumstances, however, the master instance can handle a few other lightweight functions. See "Additional roles for the master node".
It still seems like this would be a common-sense thing to have. You've got this great solution where the peers are auto-discovered everywhere except on the very server that is doing the discovery. So if you add or remove a peer, you still have to either update the cluster master's outputs.conf or modify a DNS entry.