For about a week, two of our indexers were not replicating to their slaves. Oddly, this cut our license usage roughly in half, and we did not really see any events missing. Once replication of the two indexes was re-enabled, usage doubled back to our usual levels.
I've raised a case with Splunk support, but I can't really get a straight answer. Essentially, what I would like to know is: are we penalized on license for every replica?
Heavy forwarders and universal forwarders are configured to forward events to both indexers, so in theory, even if an index is not replicating, we should still ingest logs. (We have two indexers and two large indexes, network and security, so one indexer holds the primary copy of each index and the other holds the slave copy.)
I don't know why you cannot get a straight answer, because it is pretty straightforward:
_raw data that hits any indexer will count against your license.
See the docs for more details http://docs.splunk.com/Documentation/Splunk/6.5.1/Admin/TypesofSplunklicenses#Licenses_for_indexer_c...
This is important to you:
Only incoming data counts against the license; replicated data does not.
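One way to sanity-check this yourself is to chart daily indexed volume from the license usage log on the license master. This is a sketch; the b (bytes) and idx (index) fields come from license_usage.log and may differ slightly between versions:

```
index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) AS licensed_bytes by idx
```

If replicas counted against the license, disabling replication would not have halved this number the way you observed.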
That said, check if
Hope this helps ...
Thank you for your comments.
The indexes.conf on the master (C:\Program Files\Splunk\etc\master-apps\_cluster\local) sets "repFactor=auto", which should result in the index's data being replicated to the other peers in the cluster, and I've not come across any other relevant settings.
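For reference, the relevant stanza in that indexes.conf would look something like the following. The index name is taken from the thread; the path values are illustrative defaults, not your actual settings:

```
[network]
homePath   = $SPLUNK_DB/network/db
coldPath   = $SPLUNK_DB/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb
repFactor  = auto
```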
As far as outputs.conf on the universal forwarders is concerned, the way I understand it is that we are not cloning data, but merely load balancing between the two indexers:
[tcpout:indexers]
disabled=false
autoLBFrequency=40
server=10.250.156.60:9997,10.250.156.61:9997
useACK=true
Similarly, the following is the outputs.conf from a heavy forwarder that handles syslog traffic. Here, however, I don't see useACK. Does that mean that while index replication was disabled, some events may not have been indexed and were therefore lost?
[tcpout]
defaultGroup = default-autolb-group

[tcpout-server://10.250.156.60:9997]

[tcpout:default-autolb-group]
disabled = false
server = 10.250.156.60:9997,10.250.156.61:9997

[tcpout-server://10.250.156.61:9997]
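On the data-loss question: without useACK, events still in flight when an indexer goes down can be lost, because the forwarder considers them sent once they leave its output queue. Adding useACK=true to the group stanza makes the forwarder keep events in a wait queue until the indexer acknowledges they have been written. A sketch based on the stanza above:

```
[tcpout:default-autolb-group]
disabled = false
server = 10.250.156.60:9997,10.250.156.61:9997
useACK = true
```

Note that replication being disabled on its own would not lose incoming events; the primary copy is still indexed.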
The outputs.conf of the HF is cloning the events to 10.250.156.61 and at the same time using that server in a load-balanced config...
Just use it like this :
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
autoLBFrequency = 30
server = 10.250.156.60:9997,10.250.156.61:9997
I've changed the file as suggested, but having restarted Splunk on that heavy forwarder, I don't see any decrease in event count. I ran "* source="udp:514" index=network | timechart count" over the events that are ingested via that heavy forwarder...
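A quicker way to tell whether events were actually being duplicated, rather than load balanced, is to split the count by indexer. If the old config had been cloning, each indexer would show roughly the full event count; with load balancing, each shows roughly half. A sketch using the same search:

```
source="udp:514" index=network
| timechart count by splunk_server
```

If the totals did not change after the edit, the old config was most likely already load balancing, not cloning.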
This is old, but: do you still think that the outputs.conf shown for the HF is cloning events? If so, why?
To my knowledge, a [tcpout-server:<ipaddr>] stanza can be used for server-specific configuration. But that's only in addition to the settings made in a server group, i.e. something like
This was just something I observed; it could have been a reason why things went weird.
Search _internal (as long as you have it) for the top sending sourcetype or host, and try to figure out where the data originates and what path it takes into the indexers. Check your conf files, and search _internal for any information related to bucket replication.
This is not everything, but gives you a starting point...
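For the bucket replication side, something along these lines can surface replication errors on the peers. Treat it as a starting sketch; the component names logged by splunkd vary between versions:

```
index=_internal sourcetype=splunkd log_level=ERROR component=BucketReplicator
| stats count by host, component
```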