Hi folks,
The scenario is as follows:
1. We have a single deployment server.
2. We have two indexer clusters, each with about 20+ indexers. One is exclusively for security purposes; the other indexer cluster is for performance/capacity/metrics/app-support etc.
3. Only a single Splunk UF instance should be installed on the endpoints/clients.
The requirements are:
1. From the same Splunk UF, security information (e.g. wineventlogs, secure, auth logs) needs to be sent to indexer1_group, and the perfmon/metrics/application datasets etc. need to be sent to indexer2_group. So not data cloning, but routing of specific datasets.
2. Load balancing should be done only across the indexers within the same group/cluster.
I'm well aware of the data cloning capability, but my above requirement is slightly different. Can this be achieved?
We already manage inputs.conf in a modular fashion, so a particular sourcetype/source can be sent to the relevant index.
But I'm trying to find a way to redirect specific data to the relevant indexer cluster without the need for heavy forwarders.
Edit: I forgot about the option mentioned by @vinod94: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Route_inputs_to_s...
That would indeed be the way to go, assuming the separate data sets come from separate input configurations.
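For routing by input on the UF, a minimal sketch could look like the below (the host names, ports and the perfmon stanza are just placeholders; adjust to your environment). Load balancing happens only within each tcpout group, which also covers your requirement 2:
eg: outputs.conf (on the UF)
[tcpout]
defaultGroup = indexer2_group

[tcpout:indexer1_group]
# security indexer cluster
server = sec-idx01:9997, sec-idx02:9997

[tcpout:indexer2_group]
# performance/metrics/app-support indexer cluster
server = ops-idx01:9997, ops-idx02:9997

eg: inputs.conf (on the UF)
[WinEventLog://Security]
_TCP_ROUTING = indexer1_group

[monitor:///var/log/secure]
_TCP_ROUTING = indexer1_group

[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 60
_TCP_ROUTING = indexer2_group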
If the data does not come from separate inputs, then the only option is transforms-based routing and filtering, which is something for Heavy Forwarders.
So there are 3 options to solve this within Splunk:
1: Route by input, using the UF, as explained in the link above (and sketched above).
2: Clone and drop at the indexers (if the additional network bandwidth is acceptable).
3: Send to a set of intermediate heavy forwarders that perform the routing and filtering (be careful not to create a bottleneck that causes poor data distribution across your indexers); a sketch of this follows below.
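For option 3, a rough sketch of the transforms-based routing on the intermediate heavy forwarders (the group names match the outputs.conf sketch above; the props stanza and regex are only illustrative):
eg: props.conf (on the heavy forwarder)
[WinEventLog:Security]
TRANSFORMS-route_security = route_to_security_cluster

eg: transforms.conf (on the heavy forwarder)
[route_to_security_cluster]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = indexer1_group

The heavy forwarder's outputs.conf would define the same indexer1_group and indexer2_group tcpout groups, with defaultGroup set to indexer2_group so everything that isn't rerouted goes there.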
Thanks for the different options and for the link on the _TCP_ROUTING bit.
In the link it says it can be done through the UF ... I thought that could be a workaround! But thanks for clarifying it 🙂
You were correct. It can be done through UF as long as the separate data sets come from separate inputs. Let me clarify that a bit more in my answer. I actually missed that myself when I originally wrote my answer. Sorry for the confusion 🙂
Hi @koshyk dude,
This link might help you!
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad
upvoted for your help
Transforms are unfortunately only available on Heavy Forwarders (or full Splunk Enterprise), not on the UF.
Do you have an option to create file shares for your wineventlogs, secure, and auth logs, and give access only to indexer1_group (through firewalls, secured permissions, etc.)?
This way, you can read the logs directly from your indexers, with no need for any forwarder.
eg: inputs.conf
[monitor://\\remote-file-share-name\your-auth-log-name]
You can't read Windows event logs from a file (unless you install some tool that reads them from the API and writes them to a file first, but that seems overly complicated).
You can read them remotely using WMI, but that performs/scales very poorly and is not recommended.