Getting Data In

Routing events located on Indexers / Search Heads to another cluster and vice versa

FlorianScho
Path Finder

Hi Community!

I have a (kind of) special problem with my data routing.

Topology:
We have 2 different clusters, one for Enterprise Security (ES) and one for Splunk Enterprise.
Each cluster consists of at least 1 search head and 4 indexer peers (multisite cluster).
All hosted on Red Hat virtual machines.

Usecase:
On all Linux systems (including the Splunk servers themselves), some sources are defined for ES and some for regular Splunk Enterprise indexes (a sketch of the corresponding inputs follows the list below).
E.g.:
/var/log/secure - ES (Index: linux_security)
/var/log/audit/audit.log - ES (Index: linux_security)
/var/log/dnf.log - Splunk Enterprise (Index: linux_server)
/var/log/bali/rebootreq.log - Splunk Enterprise (Index: linux_server)
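
For illustration, the monitor stanzas behind this mapping look roughly like this (a simplified sketch; the sourcetype names are taken from the props.conf snippets further down, everything else is omitted):

# inputs.conf (simplified sketch)
[monitor:///var/log/secure]
index = linux_security
sourcetype = linux_secure

[monitor:///var/log/audit/audit.log]
index = linux_security

[monitor:///var/log/dnf.log]
index = linux_server
sourcetype = rhel_dnf_log

[monitor:///var/log/bali/rebootreq.log]
index = linux_server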

Problem:
The routing of those logs from the collecting tier (Universal Forwarder, Heavy Forwarder) is fine, because those components have both clusters defined as output groups, including the props.conf / transforms.conf routing (sketched right below).
On the search heads, only their own search peers are defined as output group (ES search head --> ES indexer cluster, Splunk Enterprise search head --> Splunk Enterprise indexer cluster).
This is because of several summary searches and inputs on the search heads; I'm not able to adjust the routing like we do on the heavy forwarder because of the frequent changes made by power users. That works fine so far, except for the sources that have to be sent to the opposite cluster.
The same applies to the logs written directly on the indexer tier: the defined logs have to be sent to the other cluster.
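
For reference, the routing on the collecting tier is built roughly like this (a simplified sketch; the stanza name is an example, the full server lists are the ones in the outputs.conf snippets further down):

# outputs.conf on the UF / HF: both clusters are defined as output groups
[tcpout]
defaultGroup = rosen_cluster

[tcpout:rosen_cluster]
server = LINUXSPLIXPRD01.roseninspection.net:9993, ...

[tcpout:es_cluster]
server = LINUXSPLIXPRD50.roseninspection.net:9993, ...

# props.conf on the HF: select the ES-bound sources
[source::/var/log/secure]
TRANSFORMS-routing = default_es_cluster

# transforms.conf on the HF: rewrite their output group
[default_es_cluster]
REGEX = .
SOURCE_KEY = _raw
DEST_KEY = _TCP_ROUTING
FORMAT = es_cluster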

So simplified:
The log /var/log/secure on Splunk Enterprise Cluster Search head / Indexer needs to be sent to ES Cluster Indexer.
The log /var/log/dnf.log on the ES Cluster Search head / Indexer needs to be sent to the Splunk Enterprise Indexer.


What I have done already:
I configured both indexer clusters to send data to each other based on the specific index in outputs.conf.
With this, the events are now available in the correct cluster, but they also remain as duplicates in their source cluster. I'm trying to get rid of the source events!

Splunk Enterprise Indexer outputs.conf:

[indexAndForward]
# keep a full local copy of everything this indexer processes
index = true

[tcpout]
...
# forwardedindex rules are evaluated in numeric order and the last matching
# rule wins; net effect here: only the linux_secure index gets forwarded
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)
forwardedindex.3.blacklist = .*
forwardedindex.4.whitelist = linux_secure
forwardedindex.5.blacklist = _.*
forwardedindex.filter.disable = false
useACK = false
useClientSSLCompression = true
useSSL = true

[tcpout:es_cluster]
server = LINUXSPLIXPRD50.roseninspection.net:9993, LINUXSPLIXPRD51.roseninspection.net:9993, LINUXSPLIXPRD52.roseninspection.net:9993, LINUXSPLIXPRD53.roseninspection.net:9993


ES Indexer outputs.conf:

[indexAndForward]
index = true

[tcpout]
...
# mirror of the rule set above; net effect here: only the linux_server
# index gets forwarded
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)
forwardedindex.3.blacklist = .*
forwardedindex.4.whitelist = linux_server
forwardedindex.5.blacklist = _.*
forwardedindex.filter.disable = false
useACK = false
useClientSSLCompression = true
useSSL = true

[tcpout:rosen_cluster]
server = LINUXSPLIXPRD01.roseninspection.net:9993, LINUXSPLIXPRD02.roseninspection.net:9993, LINUXSPLIXPRD03.roseninspection.net:9993, LINUXSPLIXPRD04.roseninspection.net:9993



Additionally, I tried to set up props.conf / transforms.conf like we do on the HF, to catch at least the events originating on the search heads and send them to the correct _TCP_ROUTING group, but without any success. I guess that's because they already got parsed on the search head.

Splunk Enterprise props.conf:

[linux_secure]
...
SHOULD_LINEMERGE = False
TIME_FORMAT = %b %d %H:%M:%S
TRANSFORMS =
TRANSFORMS-routingLinuxSecure = default_es_cluster



Splunk Enterprise transforms.conf:

[default_es_cluster]
...
# match every event and rewrite its output group to the ES cluster
DEST_KEY = _TCP_ROUTING
FORMAT = es_cluster
REGEX = .
SOURCE_KEY = _raw




ES props.conf:

[rhel_dnf_log]
...
SHOULD_LINEMERGE = True
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%Q
TRANSFORMS-routingLinuxDNF = default_rosen_cluster


ES transforms.conf:

[default_rosen_cluster]
...
DEST_KEY = _TCP_ROUTING
FORMAT = rosen_cluster
REGEX = .
SOURCE_KEY = _raw
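
For completeness: as far as I understand it, the FORMAT value has to name a [tcpout:<group>] stanza that exists in outputs.conf on the same instance, e.g. for the stanza above:

# outputs.conf on the same instance; the group name matches FORMAT above
[tcpout:rosen_cluster]
server = LINUXSPLIXPRD01.roseninspection.net:9993, ...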



Example:
Source: /var/log/dnf.log
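
The table below was produced with a comparison search roughly of this shape (reconstructed, not the exact query; last_chance is the lastChanceIndex, i.e. where events targeted at an index that does not exist on that cluster end up):

index=linux_server OR index=last_chance source="/var/log/dnf.log"
| stats count, values(index) AS index, values(splunk_server) AS splunk_server BY _time, _raw, host, source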

Results (fields: _time, _raw, host, source, index, splunk_server, count):

_time: 2024-09-10 12:07:21
_raw: 2024-09-10T12:07:21+0000 DDEBUG timer: config: 3 ms
host: linuxsplixprd51.roseninspection.net (Indexer ES)
source: /var/log/dnf.log
index: last_chance, linux_server
splunk_server: linuxsplixprd01.roseninspection.net, linuxsplixprd51.roseninspection.net
count: 2

_time: 2024-09-11 12:24:31
_raw: 2024-09-11T10:24:31+0000 DDEBUG timer: config: 4 ms
host: linuxsplixprd01.roseninspection.net (Indexer Splunk Enterprise)
source: /var/log/dnf.log
index: linux_server
splunk_server: linuxsplixprd01.roseninspection.net
count: 1

_time: 2024-09-10 13:15:04
_raw: 2024-09-10T11:15:04+0000 DDEBUG timer: config: 3 ms
host: linuxsplshprd50.roseninspection.net (Search head ES)
source: /var/log/dnf.log
index: last_chance, linux_server
splunk_server: linuxsplixprd01.roseninspection.net, linuxsplixprd50.roseninspection.net
count: 2

_time: 2024-09-10 13:22:53
_raw: 2024-09-10T11:22:53+0000 DDEBUG Base command: makecache
host: linuxsplshprd01.roseninspection.net (Search head Splunk Enterprise)
source: /var/log/dnf.log
index: linux_server
splunk_server: linuxsplixprd01.roseninspection.net
count: 1

_time: 2024-09-11 11:55:51
_raw: 2024-09-11T09:55:51+0000 DEBUG cachedir: /var/cache/dnf
host: kuluxsplhfprd01.roseninspection.net (Heavy Forwarder)
source: /var/log/dnf.log
index: linux_server
splunk_server: linuxsplixprd01.roseninspection.net
count: 1



Any idea how I can get rid of those duplicate events in the source cluster (last_chance)?


PickleRick
SplunkTrust

To be fully honest, this setup seems a bit overcomplicated. I've seen setups with a single indexer cluster and multiple SHCs performing different tasks connecting to it, but multiple separate environments with events still being sent between them... that's a bit weird. But hey, it's your environment 😉

Actually, since you want to do some strange stuff with OS-level logs, this might be that one unique use case where it makes sense to install a UF alongside a normal Splunk Enterprise installation. That might be the easiest and least confusing solution.

 

FlorianScho
Path Finder

Hi,

yes, I know the setup might look a bit overengineered, but it fits our needs best, as we need to “logically” separate the ES data from the other Splunk use cases.

Anyway, I wasn't aware that I could run a Universal Forwarder alongside another Splunk Enterprise component on the same host. Is this supported, or is it at least officially documented somewhere?

 


PickleRick
SplunkTrust

No. As far as I know, it's neither officially supported nor (well) documented. And at least until not so long ago you couldn't install both components from an RPM or DEB package because they were installed in the same place (/opt/splunk). More recent versions install into separate directories (/opt/splunk vs. /opt/splunkforwarder), so it might be possible to install both from packages (I haven't tried this myself though, so I'd strongly advise testing in a lab first).
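
If you do try it, the first collision to watch out for is the management port: both instances default to 8089, so one of them has to move. Very roughly, the UF side could look like this (an untested sketch; port, paths and the monitored file are just examples):

# /opt/splunkforwarder/etc/system/local/web.conf
# move the UF's management port away from the full instance's default 8089
[settings]
mgmtHostPort = 127.0.0.1:8090

# /opt/splunkforwarder/etc/system/local/inputs.conf
# the UF takes over the OS logs that must go to the other cluster
[monitor:///var/log/secure]
index = linux_security

# /opt/splunkforwarder/etc/system/local/outputs.conf
[tcpout]
defaultGroup = es_cluster
[tcpout:es_cluster]
server = LINUXSPLIXPRD50.roseninspection.net:9993, ...

The full Splunk Enterprise instance would then drop its own inputs for those files, so each log is read by exactly one of the two instances.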

FlorianScho
Path Finder

Understood! I appreciate your answers. 
I will keep this post unresolved for now and test it.
