How to choose which indexers Splunk DB Connect 2 database sources are routed to on a heavy forwarder that is currently filtering and routing other data?

brodieg
Engager

Hi,

We use a Splunk architecture where all events go through a heavy forwarder before getting to an indexer. The HF does extensive filtering, transforms (trimming), and anonymization, and is basically the 'gateway' to the indexers.

The recommendation for Splunk DB Connect 2 is to deploy it on a dedicated heavy forwarder. That aligns nicely with our existing architecture. However, it appears I can't do any routing or filtering of events loaded by DB Connect on the heavy forwarder itself.

For example, DB Connect ingests 2 different database sources: DBSource1 and DBSource2. I want to route DBSource1 -> Index1 on Indexer1 and DBSource2 -> Index2 on Indexer2, all loading from the single DB Connect app on the one heavy forwarder.

Is this possible? So far, DB Connect allows me to choose which index to put events in, but not which indexer to send the events to. Does DB Connect/Splunk honor normal inputs.conf _TCP_ROUTING for the DB Connect app?
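For context, this is the kind of per-input routing we would normally use for a regular input (the monitor path and group name below are only illustrative); whether DB Connect's inputs honor it is exactly what I'm unsure about:

inputs.conf

[monitor:///var/log/example.log]
_TCP_ROUTING = target_group_indexer1

outputs.conf

[tcpout:target_group_indexer1]
server = indexer1:9997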

Thank you to anyone who has any insights!

GB

javiergn
Super Champion

What about deploying a dedicated HF for DB Connect and then forwarding from that to your "main" HF that does all the filtering, trimming, etc.?
Basically, treat that DB Connect HF the same way you are probably treating your Universal Forwarders at the moment.

renjith_nair
Legend

Not sure about DB Connect, but you can do conditional routing. For example:

props.conf

[dbsource1]
TRANSFORMS-routing=route_to_indexer1

transforms.conf

[route_to_indexer1]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=target_group1

outputs.conf

[tcpout:target_group1]
server=indexer1:9997

The above is just a skeleton; you might need to adjust it to your requirements.
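For the two database sources in the question, the same skeleton extends like this, assuming the DB Connect events arrive with sourcetypes dbsource1 and dbsource2 (names and ports are illustrative):

props.conf

[dbsource1]
TRANSFORMS-routing = route_to_indexer1

[dbsource2]
TRANSFORMS-routing = route_to_indexer2

transforms.conf

[route_to_indexer1]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = target_group1

[route_to_indexer2]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = target_group2

outputs.conf

[tcpout:target_group1]
server = indexer1:9997

[tcpout:target_group2]
server = indexer2:9997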

Detailed information is available here

http://docs.splunk.com/Documentation/Splunk/6.2.0/Forwarding/Routeandfilterdatad

---
What goes around comes around. If it helps, hit it with Karma 🙂

brodieg
Engager

Thanks Renjith. Yes, we use this configuration extensively on the forwarder to control whether and where events from various upstream sources are handled. The issue is that with DB Connect on the same forwarder, the above concepts don't seem to apply.

The [default] stanza in props.conf does have an effect: whatever routing I put under [default] is honored for DB Connect events, so it is engaging somewhere. But I was hoping to have props.conf act on something like [sourcetype::dbconnect_userlogs_from_mssql] -> route to indexer1 and [sourcetype::dbconnect_apilogs_from_oracle] -> route to indexer2.

renjith_nair
Legend

If [default] has an effect, then your local conf should also work. One thing: when you use a sourcetype in the stanza header, it should be just the sourcetype name, not sourcetype::<name>.

From http://docs.splunk.com/Documentation/Splunk/6.2.0/Forwarding/Routeandfilterdatad,

     <spec> can be:
        <sourcetype>, the source type of an event
        host::<host>, where <host> is the host for an event
        source::<source>, where <source> is the source for an event
    If you have multiple TRANSFORMS attributes, use a unique name for each. For example: "TRANSFORMS-routing1", "TRANSFORMS-routing2", and so on.
    <transforms_stanza_name> must be unique. 
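Applied to the sourcetypes you mentioned, the stanza headers would simply be the sourcetype names (this assumes the DB Connect events really arrive with those sourcetypes):

props.conf

[dbconnect_userlogs_from_mssql]
TRANSFORMS-routing1 = route_to_indexer1

[dbconnect_apilogs_from_oracle]
TRANSFORMS-routing2 = route_to_indexer2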
---
What goes around comes around. If it helps, hit it with Karma 🙂