Deployment Architecture

Multiple Load Balanced Heavy Forwarders (Configuration with non-clustered Indexers)

pattokt
Explorer

What are we trying to do?

Multiple universal forwarders with different indexes forward data to a load-balanced address. This load-balanced address has two pool members (heavy forwarders). The heavy forwarders receive the data and forward it on to the indexers/third-party systems. The indexers are not clustered, to save on storage, and certain indexes are hard-configured only on specific indexers. I realize we could use a replication factor of one.

The question?

How do the heavy forwarders know where to send the data if specific indexes are only configured on specific indexers?

Diagram

Universal Forwarder with index=test1 ------> splunktest.com (pool members: heavyforwarder1 && heavyforwarder2)
Universal Forwarder with index=test2 ------> splunktest.com (pool members: heavyforwarder1 && heavyforwarder2)

test1 is configured only on indexer1 && test2 is configured only on indexer2

What keeps the heavy forwarders from sending index=test1 data to indexer2?
Do we just need to use indexer clustering after all?

Thank you for your time.

1 Solution

renjith_nair
Legend

You can control the data flow on the heavy forwarders by using the "Route and filter data" method; it lets you redirect incoming events to the respective indexers.

Details are available in the Route and filter data documentation.
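
Since the original question is about routing by index rather than by host, here is a minimal sketch of what that could look like on the heavy forwarders. The transform and group names are illustrative, and it assumes the target index is visible to transforms via SOURCE_KEY = _MetaData:Index, so please verify against the transforms.conf documentation for your Splunk version:

props.conf
[host::*]
TRANSFORMS-routing = route_test1, route_test2

transforms.conf
# events destined for index test1 go to indexer1's output group
[route_test1]
SOURCE_KEY = _MetaData:Index
REGEX = ^test1$
DEST_KEY = _TCP_ROUTING
FORMAT = indexer1_group

# events destined for index test2 go to indexer2's output group
[route_test2]
SOURCE_KEY = _MetaData:Index
REGEX = ^test2$
DEST_KEY = _TCP_ROUTING
FORMAT = indexer2_group

outputs.conf
[tcpout:indexer1_group]
server = indexer1:9997

[tcpout:indexer2_group]
server = indexer2:9997

Because the two regexes are mutually exclusive, each event matches at most one transform and is routed to exactly one target group.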

---
What goes around comes around. If it helps, hit it with Karma 🙂


pattokt
Explorer

Also, I have tested the config below multiple times. It only routes data to the first indexer named in outputs.conf. I don't think you can have multiple instances of DEST_KEY=_TCP_ROUTING.

props.conf
[host::*]
TRANSFORMS-routing=indexer1

[host::*]
TRANSFORMS-routing=indexer2

transforms.conf
[indexer2]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=indexer2_group

[indexer1]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=indexer1_group

outputs.conf

[tcpout:indexer2_group]
server=lindexprod1.com:10000

[tcpout:indexer1_group]
server=oindexprod1.com:10000

renjith_nair
Legend

In your configuration above, since both stanzas use [host::*], only the first one takes effect and everything is forwarded to the first indexer.

You can have multiple instances of DEST_KEY=_TCP_ROUTING, at least per the documentation at http://docs.splunk.com/Documentation/Splunk/6.0/Forwarding/Routeandfilterdatad under the heading "Filter and route event data to target groups".
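
As an aside, a sketch of how more than one _TCP_ROUTING transform can be evaluated on the same events: list the transforms under a single props.conf stanza instead of repeating the stanza. Each transform then still needs a REGEX that selects only its own events (for example by matching on the index), otherwise the last matching transform simply overwrites the routing key:

props.conf
[host::*]
TRANSFORMS-routing = indexer1, indexer2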

From your original question, let's say forwarder1 and forwarder2 are the two forwarders that send data to indexer1 and indexer2 respectively. In that case, the following should work:

props.conf
[host::forwarder1]
TRANSFORMS-routing=indexer1

[host::forwarder2]
TRANSFORMS-routing=indexer2

transforms.conf
[indexer1]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=indexer1_group

[indexer2]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=indexer2_group

outputs.conf
[tcpout:indexer1_group]
server=indexer1:9997

[tcpout:indexer2_group]
server=indexer2:9997

If each of these forwarders sends distinct sources or sourcetypes, you can key on those in props.conf instead of the hostname.
E.g.:

[source::<sourcename>]
 TRANSFORMS-routing=indexer1

The hostname and source name can be replaced with a valid regex as well.
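
A sketch of what that could look like; the host pattern and source path below are made up for illustration:

props.conf
# any forwarder whose hostname starts with "forwarder"
[host::forwarder*]
TRANSFORMS-routing = indexer1

# a hypothetical source path sent only by the second forwarder
[source::/var/log/app2/*.log]
TRANSFORMS-routing = indexer2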

---
What goes around comes around. If it helps, hit it with Karma 🙂

renjith_nair
Legend

Somehow the link is not visible; see here: http://docs.splunk.com/Documentation/Splunk/6.2.0/Forwarding/Routeandfilterdatad

---
What goes around comes around. If it helps, hit it with Karma 🙂

pattokt
Explorer

I have tried the settings below on the heavy forwarders with no luck. I want to blacklist the test index so it does not go to indexer1. The blacklist works when I apply it at the global [tcpout] level, but then I don't get any data into indexer2.

[tcpout]
defaultGroup = default-group

[tcpout:default-group]
disabled = false
server = indexer1.com:10000,indexer2.com:10000

[tcpout-server://indexer1.com:10000]
forwardedindex.0.whitelist =
forwardedindex.1.blacklist =
forwardedindex.2.whitelist =
forwardedindex.0.blacklist = test


renjith_nair
Legend

Try the following, based on the forwarder host. You can refine it according to your requirements.
props.conf

[host::forwarder1]
TRANSFORMS-routing=indexer1

[host::forwarder2]
TRANSFORMS-routing=indexer2

transforms.conf

[indexer1]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=indexer1_group

[indexer2]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=indexer2_group

outputs.conf

[tcpout:indexer1_group]
server=indexer1:9997

[tcpout:indexer2_group]
server=indexer2:9997

Please test this in a non-production environment before changing it in production.

---
What goes around comes around. If it helps, hit it with Karma 🙂

pattokt
Explorer

The config below seems to block everything from reaching indexer1, rather than blocking just the specified host.

This is configured on the heavy forwarder:

props.conf

[host::hostname.com]
TRANSFORMS-routing=lside

[host::*]
TRANSFORMS-routing=oside

transforms.conf

[lside]
REGEX= .
DEST_KEY= queue
FORMAT= nullQueue

[oside]
REGEX= .
DEST_KEY= _TCP_ROUTING
FORMAT= oside1_group

outputs.conf

[tcpout:lside1_group]
server=indexer1.com:10000

[tcpout:oside1_group]
server=indexer2.com:10000
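
For what it's worth, a likely reason nothing reaches indexer1 with the configuration above: the [lside] transform sends its matches to the nullQueue (DEST_KEY = queue, FORMAT = nullQueue), which drops the events entirely rather than routing them, so nothing is ever assigned to lside1_group. A hedged sketch of a routing-only variant, reusing the group names from the outputs.conf above:

transforms.conf
# route events from the specified host to lside1_group instead of dropping them
[lside]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = lside1_group

# route everything else to oside1_group
[oside]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = oside1_group

Note that [host::*] in props.conf also matches hostname.com, so the two stanzas compete for the same TRANSFORMS-routing class; non-overlapping host patterns, or index-based matching as sketched earlier in the thread, avoid that ambiguity.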