Getting Data In

How to route data from a single input to multiple indexes?

jrobinson3661
Engager

I am running a distributed Splunk Enterprise deployment with syslog data from multiple sources going to a central syslog server that has a Universal Forwarder installed. The syslog sources are separate customer servers, and I need to maintain separation of their data. Each source writes to its own log file, and the log files are forwarded to an indexer such that log sources belonging to the same customer land in the same index, so I can apply role-based access controls per index. This works great.

However, I now need to ingest log data from a load balancer that sits in front of all the customer servers, so that only the logs for each customer's virtual IP go to that customer's index. The load balancer's syslog data goes to the same syslog server with the same Universal Forwarder. I can't get the transforms to parse the virtual IP events and send them to the proper index, and I've tried every example configuration I could find. This is what I currently have in props.conf and transforms.conf on the indexer.
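For context, the per-customer file monitoring described above looks something like this in the Universal Forwarder's inputs.conf (the paths here are just placeholders, not my actual config):

 [monitor:///var/log/customer1/*.log]
 index = customer1Index
 sourcetype = syslog

 [monitor:///var/log/customer2/*.log]
 index = customer2Index
 sourcetype = syslog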

Props.conf

[source::<same source as used in the inputs.conf file on the Universal Forwarder>]
TRANSFORMS-viprouting = customer 1, customer2, customer3

Transforms.conf

[customer1]
REGEX = "10.1.1.1"
DEST_KEY = _MetaData:Index
Format = customer1Index
WRITE_META = true

[customer2]
REGEX = "10.1.1.2"
DEST_KEY = _MetaData:Index
Format = customer2Index
WRITE_META = true

[customer3]
REGEX = "10.1.1.3"
DEST_KEY = _MetaData:Index
Format = customer3Index
WRITE_META = true
1 Solution

masonmorales
Influencer

You should have this configuration on a Splunk Heavy Forwarder. It will not work on a Universal Forwarder, so if you're using a UF on your syslog server, you will have to upgrade it to the full version of Splunk for it to act as a Heavy Forwarder. This is because a Universal Forwarder cannot parse data at the event level, which is what we are doing with this configuration. Alternatively, you could try this config at the indexer, but it's generally simpler to move it out to the HF.

There's also a space in your props.conf ("customer 1" instead of "customer1"). If that's not just a typo in your post, it would break the transform list as well. Here's what I think you should have for the parsing config on your syslog server's heavy forwarder:

Props.conf

 [source::<same source as used in the inputs.conf file on the Universal Forwarder>]
 TRANSFORMS-viprouting = customer1, customer2, customer3

Transforms.conf

 [customer1]
 REGEX = 10\.1\.1\.1
 FORMAT = customer1Index
 DEST_KEY = _MetaData:Index

 [customer2]
 REGEX = 10\.1\.1\.2
 FORMAT = customer2Index
 DEST_KEY = _MetaData:Index

 [customer3]
 REGEX = 10\.1\.1\.3
 FORMAT = customer3Index
 DEST_KEY = _MetaData:Index

Make sure you restart Splunk after making the configuration change for it to take effect.
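Once it's in place, you can sanity-check the routing with a quick search per index (index names assumed from your config above); each VIP's events should only show up in the matching customer index:

 index=customer1Index "10.1.1.1" | head 10
 index=customer2Index "10.1.1.2" | head 10
 index=customer3Index "10.1.1.3" | head 10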


jrobinson3661
Engager

Yep, that's exactly what I got to work, except that I kept the WRITE_META = true statement in the transforms.conf file. It's working now, so far. Appreciate all the input.
