Getting Data In

How do I filter events into 2 environments?

chrishatfield21
Path Finder

I have an old environment (5.0) and a new environment (6.2.1). I have heavy forwarders in the new environment collecting the data and forwarding it to both environments. I have to keep some of the data flowing to the old environment, but I can cut off most of it to save on my license if possible. I have tried to drop the events on the old indexers, but it is not working; I think the data has already gone through the queues on the forwarders, so it skips those queues on the indexers. See "Caveats for routing and filtering structured data" here: http://docs.splunk.com/Documentation/Splunk/6.2.1/Forwarding/Routeandfilterdatad

Below is my setup. Any thoughts on how I can accomplish this one?

Heavy Forwarders: outputs.conf

[tcpout]
defaultGroup = prod, new
forwardedindex.filter.disable = true

[tcpout:prod]
server = server1:9997,server2:9997
autoLB = true

[tcpout:new]
server = server3:9997,server4:9997
autoLB = true

I have tried this on the indexer (server1) with no luck. I have also tried placing it in the /etc/system/local directory and using the source instead of the sourcetype. I have restarted Splunk, but still no luck.

props.conf
[cisco:asa]
TRANSFORMS-set = drop_event

transforms.conf
[drop_event]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Any help is much appreciated.


jkat54
SplunkTrust

Did you mean to have two [tcpout:prod] in your outputs.conf example? I think one is supposed to be :new instead.


chrishatfield21
Path Finder

Yes, the 2nd stanza is supposed to be :new. That was a typo on my part. Thanks for spotting that one.


jkat54
SplunkTrust

Your regex should be .* no?

transforms.conf
[drop_event]
REGEX = .*
DEST_KEY = queue
FORMAT = nullQueue

How about putting the props and transforms on the heavy forwarder instead of the indexers? Heavy forwarders act a bit differently from universal forwarders.


chrishatfield21
Path Finder

I need all the data to go to the new environment, and if I drop the events on the heavy forwarders then they will be dropped from both environments. As for the regex, .* would work the same as . in this case; I am just matching any character.


jkat54
SplunkTrust

I see your point now about putting it on the heavy forwarders.

Still, I believe you want to use .*, otherwise you're only matching events with 1 character. I'm not a regex expert, but I would be interested in the result.

Anyways... now that I know the issue... @somesoni2 has the correct answer.


somesoni2
Revered Legend

I believe you would be able to achieve this using the method described here: http://docs.splunk.com/Documentation/Splunk/6.2.1/Forwarding/Routeandfilterdatad#Replicate_a_subset_...

Basically, have a different tcpout stanza/name for each of your two environments, configure props/transforms on your heavy forwarder (that's where the event processing is happening), and set _TCP_ROUTING accordingly. The example in the link is exactly what you need.
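
For reference, the shape of that doc example is roughly this (a sketch, not the exact doc text; the prod/new group names and the cisco:asa sourcetype are borrowed from this thread):

outputs.conf

[tcpout]
defaultGroup = new

[tcpout:new]
server = server3:9997,server4:9997

[tcpout:prod]
server = server1:9997,server2:9997

props.conf

# only the sourcetypes the old environment still needs get a routing transform
[cisco:asa]
TRANSFORMS-route = route_to_both

transforms.conf

# FORMAT accepts a comma-separated list of tcpout group names,
# so matching events are replicated to both environments
[route_to_both]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = new,prod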

chrishatfield21
Path Finder

So I tried this configuration and it does not work. It does in fact route the data as expected, but it does not send the data to both the new and the old environment; it is either one or the other. I tried running it through 2 separate transforms as well to see if it would route to both, and it would not; it routed to the last stanza in the list.


chrishatfield21
Path Finder

Okay, based on my initial testing I think I have this working now. I am keeping the same config I have above, where I send everything to both environments. Then, using _TCP_ROUTING, I route the sourcetypes I don't want going into my old environment to my new environment only. I will continue to verify this is in fact working and mark this as the accepted answer once I am sure.

Here is what I currently have configured on my forwarder:

outputs.conf

[tcpout]
defaultGroup = prod, new
forwardedindex.filter.disable = true

[tcpout:prod]
server = server1:9997,server2:9997
autoLB = true

[tcpout:new]
server = server3:9997,server4:9997
autoLB = true

props.conf

# replace [sourcetype] with the actual sourcetype to keep out of the old environment
[sourcetype]
TRANSFORMS-route = route_to_new_env

transforms.conf

[route_to_new_env]
# match every event of this sourcetype and override its routing to new only
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = new

somesoni2
Revered Legend

This does involve some work, but it should give you 100% control over what goes where.


chrishatfield21
Path Finder

After doing some reading I also came across this in the documentation: http://docs.splunk.com/Documentation/Splunk/6.2.1/Forwarding/Forwarddatatothird-partysystemsd. The "Forward a subset of data" section describes what you are talking about, only it forwards all the data to the first destination and just the specified sourcetypes to the second.

Does this in fact forward ALL data to the first? I hope that is the case, so that my new environment gets everything and the old one just gets the few sourcetypes specified.


jkat54
SplunkTrust

In that section you've referenced, they're talking about using transforms.conf to forward events that match a regex.
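
One caveat, as I understand it: when a transform sets _TCP_ROUTING, it overrides the defaultGroup for the events it matches rather than adding to it, so the first destination only keeps getting everything if it appears in every FORMAT. Roughly (group names borrowed from earlier in the thread):

transforms.conf

# this would pull matching events OUT of the new environment:
[route_subset_wrong]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = prod

# include new as well so the new environment still gets everything:
[route_subset_right]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = new,prod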

In the example I give below, you're using inputs.conf to forward every event of that stanza's type to specific indexers.

inputs.conf
# note: the _TCP_ROUTING value must exactly match a [tcpout:<group>] stanza
# name in outputs.conf
[data source we want to send to old & new indexers]
disabled = false
sourcetype = sourcetype
index = index
_TCP_ROUTING = all_Indexers

[data source we only want to send to OLD indexers]
disabled = false
sourcetype = sourcetype
index = index
_TCP_ROUTING = old_Indexers

[data source we only want to send to NEW indexers]
disabled = false
sourcetype = sourcetype
index = index
_TCP_ROUTING = new_Indexers

FINALLY

outputs.conf to tie it all together:
[tcpout:all_Indexers]
server = indexer1:9997, indexer2:9997, indexer3:9997, indexer4:9997

[tcpout:old_Indexers]
server = indexer1:9997, indexer2:9997

[tcpout:new_Indexers]
server = indexer3:9997, indexer4:9997


jkat54
SplunkTrust

Ideally you'll also enable compression to save bandwidth, at the cost of the compression and SSL encryption being handled by the CPUs on the indexers and forwarders.

Set compressed = true and specify an SSL cert for each indexer as well:

[tcpout:OLD_INDEXERS]
server = 10.0.0.1:8089, 10.0.0.2:8089
compressed = true

[tcpout-server://10.0.0.1:8089]
sslCertPath = $SPLUNK_HOME/etc/apps/custom_tcpout/certs/cert.pem
sslCommonNameToCheck = FQDN of cert (ex. mysplunk.mycompany.com)
sslPassword = password for cert
sslRootCAPath = $SPLUNK_HOME/etc/apps/custom_tcpout/certs/cacert.pem
sslVerifyServerCert = true

[tcpout-server://10.0.0.2:8089]
sslCertPath = $SPLUNK_HOME/etc/apps/custom_tcpout/certs/cert.pem
sslCommonNameToCheck = FQDN of cert (ex. mysplunk.mycompany.com)
sslPassword = password for cert
sslRootCAPath = $SPLUNK_HOME/etc/apps/custom_tcpout/certs/cacert.pem
sslVerifyServerCert = true

Each indexer can have its own unique SSL cert if you like; the example above shows them using the same cert.
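
A variation with distinct certs might look like this (the file names are illustrative; the other ssl settings stay as above):

[tcpout-server://10.0.0.1:8089]
sslCertPath = $SPLUNK_HOME/etc/apps/custom_tcpout/certs/indexer1_cert.pem

[tcpout-server://10.0.0.2:8089]
sslCertPath = $SPLUNK_HOME/etc/apps/custom_tcpout/certs/indexer2_cert.pem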


jkat54
SplunkTrust

I'll expand here with some examples:

inputs.conf
[batch:///opt/*.xml]
# sinkhole deletes each file after it has been indexed
move_policy = sinkhole
disabled = false
sourcetype = cisco:asa
index = oldCiscoIndexes
_TCP_ROUTING = old_Indexers_only

outputs.conf
[tcpout:old_Indexers_only]
server = xxx.yyy.zzz.aaa:pppp


somesoni2
Revered Legend

Do you have inputs.conf defined on the Heavy Forwarder, or is it collecting data from other universal forwarders to send to the indexers (working as an intermediate forwarder)?


chrishatfield21
Path Finder

They are doing both. The main part would be the UF data, but I would also like to cut back some of the syslog data that resides on the heavy forwarders as well.
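
For the syslog data the heavy forwarder collects itself, the inputs.conf route jkat54 describes should work; a minimal sketch, assuming a UDP syslog input on port 514 and the new tcpout group from earlier (both illustrative):

inputs.conf

[udp://514]
sourcetype = syslog
# send this input only to the new environment
_TCP_ROUTING = new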
