Getting Data In

How do I filter events on a heavy forwarder that were sent from universal forwarders?

Path Finder

Hi Team,

We want to drop events which contain the keyword "error"

Below is our setup:
universal forwarder ------>Heavy weight forwarder -------->indexer/cloud

We have multiple universal forwarders that send logs directly to the indexers. We want to filter these logs via a heavy forwarder, so we are now routing logs from the universal forwarders through a heavy forwarder.

Can filtering be achieved by our setup?

Below are the configs we created for filtering events, but it's not working:

My props.conf on heavy weight forwarder:

[sourcetypename]
TRANSFORMS-set = setnull, setparsing

transforms.conf on heavy weight forwarder:

[setnull]
REGEX = error
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = .
DEST_KEY = queue
FORMAT = indexQueue

Am I missing something?
Do I need to mention something like tcp_routing etc as logs are forwarded by the universal forwarder to heavy weight forwarder?

Please advise

SplunkTrust
SplunkTrust

Hi thezero,

Does the regex match anything in your events?
Also, change the class name of the TRANSFORMS- stanza in props.conf; this must be a unique name:

* <class> is a unique literal string that identifies the namespace of the field you're extracting.

BTW, the option indexANd forward=True in outputs.conf is wrong and should be indexAndForward = true. But do you really want to index events on the heavy forwarder?
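For instance, the props.conf stanza might use a unique, more descriptive class name (a sketch; the class name itself is arbitrary, it just must not collide with any other TRANSFORMS- class applied to the same sourcetype):

 [sourcetypename]
 TRANSFORMS-dropErrors = setnull, setparsing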

Hope this helps ...

cheers, MuS

SplunkTrust
SplunkTrust

We've discussed several solutions to several questions now. Can you restate your actual question? It should be something like the examples below:

A. I can't get events from a UF filtered at the HF on the way to Splunk Storm indexers. It isn't working, and here are my configs on my UF and HF... please help me make it work.

-or-

B. In theory, will I be able to filter events using a UF -> HF -> Storm indexer architecture?

Esteemed Legend

There is no need to complicate your configuration by adding Heavy Forwarders. Go back to UFs-only and put this on your Indexers:

props.conf:

[sourcetypename]
TRANSFORMS-set= drop_errors

transforms.conf:

[drop_errors]
REGEX = error
DEST_KEY = queue
FORMAT = nullQueue

Then restart all splunk instances on the Indexers and newly-indexed events will be correctly filtered.

Communicator

Hi, Woodcock.

If you send events containing "error" to the nullQueue, do you know whether those events count toward your daily license usage?

SplunkTrust
SplunkTrust

Events sent to the nullQueue will not count towards the license usage.

cheers, MuS

Path Finder

Hi Woodcock/MuS,

Thanks for your suggestions. Actually, we are using Splunk Cloud and do not have access to the indexers. There are 100+ universal forwarders, so we are trying to configure the filter on a heavy forwarder to avoid updating configs on 100+ universal forwarders.
When I add indexAndForward = true the filtering works, but I don't want to index events on the HWF. When I set indexAndForward = false the filtering does not work. Any suggestions?

Also, won't adding/managing filters on a single heavy forwarder be easier than updating 100+ UFs? Please comment.

SplunkTrust
SplunkTrust

Hi thezero,

Using the heavy forwarder is fine in your use case, because you cannot access the indexers. To troubleshoot your problem, I would suggest the following:

1 - Remove all configuration settings related to filtering and forwarding on the HWF and restart Splunk
2 - Set up a fresh forwarding config on the HWF, either by configuring outputs.conf or by using the UI, and restart Splunk
3 - Verify you get the unfiltered events on the indexer; if not, go to 2
4 - Add just one setting in props.conf and transforms.conf on the HWF to forward the events and restart Splunk:
props.conf

[sourcetype]
TRANSFORMS-mySourceTypeFilteringAndForwarding = MySourceTypeForwarding

transforms.conf

 [MySourceTypeForwarding]
 REGEX = .
 DEST_KEY = queue
 FORMAT = indexQueue

5 - Verify you still get the events on the indexer; if not, go to 4 and fix the error
6 - Add the second setting in props.conf and transforms.conf on the HWF to filter the events and restart Splunk:
props.conf

[sourcetype]
TRANSFORMS-mySourceTypeFilteringAndForwarding = MySourceTypeForwarding, MySourceTypeFiltering

transforms.conf

 [MySourceTypeFiltering]
 REGEX = error
 DEST_KEY = queue
 FORMAT = nullQueue

7 - Verify you still get the events on the indexer and that they are filtered; if not, go to 6 and fix the error
8 - If everything falls apart, start again at 1

Use the btool commands provided by @jkat54 to troubleshoot, and remember to restart Splunk on the HWF after each change to props.conf and transforms.conf
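Taken together, the end state on the HWF might look like the sketch below (the sourcetype and transform names are placeholders). Note that the listing order matters: each matching transform overwrites the queue assignment, so the nullQueue filter must come last or the catch-all forwarding rule will send "error" events back to the indexQueue:

props.conf

 [sourcetype]
 TRANSFORMS-mySourceTypeFilteringAndForwarding = MySourceTypeForwarding, MySourceTypeFiltering

transforms.conf

 [MySourceTypeForwarding]
 REGEX = .
 DEST_KEY = queue
 FORMAT = indexQueue

 [MySourceTypeFiltering]
 REGEX = error
 DEST_KEY = queue
 FORMAT = nullQueue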

Hope that helps ...

cheers, MuS

SplunkTrust
SplunkTrust

Please let us see your inputs.conf, and any transforms.confs as well.

Hunch is you have misnamed sourcetypes. That, and I'd start with easy regexes and build from there...

so start with

 [setnull]
 REGEX = POST
 DEST_KEY = queue
 FORMAT = nullQueue 

then add your OR regexes once you prove POST filtering is working.
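For example, once the POST-only filter is confirmed working, the nullQueue regex could be extended with alternation (a sketch; the keywords besides "error" are placeholders for whatever else you want to drop):

 [setnull]
 REGEX = (POST|error)
 DEST_KEY = queue
 FORMAT = nullQueue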

Finally, you may want to run these debug commands to be sure local copies of the .conf files aren't causing the issue:

 ./splunk cmd btool props list --debug
 ./splunk cmd btool transforms list --debug
 ./splunk cmd btool inputs list --debug

You'll want to run these on every host in the "funnel".

Path Finder

Hi,

My inputs.conf below:

[splunktcp://9997]
disabled = 0

and my outputs.conf is as below:

[tcpout]
defaultGroup=indexer1

[tcpout:indexer1]
server=10.1.1.197:9997

indexANd forward=True

Please help

SplunkTrust
SplunkTrust

I meant the inputs.conf on the device that is "finding" the data, which looks like the UF in your example. HOWEVER, now I see you are asking whether this can be done with your setup.

The answer is yes. You can filter events using this setup.

Also the answer is as Woodcock states in his answer.

SplunkTrust
SplunkTrust

http://docs.splunk.com/Documentation/Splunk/6.2.0/Forwarding/Routeandfilterdatad

TCP routing is explained in the link above, and it doesn't appear to be required in your case.

These settings you gave for outputs.conf would be fine:

 [tcpout]
 defaultGroup=indexer1

 [tcpout:indexer1]
 server=10.1.1.197:9997

No need to index and forward unless you want a copy local to the heavy forwarder.
