Getting Data In

How to blackhole unwanted server logs by configuring props.conf and transforms.conf?

Explorer

Our main syslog server just forwards everything to Splunk. We have exclusions in syslog for certain applications, but we would still like to filter out anything not vital before Splunk indexes it. I've attempted to set up props.conf and transforms.conf appropriately, but it doesn't seem to work. I placed them in /opt/splunk/etc/system/local instead of editing the default files.

props.conf
[source::udp:514]
TRANSFORMS-drop_hosts = drop_hosts

transforms.conf   
[drop_hosts]
SOURCE_KEY = Metadata:Host
REGEX = 192.168.158.131.log
DEST_KEY = queue
FORMAT = nullQueue

I am just testing it with one host right now, but when I pull up the Data Summary and look at the host count for that IP, it continues to rise.

0 Karma
1 Solution

Revered Legend

It will. My bad, I didn't realize it's a syslog input coming directly to the indexers, so the configurations were created in the correct place. Now we should check whether the entries are correct. What is the actual host name that you see in the log entries? Is it really 192.168.158.131.log?

How about you try this in your transforms.conf (keep props.conf the same)?

 [drop_hosts]
 SOURCE_KEY = Metadata:Host
 REGEX = 192\.168\.158\.131
 DEST_KEY = queue
 FORMAT = nullQueue

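For reference, here is why the original pattern could never match. Assuming the Host metadata carries the bare IP 192.168.158.131, the trailing .log in the original regex demands four extra characters that simply aren't there. Splunk's transforms use PCRE, but the behavior can be sanity-checked with any regex engine; a minimal sketch in Python:

```python
import re

host = "192.168.158.131"  # assumed value of the Metadata:Host field

# Original pattern: the ".log" suffix requires characters the host value lacks
print(bool(re.search(r"192.168.158.131.log", host)))  # False

# Corrected pattern: escaped dots, no suffix -- matches the bare IP
print(bool(re.search(r"192\.168\.158\.131", host)))   # True
```

The unescaped dots in the original pattern each match any character, which happens to be harmless here, but the .log suffix guarantees no match against a bare IP host value.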

Explorer

Yes, on the rsyslog server that's the actual entry in /var/log/remote.

That worked! Thank you sir.

0 Karma

Explorer

Are you able to whitelist/blacklist with Splunk as well? I may have some issues with the regex, as the hostnames are all over the place. So, for example, we don't need to see any of the ipaddress.log hosts, but we may need to see server1.log and not see server2.log.

0 Karma

Revered Legend

There is no blacklist/whitelist setting available for a UDP input. You can add multiple hosts to the same regex and/or add more transforms.conf stanzas, if that's what you need. Like this:

 props.conf
 [source::udp:514]
 TRANSFORMS-drop_hosts = drop_hosts_set1,drop_hosts_set2

 transforms.conf
 [drop_hosts_set1]
 SOURCE_KEY = Metadata:Host
 REGEX = (192\.168\.158\.131)|(192\.168\.158\.132)|(....other hosts)
 DEST_KEY = queue
 FORMAT = nullQueue

 [drop_hosts_set2]
 SOURCE_KEY = Metadata:Host
 REGEX = (192\.168\.159\.131)|(192\.168\.159\.132)|(....other hosts)
 DEST_KEY = queue
 FORMAT = nullQueue
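
Since these drop candidates are IPs in adjacent subnets, the alternations above can often be collapsed into one tighter pattern using character classes. A minimal sketch, where the host values and ranges are hypothetical and Python's re module stands in for Splunk's PCRE engine:

```python
import re

# One pattern covering hosts .131 and .132 in both the .158 and .159 subnets
drop = re.compile(r"192\.168\.15[89]\.13[12]")

for host in ["192.168.158.131", "192.168.159.132", "server1"]:
    action = "drop" if drop.search(host) else "keep"
    print(host, "->", action)
```

Whether one stanza or several is clearer depends on how the drop list evolves; separate stanzas make it easier to disable one group of hosts at a time.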
0 Karma

Explorer

Ok yeah I can make that work. Thanks for your help.

0 Karma

Revered Legend

A few questions:
1) On which Splunk server did you place these config files: your forwarder on the syslog server, or the indexers?
2) If you created these files on a forwarder at the syslog server, is it a Universal Forwarder or a Heavy Forwarder? If it's a Universal Forwarder, this won't work, since event filtering is not available on a UF; the configuration should live on the indexers (or on a Heavy Forwarder, if there is one between the UF and the indexers).
3) Assuming the files are on the correct Splunk server, did you restart the Splunk service after making the change?

0 Karma

Explorer

We actually just forward it from our rsyslog server; we don't use Splunk forwarders. Will this not work with those options on the indexer, then?

0 Karma

Explorer

The indexer. Yes the Splunk services have been restarted.

0 Karma