Getting Data In

Can a universal forwarder filter lines from a log?

Engager

I've read the docs on how to filter events from:
http://docs.splunk.com/Documentation/Splunk/4.3.3/Deploy/Routeandfilterdatad

The documentation mentions that there are some things the light and universal forwarders cannot do... is this one of those things? If so, where DO you do this filtering to keep the data from being indexed?

Log lines containing "ipmon" are still being sent. The universal forwarder is running on a Solaris 10 host.

My configuration is:

/opt/splunkforwarder/etc/apps/search/local/inputs.conf
[monitor:///var/log/local0/debug]
disabled = false
## filter ipmon logs out of forwarded logs
sourcetype = local0_syslog
queue = parsingQueue

/opt/splunkforwarder/etc/system/local/props.conf

[local0_syslog]
TRANSFORMS-null = setnull_ipmon

/opt/splunkforwarder/etc/system/local/transforms.conf

[setnull_ipmon]
#match anything with ipmon and toss it
REGEX = ipmon
DEST_KEY = queue
FORMAT = nullQueue
1 Solution

Splunk Employee

You will need to move the props.conf and transforms.conf to your indexer or heavy forwarder. The universal forwarder does not process props and transforms since those pipelines are not turned on.
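For reference, this is what the question's filtering configuration would look like once moved to the indexer; a sketch only, assuming a default Splunk Enterprise install at /opt/splunk (the stanza and transform names are taken from the question):

```ini
# /opt/splunk/etc/system/local/props.conf
[local0_syslog]
TRANSFORMS-null = setnull_ipmon

# /opt/splunk/etc/system/local/transforms.conf
[setnull_ipmon]
# match any event containing "ipmon" and route it to the null queue (discard)
REGEX = ipmon
DEST_KEY = queue
FORMAT = nullQueue
```

The universal forwarder keeps only the plain inputs.conf monitor stanza; the indexer (or a heavy forwarder in front of it) applies the nullQueue transform at parse time.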


Explorer

No. Splunk has two kinds of forwarders: Universal and Heavy.

  • Universal Forwarder is meant to be lightweight and sends data to heavy forwarders/Splunk Enterprise instances without parsing.
  • Heavy Forwarder is a full Splunk Enterprise instance acting as a forwarder. As such, it can do all the parsing and filtering it needs to.

In that case, why not use Heavy Forwarders everywhere? The reason is resource footprint. When you run tens of thousands of servers and VMs (or millions of containers, if you are using Docker, Kubernetes, and the like), the resource footprint starts to matter. For example, if you have 20,000 VMs and there is a 50 MB difference in memory usage per instance between heavy and light log collectors, that adds up to roughly 1 TB of additional RAM usage, which is anywhere between $20-50k of hardware cost, virtual or physical.
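A quick back-of-the-envelope check of the figures above (the 50 MB per-instance overhead is the answer's illustrative assumption, not a measured number):

```python
vms = 20_000
extra_mb_per_vm = 50  # assumed heavy-vs-universal memory overhead per instance

total_mb = vms * extra_mb_per_vm       # 1,000,000 MB
total_tb = total_mb / 1_000_000        # decimal units: 1 TB = 1,000,000 MB

print(f"{total_tb:.1f} TB of extra RAM")  # 1.0 TB of extra RAM
```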

If you are looking for the filtering/parsing capabilities of a Splunk Heavy Forwarder with the resource footprint of a Universal Forwarder, and you want to send data to Kafka, Hadoop, Amazon S3, or pretty much any other backend system, you might want to look at Fluentd Enterprise.


Contributor

You can put props.conf and transforms.conf in /opt/splunk/etc/deployment-apps/_server_app_<server_class>/local (alongside inputs.conf), making sure the props.conf [<sourcetype>] and [source::<source>] stanzas specify force_local_processing = true. When ready, run splunk reload deploy-server to push these to the forwarders, and they will do the parsing (including any SEDCMD and TRANSFORMS) instead of the indexer. See https://answers.splunk.com/answers/615924/ for a detailed example.
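A sketch of what that deployment app might contain, reusing the sourcetype and transform names from the question (the <server_class> path segment is the placeholder from the answer above, not a real name):

```ini
# /opt/splunk/etc/deployment-apps/_server_app_<server_class>/local/props.conf
[local0_syslog]
# tell the universal forwarder to apply props/transforms locally
force_local_processing = true
TRANSFORMS-null = setnull_ipmon

# /opt/splunk/etc/deployment-apps/_server_app_<server_class>/local/transforms.conf
[setnull_ipmon]
# discard any event containing "ipmon"
REGEX = ipmon
DEST_KEY = queue
FORMAT = nullQueue
```

After deploying, the forwarder drops the matching events before they ever leave the host.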


Explorer

If I deploy props.conf and transforms.conf on the indexer and my forwarder is not a heavy one, will this setup work?
