Getting Data In

Event data filtering not working. How to debug?

Path Finder

I need to filter out certain unwanted events and send them to the nullQueue.

I added this in props.conf:

[access_logs]
TRANSFORMS-proj = Filter_ping

and this in transforms.conf:

[Filter_ping]
REGEX = Request\:ping
DEST_KEY = queue
FORMAT = nullQueue
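As a first sanity check, it can help to confirm outside Splunk that the REGEX actually matches the raw events. A minimal sketch in Python, where the two sample event lines are invented for illustration and may not resemble the real access_logs format:

```python
import re

# The REGEX from transforms.conf, copied verbatim
pattern = re.compile(r"Request\:ping")

# Hypothetical raw events -- real access_logs lines may differ
keep = '10.1.1.5 - - [12/Jan/2011:01:29:21] "Request:login" 200 512'
drop = '10.1.1.5 - - [12/Jan/2011:01:29:21] "Request:ping" 200 2'

print(bool(pattern.search(drop)))  # matched events get routed to nullQueue
print(bool(pattern.search(keep)))  # unmatched events are indexed normally
```

If the regex fails here, the problem is the pattern itself rather than where the configuration is deployed.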

This works perfectly in one environment (Splunk client + server) but not in another, even though the conf files are identical, as are the versions of the Splunk forwarder and server. I'm at a loss as to why the filtering fails in the second environment.

Any suggestions on how to debug this? splunkd.log looks normal, and I verified that the sourcetype access_logs is correct. Is there anywhere I can look to see what filtering rules the forwarder has loaded?

TIA


Path Finder

Thanks for the pointer on metrics.log. When I compare the file between the two environments, I find references to the light forwarder in the environment where filtering is not working:

01-12-2011 01:29:21.021 INFO Metrics - group=pipeline, name=parsing, processor=send-out-light-forwarder, cpu_seconds=0.000000, executes=36, cumulative_hits=221543

01-12-2011 01:29:21.021 INFO Metrics - group=pipeline, name=parsing, processor=tcp-output-light-forwarder, cpu_seconds=0.000000, executes=36, cumulative_hits=221543
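Those metrics lines are easy to summarize programmatically when comparing the two environments. A small sketch, using the parsing-pipeline line quoted above as sample input (the field layout is assumed to match that line):

```python
import re

def pipeline_hits(lines):
    """Map processor name -> cumulative_hits from group=pipeline metrics lines."""
    hits = {}
    pat = re.compile(
        r"group=pipeline, name=\w+, processor=([\w-]+),.*cumulative_hits=(\d+)"
    )
    for line in lines:
        m = pat.search(line)
        if m:
            hits[m.group(1)] = int(m.group(2))
    return hits

sample = [
    "01-12-2011 01:29:21.021 INFO Metrics - group=pipeline, name=parsing, "
    "processor=send-out-light-forwarder, cpu_seconds=0.000000, executes=36, "
    "cumulative_hits=221543",
]
print(pipeline_hits(sample))
```

A processor such as send-out-light-forwarder showing up on only one side is exactly the kind of difference this makes visible.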

Since I know the light-weight forwarder (LWF) does not support filtering, this could be the issue. However, I start Splunk in both environments the same way:

./splunk start --accept-license
./splunk enable app SplunkForwarder -auth admin:changeme
./splunk add forward-server 10.10.41.109:9997 -auth admin:changeme
./splunk disable webserver -auth admin:changeme
./splunk enable boot-start
./splunk restart

That enable command certainly should not activate the LWF. So where are processor=send-out-light-forwarder and processor=tcp-output-light-forwarder specified?

Thanks.


Splunk Employee

Not knowing the environment details, I can only guess at the issue (given that it works in another environment).

If you're using a light-weight forwarder, this configuration needs to live on the indexer. If you're using a regular (heavy) forwarder, this configuration needs to live on the forwarder. You can look at the event counts in metrics.log for group=pipeline to see whether regex extraction is being run on the machine that holds the configuration.
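One way to act on this is to diff the sets of pipeline processors reported by each environment's metrics.log. A sketch, where the excerpt lines are invented for illustration (in particular, the regexreplacement processor name is an assumption here, not taken from the thread):

```python
import re

PAT = re.compile(r"group=pipeline, name=(\w+), processor=([\w-]+)")

def processors(lines):
    """Collect (pipeline, processor) pairs seen in group=pipeline metrics lines."""
    found = set()
    for line in lines:
        m = PAT.search(line)
        if m:
            found.add((m.group(1), m.group(2)))
    return found

# Invented excerpts -- real metrics.log lines also carry timestamps and counters
env_working = ["INFO Metrics - group=pipeline, name=typing, processor=regexreplacement, ..."]
env_broken = ["INFO Metrics - group=pipeline, name=parsing, processor=send-out-light-forwarder, ..."]

# Processors present in only one environment point at the configuration difference
print(processors(env_broken) - processors(env_working))
```

Processors that appear on only one side tell you which machine is (or is not) running the parsing stages where TRANSFORMS filtering happens.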

On the machine with the configuration, you can check what Splunk is interpreting from the configuration files using:

splunk cmd btool props list access_logs

and

splunk cmd btool transforms list Filter_ping