Getting Data In

Filtering (discarding) logs using Heavy Forwarder. Regex filter fails after transforms reload

fahmed11
Explorer

I'm using an on-prem Heavy Forwarder to filter some noisy logs coming in via syslog (the HF is installed on the syslog server). The logs are then forwarded to our Splunk Cloud instance.

I configured inputs.conf, props.conf, and transforms.conf with a regex that routes the garbage events to nullQueue (a discard queue, not an index) so the unwanted traffic is dropped. I reloaded the transforms using the "refresh" URL below, instead of restarting the entire splunkd service. This was working perfectly as expected.
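For context, my filtering config follows the standard Splunk nullQueue pattern, roughly like this (the sourcetype name and regex below are placeholders, not my actual values):

```ini
# props.conf -- bind the filtering transform to the noisy sourcetype
# (sourcetype name is a placeholder)
[my_noisy_syslog]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf -- send events matching the regex to nullQueue,
# which discards them before they are forwarded
[drop_noise]
REGEX = placeholder_pattern_for_garbage_events
DEST_KEY = queue
FORMAT = nullQueue
```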

http://your-heavy-forwarder-splunk-server:8000/en-GB/debug/refresh

I recently made a change to drop some additional logs from a different file, so this time the changes went into different inputs.conf, props.conf, and transforms.conf files than the first time. I used the same refresh method to reload the transforms. As soon as I did, the previous log filter stopped working for about 10 to 30 minutes and tons of garbage started flowing into our Splunk Cloud account (see the crazy bump shown below).

[Screenshot: fahmed11_0-1617112670910.png — ingest volume spike after the reload]

After a while it stopped on its own, and the new filter now works as expected as well (I'm thoroughly confused). However, as you can imagine, a flood of logs hitting Splunk Cloud every time we want to discard logs defeats the purpose of the whole exercise.

I want to understand whether this is a known issue and if there is a way around it.