Getting Data In

Filtering (discarding) logs using Heavy Forwarder. Regex filter fails after transforms reload

fahmed11
Explorer

I'm using an on-prem Heavy Forwarder to filter some noisy logs coming in via syslog (the HF is installed on the syslog server). The logs are then forwarded to our Splunk Cloud instance.

I configured inputs.conf, props.conf, and transforms.conf with a regex that routes the garbage to nullQueue, so the unwanted traffic is dropped. I reloaded the transforms using the "refresh" URL below (without restarting the entire splunkd service, as described here). This was working exactly as expected.

http://your-heavy-forwarder-splunk-server:8000/en-GB/debug/refresh
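For context, the filter configuration on the HF looks roughly like this (the sourcetype, stanza name, and regex are placeholders for the real ones):

props.conf:

# Apply the null-routing transform to the noisy sourcetype
[my_syslog_sourcetype]
TRANSFORMS-null = drop_noisy_events

transforms.conf:

# Events matching REGEX are routed to nullQueue and discarded before forwarding
[drop_noisy_events]
REGEX = <pattern matching the garbage events>
DEST_KEY = queue
FORMAT = nullQueue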

I recently made a change to drop some more logs from a different file, so this time the changes went into a different inputs.conf, props.conf, and transforms.conf than the first time (roughly like the sketch below). I used the same method to reload the transforms. As soon as I did that, the previous log filter stopped working for about 10 to 30 minutes and tons of garbage started flowing into our Splunk Cloud account (see the crazy spike shown below).
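The new filter was along these lines (file path, sourcetype, stanza name, and regex are placeholders):

inputs.conf:

# Monitor the additional log file that also needs filtering
[monitor:///var/log/other-device.log]
sourcetype = other_syslog_sourcetype

props.conf:

# Apply a second null-routing transform to the new sourcetype
[other_syslog_sourcetype]
TRANSFORMS-null = drop_other_noise

transforms.conf:

# Second transform that discards the new garbage
[drop_other_noise]
REGEX = <pattern for the new garbage>
DEST_KEY = queue
FORMAT = nullQueue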

[Screenshot: ingest volume spike in Splunk Cloud - fahmed11_0-1617112670910.png]

 

After a while it stopped on its own, and the new filter now works as expected as well (I'm so confused). However, as you can imagine, this crazy amount of logs flowing into Splunk Cloud every time we want to discard logs defeats the purpose of the whole exercise.

 

I want to understand if this is a known issue and if there is a way around it.
