Getting Data In

Heavy Forwarder - Filtering Juniper events

braxton839
Explorer

Greetings, I have been reading through documentation and responses on here about filtering out specific events at the heavy forwarder (trying to reduce our daily ingest).

In the local folder for our Splunk_TA_juniper app, I have created a props.conf and a transforms.conf and set the owner/permissions to match the other .conf files.
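
For reference, the two files live under the app's local directory (paths shown for a default install; adjust if yours differs):

$SPLUNK_HOME/etc/apps/Splunk_TA_juniper/local/props.conf
$SPLUNK_HOME/etc/apps/Splunk_TA_juniper/local/transforms.conf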

props.conf:
# Filter teardown events from Juniper syslogs into the nullqueue
[juniper:junos:firewall:structured]
TRANSFORMS-null= setnull

transforms.conf:
# Filter juniper teardown logs to nullqueue
[setnull]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

I restarted the Splunk service... but I'm still getting these events.

I'm not sure what I did wrong. I pulled some raw event text and tested the regex in PowerShell, and it matched with -match.

Any help would be greatly appreciated!

Labels (1)
0 Karma
1 Solution


livehybrid
SplunkTrust
SplunkTrust

Hi @braxton839 

The config you have looks good, but I think the issue here is that the sourcetype you are referencing (juniper:junos:firewall:structured) is actually being set during the parsing process and therefore wouldn't be picked up by your custom props/transforms. The incoming sourcetype is overwritten by the transform "force_sourcetype_for_junos_firewall_structured".
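
For context, that sourcetype-rewriting transform in the TA is roughly this shape (illustrative only - check default/transforms.conf in your copy of Splunk_TA_juniper for the exact REGEX):

[force_sourcetype_for_junos_firewall_structured]
# REGEX below is a placeholder; the shipped TA matches the structured RT_FLOW format
REGEX = RT_FLOW
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::juniper:junos:firewall:structured

Because that rewrite happens during parsing, the sourcetype at the point your props are evaluated is still whatever the input assigned (e.g. juniper), so a stanza keyed on juniper:junos:firewall:structured never matches on the HF.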

Instead you could try the following:

== props.conf ==
[source::....junos_fw]
TRANSFORMS-null= setnull

[juniper]
TRANSFORMS-null= setnull

== transforms.conf ==
# Filter juniper teardown logs to nullqueue
[setnull]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue
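
One way to confirm what source and sourcetype the events actually arrive with (the index name is just a placeholder) is a quick search:

index=your_index RT_FLOW_SESSION_CLOSE | stats count by host, source, sourcetype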

 

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

0 Karma

braxton839
Explorer

Thank you for your response!

Unfortunately, that did not work. 😞

0 Karma

livehybrid
SplunkTrust
SplunkTrust

Hi @braxton839 

Just to check, this is a HF, not a UF, right? And it doesn't pass through another HF before reaching this one?

Are you able to confirm what sourcetype the syslog input is set to on this host? I'm assuming it's "juniper", but if it's anything else then the props.conf stanza I supplied would need updating.
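
For example, a syslog input on the HF would normally carry the sourcetype like this (the stanza and port are placeholders - yours may be TCP or a different port):

== inputs.conf ==
[udp://514]
sourcetype = juniper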

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

0 Karma

braxton839
Explorer

Yes, this is a Heavy Forwarder (to be specific, 2 Heavy Forwarders). Juniper device event logs are sent directly to these Heavy Forwarders.

According to our inputs.conf file the sourcetype for these events is:

juniper

0 Karma

livehybrid
SplunkTrust
SplunkTrust

Hi @braxton839 

If they are HFs then the config should work - you'll need to restart the HFs after deploying.

== props.conf ==
[juniper]
TRANSFORMS-aSetnull = setnull

== transforms.conf ==
# Filter juniper teardown logs to nullqueue
[setnull]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

If it's coming in with the juniper sourcetype then I'm not sure why this wouldn't work. It's worth double-checking for typos etc. I assume there are no other props/transforms that you have customised which alter the queue value?

I've updated the TRANSFORMS- suffix on the above from the original to see if ordering makes any difference here; this should change the precedence so it is applied before other things like sourcetype renaming.
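
A quick way to verify the HF is actually picking your files up (run on the HF itself; path assumes a default install) is btool:

$SPLUNK_HOME/bin/splunk btool props list juniper --debug
$SPLUNK_HOME/bin/splunk btool transforms list setnull --debug

The --debug flag shows which file each setting comes from, so you can spot a conflicting stanza or a copy of the config sitting in the wrong app.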

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

  

braxton839
Explorer

Thank you so much!

0 Karma

PickleRick
SplunkTrust
SplunkTrust

One more thing - try to be more creative with your transform names. A "common" name like "setnull" can easily cause a collision with an identically named transform defined elsewhere.
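
For example (the name itself is just an illustration):

== transforms.conf ==
[juniper_null_rt_flow_session_close]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

== props.conf ==
[juniper]
TRANSFORMS-aJuniperNull = juniper_null_rt_flow_session_close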

BTW, why not just _not_ send those events from the JunOS? You'd get both a lower CPU load on the box and less work on the receiving end.

braxton839
Explorer

Thank you for the tip about transform names; I'm adding that to my Splunk notes.
I'm hoping this filtering is only a temporary solution. I do want to stop the Juniper equipment from sending "RT_FLOW_SESSION_CLOSE" logs once our team has more time.

0 Karma