Splunk Search

Juniper firewall search time field extraction not working "sometimes"

Jarohnimo
Builder

Hello,

I need help fixing an issue with search-time field extractions in Juniper firewall logs (very chatty). The issue isn't actually the props or transforms (I don't believe). They extract the events perfectly... "most of the time". But every now and again you click on an event and notice that its fields haven't been extracted. The events with the missing field extractions are similar in content to the ones that work, so there's no obvious reason why they aren't following the rules in the props and transforms set before them. It doesn't make sense that it works all the time for the majority of the firewalls and only works sometimes for 3. It is noted that within 1 second there are about 113 events (all sharing the same second timestamp), however other firewalls are sending the same amount of events and never have the issue.

All firewalls (hosts) are using the exact same props.conf and transforms.conf files, sent via the deployment server.

For example, the exact same server that was having the issue around 1pm isn't having it right now, so it's somewhat sporadic. It's only affecting 3 hosts out of 22 (these 3 hosts are on the high side for event volume, though there's a 4th on the high side with the same number of events that never has issues).

Juniper fw logs > hf/syslog server > index cluster

Any thoughts on what to look into as to why only a few firewalls are having this issue? Thanks.


Jarohnimo
Builder

Hi,

I ended up running btool on the SH and saw some local transforms.conf entries. I've noted that IPv6 is the culprit, as the app was designed around IPv4 only. So when IPv6 events come in, a number of extractions don't occur.
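For reference, a minimal sketch of what that kind of mismatch can look like in transforms.conf (the stanza names, field name, and event format below are illustrative assumptions, not taken from the actual Juniper app):

```ini
# Hypothetical transforms.conf sketch, assuming events like
#   source-address=10.1.2.3  or  source-address=2001:db8::1

# IPv4-only: silently fails to match when the log carries an IPv6 address
[juniper_src_ip_v4only]
REGEX = source-address=(\d{1,3}(?:\.\d{1,3}){3})
FORMAT = src_ip::$1

# Broadened: accepts the character set of either IPv4 or IPv6 addresses
[juniper_src_ip]
REGEX = source-address=([0-9A-Fa-f.:]+)
FORMAT = src_ip::$1
```

When the first style of stanza is what's deployed, IPv4 events extract fine and IPv6 events simply come back with no fields, which matches the "works most of the time" symptom.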


wyfwa4
Communicator

Did you manage to fix the issue? Or is the issue still occurring with the IPv6 events?


Jarohnimo
Builder

It was IPv6, as well as the network interface field, that needed to be fixed in the transforms regex.
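Both failure modes are easy to reproduce outside Splunk. The sketch below (event text and field patterns are assumptions for illustration) shows an IPv4-only pattern returning nothing on an IPv6 event, and a `\w+`-style interface pattern truncating an interface name like `ge-0/0/1.0`:

```python
import re

# Hypothetical Juniper-style event (format is illustrative)
v6_event = "RT_FLOW_SESSION_CREATE: source-address=2001:db8::1 interface=ge-0/0/1.0"

# IPv4-only pattern, like the one the app shipped with
v4_only = re.compile(r"source-address=(\d{1,3}(?:\.\d{1,3}){3})")
# Broadened pattern: accepts IPv4 or IPv6 address characters
v4_or_v6 = re.compile(r"source-address=([0-9A-Fa-f.:]+)")

print(v4_only.search(v6_event))             # None -> extraction silently fails
print(v4_or_v6.search(v6_event).group(1))   # 2001:db8::1

# Interface names contain '-', '/', and '.', which \w+ stops at
iface_naive = re.compile(r"interface=(\w+)")
iface_fixed = re.compile(r"interface=([\w./-]+)")

print(iface_naive.search(v6_event).group(1))  # ge  (truncated)
print(iface_fixed.search(v6_event).group(1))  # ge-0/0/1.0
```

A partial match is arguably worse than no match, since the field exists but holds the wrong value.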


wyfwa4
Communicator

Is this on a single indexer? or through multiple indexers?

It is also not clear whether you are referring to index-time or search-time field extraction; each has completely different root causes and ways to diagnose the issue.

In most cases this can come down to even a single extra character or space in the raw event that makes the regular expression not match. Some examples of events that work and some that do not may help. For example, if it occurs at specific times of day, maybe the timestamp in the raw event changes from a single-digit hour to a double-digit hour.
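To illustrate how a single extra space can break an otherwise correct extraction (event text and patterns here are made up for the demo):

```python
import re

good = "reason=close src=10.0.0.1 dst=10.0.0.2"
bad  = "reason=close  src=10.0.0.1 dst=10.0.0.2"   # note the double space

# Strict pattern assumes exactly one space between fields
strict = re.compile(r"reason=(\w+) src=(\S+)")
# Tolerant pattern allows any run of whitespace
tolerant = re.compile(r"reason=(\w+)\s+src=(\S+)")

print(strict.search(good).group(2))    # 10.0.0.1
print(strict.search(bad))              # None -> extraction fails on this event
print(tolerant.search(bad).group(2))   # 10.0.0.1
```

Writing `\s+` between tokens instead of a literal space makes extractions resilient to this class of source-data variation.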


Jarohnimo
Builder

Search-time field extractions, and once again there's no issue with the props or transforms regex. The fields extract perfectly most of the time...

As I stated, these are firewall logs going to a HF/syslog server and then to an index cluster. There are a few SHs using the index cluster.

Only for 3 firewalls out of 22 do we see the issue, where certain events aren't obeying the search-time field extractions even though the regex to extract them is correct (the same regex used for the firewalls that do work).
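One way to isolate the offending events for comparison is to search for events where an expected field is missing (an illustrative SPL sketch; the index, sourcetype, host, and field names are assumptions):

```
index=firewall sourcetype=juniper host IN (fw1, fw2, fw3)
| where isnull(src_ip)
| table _time host _raw
```

Putting the raw text of a failing event next to a working one from the same host usually exposes the structural difference the regex is tripping on.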


wyfwa4
Communicator

I would not rule out a regex issue. I am not saying there is a problem with the props or transforms, but with the source data: we have seen issues with firewall logs where local configuration on each firewall meant the structure of events differed slightly between devices.

I am also assuming that you have checked the full list of fields that are extracted; many fields will be hidden if Splunk thinks they do not contain useful information. I just want to cover everything, and without additional information I have no way of knowing how familiar you are with Splunk.

You also mention that you have multiple search heads and an index cluster. When searching, the field extractions on each indexer are controlled by the search head configuration, but this has to be replicated to each indexer through the knowledge bundle (https://docs.splunk.com/Documentation/Splunk/8.0.3/DistSearch/Knowledgebundlereplication). I have seen problems on individual indexers where replication is delayed or corrupted, and this is not immediately obvious when you get your search results.
