Getting Data In

Unable to get timestamp for Fortigate logs

scottsavareseat
Path Finder

I have an indexer cluster and a search head environment. I've deployed the Splunk_TA_fortinet_fortigate app on both the search head and the indexer cluster. Logs come in via syslog to syslog-ng, which ships them to the indexers via the HTTP Event Collector's raw endpoint.

Logs arrive with the fortigate_log sourcetype, and the TA then rewrites the sourcetype to the correct type via a transform. Those sourcetypes specify TIME_PREFIX = ^ in props.conf. However, this doesn't work here because there is no date at the start of the event for Splunk to extract; there is only a time field (see the sample data below).
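
(For context, that sourcetype rewriting boils down to something like the sketch below. The stanza and transform names are placeholders I'm using for illustration, not the add-on's actual ones.)

# props.conf (illustrative)
[fortigate_log]
TRANSFORMS-force_st = fgt_set_sourcetype_traffic

[fortigate_traffic]
TIME_PREFIX = ^

# transforms.conf (illustrative)
[fgt_set_sourcetype_traffic]
REGEX = type="traffic"
FORMAT = sourcetype::fortigate_traffic
DEST_KEY = MetaData:Sourcetype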

What I want is to use the eventtime field as the timestamp. So I created a local/props.conf that looks like this:

[fortigate_log]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

[fgt_log]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

[fortigate_traffic]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

[fortigate_utm]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

[fortigate_anomaly]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

[fortigate_event]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

The idea comes from a few places, notably these documents:

I deployed it to the indexer cluster first with just the traffic and utm stanzas, thinking I needed to override what is in the default props.conf, but that didn't help. So I added it to the incoming fortigate_log sourcetype, hoping the time extraction would happen earlier in the ingestion process. Neither seems to do anything. I also tried putting it on the search head, thinking the configuration bundle it sends to the cluster might be overriding my config. Still nothing.
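
For reference, the way I'm pushing the change out is roughly this (assuming the standard master-apps layout; adjust paths for your environment):

# place the override on the cluster master, e.g.
#   $SPLUNK_HOME/etc/master-apps/Splunk_TA_fortinet_fortigate/local/props.conf
# then push the bundle to the peers
$SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes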

What am I doing wrong? Any ideas?

Thanks,

Scott

Sample data:

time=23:59:59 devname="hostname" devid="devid" slot=1 logid="0000000020" type="traffic" subtype="forward" level="notice" vd="root" eventtime=1630645200040167048 tz="-0500" srcip=1.2.3.4 srcport=35847 srcintf="port25" srcintfrole="undefined" dstip=2.3.4.5 dstport=49164 dstintf="port26" dstintfrole="undefined" srcuuid="xxx" dstuuid="xxx" sessionid=455176702 proto=17 action="accept" policyid=10873 policytype="policy" poluuid="xxxx" service="udp/49164" dstcountry="United States" srccountry="United States" trandisp="noop" duration=11699117 sentbyte=49457564285 rcvdbyte=0 sentpkt=164980295 rcvdpkt=0 appcat="unscanned" sentdelta=49457564285 rcvddelta=0


Stjubit
Explorer

Did you ever get this to work? If yes, could you please post your `props.conf`?


scottsavareseat
Path Finder

No, I haven't. What I wound up doing on the syslog-ng side is adding a timestamp at the very front of the event, which Splunk parsed properly. It isn't the "event time" per se, it's the time syslog received it, but it's close enough.

This was a case of me doing something and then moving on. I learned recently that sometimes you have to restart the indexers to get them to pick up props and transforms changes. I don't know why that's the case, but it seemed to help another project I was on with similar issues. I suggest trying that to see what happens.


Stjubit147
Loves-to-Learn Lots

Thanks for the quick reply!

I was able to get it working with the following additional props.conf settings:

[fortigate_log]
TIME_FORMAT = %s%9N
TIME_PREFIX = eventtime\=
MAX_TIMESTAMP_LOOKAHEAD = 200

[fortigate_traffic]
TIME_FORMAT = %s%9N
TIME_PREFIX = eventtime\=
MAX_TIMESTAMP_LOOKAHEAD = 200

[fortigate_utm]
TIME_FORMAT = %s%9N
TIME_PREFIX = eventtime\=
MAX_TIMESTAMP_LOOKAHEAD = 200

[fortigate_anomaly]
TIME_FORMAT = %s%9N
TIME_PREFIX = eventtime\=
MAX_TIMESTAMP_LOOKAHEAD = 200

[fortigate_event]
TIME_FORMAT = %s%9N
TIME_PREFIX = eventtime\=
MAX_TIMESTAMP_LOOKAHEAD = 200

 I'm not sure if you have to restart the indexers, but I did a rolling restart.


PickleRick
SplunkTrust

Yes. You have to restart indexers to re-read configs. That's one of the reasons why many people tend to use a layer of HFs in front of indexers, even though the official recommendation is to use UFs to send directly to indexers.


scottsavareseat
Path Finder

Not really true... If you issue an `apply cluster-bundle` from your cluster master, it will deploy the configs and decide whether it needs to restart the indexers. There are other ways to reload config in Splunk, and the cluster master tries to do the least impactful one. For things like this, though, I'm learning to also issue a `rolling-restart cluster-peers` from the cluster master, which actually forces a restart of Splunk on each node.
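
For anyone following along, both are run on the cluster master, roughly like this:

# validates and pushes the bundle; restarts peers only if the cluster master decides it's needed
splunk apply cluster-bundle --answer-yes

# forces a rolling restart of every peer regardless
splunk rolling-restart cluster-peers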


PickleRick
SplunkTrust

Yes. Not all changes need a restart, but sometimes the master says a restart is not needed, and then you run `apply cluster-bundle` and it restarts the peers anyway. Not often, but it happens.

I know there are downsides to having HFs, but the ability to restart HFs freely without touching the indexers is really great.


PickleRick
SplunkTrust

OK. So your syslog-ng is sending data directly to the indexers? You don't have any HFs?
Then you should be doing time parsing on the ingesting component, the indexers. (The search head has nothing to do with time parsing.)

First thing to check: whether the sourcetype that syslog-ng sends to your HEC endpoint (or the default one set on the input, if you're not setting it explicitly) is set properly.

Check with btool (`splunk btool props list`) what the effective settings for your sourcetypes are.
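
For example, something like this on one of the indexers (pick whichever sourcetype you expect the data to land in):

# --debug shows which .conf file each effective setting comes from
$SPLUNK_HOME/bin/splunk btool props list fortigate_traffic --debug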

Are you sure you're using the raw endpoint, not the event one?
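
The distinction matters because, as I understand it, the raw endpoint pushes the payload through normal index-time parsing (so props-based timestamp extraction applies), while the event endpoint takes a JSON envelope and normally gets its timestamp from that envelope instead. Roughly (host, port and token are placeholders):

# raw endpoint - payload goes through the usual parsing pipeline
curl -k "https://splunk.example.com:8088/services/collector/raw?sourcetype=fortigate_log" \
     -H "Authorization: Splunk <hec-token>" \
     -d 'time=23:59:59 devname="hostname" ... eventtime=1630645200040167048 ...'

# event endpoint - JSON envelope, timestamp normally comes from the envelope
curl -k "https://splunk.example.com:8088/services/collector/event" \
     -H "Authorization: Splunk <hec-token>" \
     -d '{"sourcetype": "fortigate_log", "event": "time=23:59:59 ... eventtime=1630645200040167048 ..."}'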


scottsavareseat
Path Finder

Thanks... btool confirms that the indexers have the right settings, and the fact that the add-on's sourcetype transform is working confirms that the sourcetype comes in right.

And I'm definitely using the raw endpoint, straight to the load balancers via HEC. My only thought with the search head was that it sends configuration bundles to the indexer cluster.


PickleRick
SplunkTrust

The search head doesn't send a configuration bundle. It sends a knowledge bundle at search time. That's something different.

Check what the request to HEC really looks like, or do some debugging on the syslog-ng side to see how it sends the data. I can't help much there, I'm from the church of rsyslog 😉


ro_mc
Path Finder

Try setting MAX_TIMESTAMP_LOOKAHEAD to a value other than zero. This is the only variation from the app guidance provided on Splunkbase.

With the TIME_PREFIX correctly applied, a value of 20 would be appropriate, though the default MAX_TIMESTAMP_LOOKAHEAD should be sufficient, given the fairly unique TIME_FORMAT. 

Alternatively, remove the TIME_PREFIX, as it is the most restrictive of these stanza properties: if the prefix cannot be found, the time will not be extracted. In that case, you may need to extend MAX_TIMESTAMP_LOOKAHEAD to a value greater than the default, which I believe is 150 characters.
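
In other words, one of these two shapes (values are illustrative suggestions, not tested against your data):

# keep the prefix, modest lookahead
[fortigate_traffic]
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N
MAX_TIMESTAMP_LOOKAHEAD = 20

# or drop the prefix and widen the lookahead instead
[fortigate_traffic]
TIME_FORMAT = %s%9N
MAX_TIMESTAMP_LOOKAHEAD = 250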


scottsavareseat
Path Finder

I can't get rid of TIME_PREFIX, unfortunately. The first field in the sample data above is time=, and Splunk sees that and sets the timestamp from there (but there is no date there, so it can't timestamp the data properly). What I need it to do is use the eventtime field later in the event and ignore time altogether, hence the need for the prefix: it tells Splunk where to find the epoch time. And if I use TIME_PREFIX, it should override MAX_TIMESTAMP_LOOKAHEAD, but I'm setting that to 0 to let it search the whole event, just in case.


scottsavareseat
Path Finder

I couldn't get this to work in props.conf. I wound up changing the syslog-ng config to add an ISO timestamp to the front of the message sent to Splunk. Sort of a hack, but it solved the problem very easily and doesn't require me to modify the Fortigate add-on.
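
For anyone curious, the syslog-ng side amounts to prepending the timestamp in the destination template. A rough sketch, not my exact config (URL, token and macros are placeholders):

destination d_splunk_hec {
    http(
        url("https://splunk.example.com:8088/services/collector/raw?sourcetype=fortigate_log")
        method("POST")
        headers("Authorization: Splunk <hec-token>")
        # prepend an ISO timestamp to the original message body
        body("${ISODATE} ${MESSAGE}")
    );
};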


gcusello
SplunkTrust

Hi @scottsavareseat,

it's correct to change the TIME_PREFIX, but beware: "=" is a special char for regexes, so you should try:

TIME_PREFIX = eventtime\=

Also, don't use the TIME_FORMAT option, because the value is in epoch time, so Splunk can read it directly.

Ciao.

Giuseppe


scottsavareseat
Path Finder

Thanks, but adding the backslash to TIME_PREFIX in props.conf didn't help. I also tried removing TIME_FORMAT, but that didn't work either. It isn't plain epoch time... it's epoch seconds plus nanoseconds.

Do I need to make the change on the search head as well? Or is it sufficient to make the change on the cluster master and run `apply cluster-bundle` to send the changes to the indexers?
