Getting Data In

Extracting Timestamps from JSON logs in Splunk 6.5.0

baegoon
Explorer

I have a JSON-formatted event and I am trying to get props.conf to recognize the timestamp. The timestamp occurs at the beginning of the event in the "ts" field (see the example event below).
I have in my custom props.conf the following:

KV_MODE=json
TIME_PREFIX = "ts": "
TIME_FORMAT = %s.%6N
#DATETIME_CONFIG =
MAX_TIMESTAMP_LOOKAHEAD = 3
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TZ = UTC

I have also tried

INDEXED_EXTRACTIONS = json
TIME_PREFIX = "ts": "
TIME_FORMAT = %s.%Q
KV_MODE=none

Which is better for 6.5.0, INDEXED_EXTRACTIONS or KV_MODE? I assume my TIME_PREFIX regex is also wrong, since the epoch timestamp is not being extracted.
This also does not work in the Data Inputs part of Splunk when indexing the file; I can't get the timestamp extracted properly.
Lastly, does the order of the settings in the stanza matter?

Help me Splunkers!!!

{"ts":1475380313.087024,"uid":"CY8PlE1b4UHBBIE6ql","id.orig_h":"12.23.56.78","id.orig_p":62359,"id.resp_h":"172.217.4.206","id.resp_p":443,"fuid":"FAEKzAJTlOkNOzjZ8","file_mime_type":"application/pkix-cert","file_desc":"172.217.4.206:443/tcp","seen.indicator":"google-analytics.com","seen.indicator_type":"Intel::DOMAIN","seen.where":"X509::IN_CERT","seen.node":"bro","sources":["from http://hosts-file.net/ad_servers.txt via intel.criticalstack.com"]}
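(For reference: the TIME_PREFIX attempts above assume a quote and a space after the colon, but in this event "ts" is a JSON number, so that pattern can never match. A quick illustration of the difference, using Python's re module as a stand-in for Splunk's PCRE matching:)

```python
import re

# Shortened copy of the sample event above
event = '{"ts":1475380313.087024,"uid":"CY8PlE1b4UHBBIE6ql"}'

# TIME_PREFIX from the question: expects a space and an opening quote
# after the colon, i.e. "ts": "1475...
quoted_prefix = re.compile(r'"ts": "')

# TIME_PREFIX that matches this event: numeric value directly after the
# colon, anchored to the start of the event ({ escaped for Python's re)
bare_prefix = re.compile(r'^\{"ts":')

print(quoted_prefix.search(event))  # None: no space/quote after the colon
print(bare_prefix.search(event))    # matches at offset 0
```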
1 Solution

sshelly_splunk
Splunk Employee

Here is what I have:
TIME_PREFIX=^{"ts":
TIME_FORMAT=%s.%6Q
Seems to work for me. I got _time= 10/1/16 10:51:53.087 PM
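(As a sanity check, the "ts" value from the sample event really does decode to that moment; the 10/1/16 10:51 PM shown above is a local-time rendering, which in UTC is early on Oct 2, 2016. A quick Python sketch, used here only for illustration:)

```python
from datetime import datetime, timezone

ts = 1475380313.087024  # "ts" value from the sample event

# %s.%6Q means: epoch seconds, then a dot, then 6 subsecond digits
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt)  # UTC: 2016-10-02 03:51:53 (plus ~87 ms)
```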


baegoon
Explorer

OK, that works!!!!

The MAX_TIMESTAMP_LOOKAHEAD = 6 had to come out! Thanks, sshelly! It was really simple. Now on to the Bro dashboards!!!

INDEXED_EXTRACTIONS = json
TIME_PREFIX=^{"ts":
TIME_FORMAT=%s.%6Q
KV_MODE=none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TZ = UTC
category = Custom
description = Intel framework for BRO IDS
disabled = false
pulldown_type = true

baegoon
Explorer

Thanks for the response. I just tried that and even re-indexed the data, but I get an error about the timestamp being back near Jan 1, 1970 and needing to adjust MAX_DAYS_AGO and MAX_DAYS_HENCE. Do you happen to have these set as well? Otherwise I have to use an eval statement at search time.


sshelly_splunk
Splunk Employee

I think if you are getting the error about being pre-1970, it is a TIME_FORMAT issue. I would also make sure that the sourcetype you are using is not defined in multiple locations. On Linux, you can run 'find $SPLUNK_HOME/etc -name props.conf | xargs grep "your sourcetype name"' (minus the quotes and ticks) and see if it appears in multiple places. I indexed with only those 2 attributes defined, and I got a date of Oct 1, 2016. If you could post your props.conf, then we can take a look at it as well.
-hth
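(The same duplicate-stanza check can be scripted portably. A rough sketch — the paths and stanza name below are examples, not from the thread — that walks an etc/ tree and reports every props.conf defining a given stanza:)

```python
import os

def find_stanza(root, stanza):
    """Return the path of every props.conf under `root` that defines [stanza]."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if "props.conf" not in filenames:
            continue
        path = os.path.join(dirpath, "props.conf")
        with open(path, encoding="utf-8", errors="replace") as fh:
            # A stanza definition is a line that is exactly [name]
            if any(line.strip() == f"[{stanza}]" for line in fh):
                hits.append(path)
    return hits

# Example call (path is hypothetical):
# find_stanza("/opt/splunk/etc", "bro-intel")
```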


baegoon
Explorer

Sure, here is the complete props.conf file that is in my app. I did do a search, and that sourcetype is only listed in this one file.

[bro-intel]
INDEXED_EXTRACTIONS = json
TIME_PREFIX =/["][t][s]["][:]/
TIME_FORMAT = %s.%Q
TIME_PREFIX=^{"ts":
TIME_FORMAT=%s.%6Q
KV_MODE=none
DATETIME_CONFIG =
MAX_TIMESTAMP_LOOKAHEAD = 6
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TZ = UTC
category = Custom
description = Intel framework for BRO IDS
disabled = false
pulldown_type = true


sshelly_splunk
Splunk Employee

Remove the timestamp lookahead. If you want to use it, it must be set to 10 or larger for this data (just checked and confirmed it in my environment), but I did not use it.
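(That limit makes sense if you count characters: MAX_TIMESTAMP_LOOKAHEAD caps how many characters past TIME_PREFIX Splunk will read, and the epoch seconds alone are 10 digits here. A rough illustration — Python standing in for Splunk's parser — of what a 6-character window leaves behind:)

```python
from datetime import datetime, timezone

raw = "1475380313.087024"  # text immediately after the "ts": prefix

# With MAX_TIMESTAMP_LOOKAHEAD = 6, only the first 6 characters are seen:
window = raw[:6]           # "147538" -> 147538 seconds after the epoch
dt = datetime.fromtimestamp(int(window), tz=timezone.utc)
print(dt.date())           # 1970-01-02 -> triggers the MAX_DAYS_AGO warning

# With a lookahead of 10 or more, the full epoch seconds survive:
dt_ok = datetime.fromtimestamp(int(raw[:10]), tz=timezone.utc)
print(dt_ok.date())        # 2016-10-02
```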
