Getting Data In

Why does Splunk think it can't parse my timestamp?

lyndac
Contributor

I am seeing some odd behavior. My setup: Splunk 6.3.1 Enterprise with 1 search head, 4 indexers, and 1 forwarder, plus a license manager/deployment server.
The props.conf file is deployed to the search head, all indexers, and the forwarder. It looks like this:

[json_foo]
FIELDALIAS-curlybrace=office{} as office processors{} as processors
INDEXED_EXTRACTIONS=json
KV_MODE=none
MAX_TIMESTAMP_LOOKAHEAD=30
NO_BINARY_CHECK=true
TIMESTAMP_FIELDS= upTime
TIME_FORMAT= %Y-%m-%dT%H:%M:%S%Z

The inputs.conf file on the forwarder looks like this

[batch:///data/ingest/json-data]
index=foo
sourcetype=json_foo
move_policy=sinkhole
blacklist=\..*\.json

The issue I'm seeing is that I am getting "DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous ..." messages for some events. Typically it is every event in a given file that triggers the error. The error shows up in splunkd.log on the forwarder AND on whichever indexer indexes the event.

A sample of the data is:

[
{
  "identifier": "ccce-da12-83ac",
  "city": "baltimore",
  "upTime": "2016-04-22T16:40:15Z",
  "filesize": 14423,
  "user": ["user1", "user2", "user3"]
},
{
  "identifier": "cc32s-da12-83de",
  "city": "paris",
  "upTime": "2016-04-22T16:43:52Z",
  "filesize": 1223,
  "user": ["user1", "user2"]
}
]

When I look at the source files, the data looks fine, and when I examine what is indexed in Splunk, the upTime value and _time are the same. I've even compared the raw epoch values and they are identical:

index=foo | eval mytime=_time | eval mytime2=strptime(upTime,"%Y-%m-%dT%H:%M:%S%Z") | table mytime, mytime2
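As a sanity check outside Splunk, the same comparison can be sketched in plain Python. Note this is an illustration only: Python's strptime uses %z (lowercase, Python 3.7+) to accept a literal trailing "Z", where the Splunk search above uses %Z.

```python
from datetime import datetime, timezone

# Sketch only: parse the sample timestamp with the equivalent Python
# format string (%z instead of Splunk's %Z for the trailing "Z").
parsed = datetime.strptime("2016-04-22T16:40:15Z", "%Y-%m-%dT%H:%M:%S%z")
expected = datetime(2016, 4, 22, 16, 40, 15, tzinfo=timezone.utc)
print(parsed == expected)  # True: the format matches the data
```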

I'm hoping I can just ignore this since it is only a warning and the values are correct. Can anyone see if I am doing something wrong? Any help would be appreciated.

1 Solution

twinspop
Influencer

MAX_TIMESTAMP_LOOKAHEAD is telling Splunk to look only 30 bytes into the event for a timestamp. Your timestamp starts well beyond the 30-byte mark.
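A quick sketch makes this concrete. Assuming the event is serialized roughly as in the sample above (the exact byte layout on disk may differ), the timestamp's offset can be measured directly:

```python
import json

# Sketch (assumed serialization, mirroring the sample event above):
# find where the timestamp actually starts in the raw event text.
event = json.dumps({
    "identifier": "ccce-da12-83ac",
    "city": "baltimore",
    "upTime": "2016-04-22T16:40:15Z",
    "filesize": 14423,
    "user": ["user1", "user2", "user3"],
})

offset = event.index("2016-04-22")
print(offset > 30)  # True: the timestamp sits past the 30-byte lookahead
```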


martin_mueller
SplunkTrust

Off topic: if you upgrade to a newer version, you can skip that alias with this props.conf entry:

JSON_TRIM_BRACES_IN_ARRAY_NAMES = <bool>
* Tells the JSON parser not to add curly braces to array names.
* Note that enabling this will make JSON index-time extracted array field names
  inconsistent with the spath search processor's naming convention.
* For a JSON document containing the following array object, with trimming
  enabled an index-time field 'mount_point' will be generated instead of the
  spath-consistent field 'mount_point{}':
      "mount_point": ["/disk48","/disk22"]
* Defaults to false.
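For illustration only (plain Python, not Splunk internals), the naming difference the spec describes looks like this:

```python
# Illustration only (not Splunk code): the field name generated for an
# array key, with and without brace trimming.
def indexed_field_name(key, value, trim_braces):
    if isinstance(value, list) and not trim_braces:
        return key + "{}"   # spath-consistent naming
    return key              # trimmed naming

value = ["/disk48", "/disk22"]
print(indexed_field_name("mount_point", value, trim_braces=False))  # mount_point{}
print(indexed_field_name("mount_point", value, trim_braces=True))   # mount_point
```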

lyndac
Contributor

This is nifty, but are there any consequences? The docs imply I wouldn't be able to use spath if I trim the braces.


martin_mueller
SplunkTrust

If you use INDEXED_EXTRACTIONS you won't really need spath any more.



lyndac
Contributor

Oh geez... I was thinking TIMESTAMP_FIELDS behaved the same as TIME_PREFIX, with the counter starting where the field is found. I adjusted the value, but still get the same warning.

Turns out the data was actually MISSING the upTime field. I don't know how I missed that.
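A sketch of the sanity check that would have caught this (hypothetical helper, not part of my Splunk setup): scan the JSON array for events that lack the configured timestamp field.

```python
import json

# Sketch: list events in a JSON array that are missing the timestamp
# field named in TIMESTAMP_FIELDS.
def events_missing_field(records, field="upTime"):
    return [r for r in records if field not in r]

records = json.loads("""[
  {"identifier": "ccce-da12-83ac", "upTime": "2016-04-22T16:40:15Z"},
  {"identifier": "cc32s-da12-83de"}
]""")
missing = events_missing_field(records)
print(len(missing))  # 1: the second event has no upTime
```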
