Time field is not matching with _time

uagraw01
Motivator

Hello Splunkers!!

I want _time to be extracted so that it matches the time field in the events. This is token-based data: we are using an HTTP Event Collector (HEC) token to fetch the data from Kafka into Splunk, and all the default settings (including inputs.conf and props.conf) are under the search app. I have tried the props in the second screenshot under the search app, but nothing works. Please help me figure out how to get _time to match the time field.

uagraw01_0-1731853047696.png


I have applied the settings below, but nothing works for me.

CHARSET = UTF-8
AUTO_KV_JSON = false
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
TIME_PREFIX = \"time\"\:\"
category = Custom
pulldown_type = true
TIMESTAMP_FIELDS = time


PickleRick
SplunkTrust

I'm not 100% sure if "normal" time extraction works with indexed extractions. You could try setting TIMESTAMP_FIELDS.

Also - why indexed extractions? Why not just KV_MODE=json?
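For the search-time approach, a minimal props sketch might look like this (the stanza name here is a placeholder for the actual sourcetype):

```ini
[my_hec_sourcetype]
# Search-time JSON extraction instead of INDEXED_EXTRACTIONS = json
KV_MODE = json
```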


uagraw01
Motivator

@PickleRick I already tried that and added the attribute under props, but it is also not working:

"TIMESTAMP_FIELDS = time", and I also added KV_MODE=json.


PickleRick
SplunkTrust

Wait, but if your local timezone is EST and your profile is configured with EST, that's actually the proper timestamp.

The source is reporting 14:15 UTC, so that is 9:15 EST.


ITWhisperer
SplunkTrust

What time zone are you in? The time shown in the _time field is in your local time zone which appears to be 5 hours different from the time in the log. Is this the discrepancy you are seeing?


uagraw01
Motivator

@ITWhisperer I can account for that timezone difference by using the TZ attribute in props. But I am having another issue, with the nanoseconds.

uagraw01_0-1731923712032.png

 


ITWhisperer
SplunkTrust

I have lost count of the number of times we have suggested (requested) that event data be shown in raw format (preferably in a code block using the </> button). Splunk will be processing the raw data, not the formatted, "pretty" version you have shown us. In light of this, is your actual raw event data a JSON object, and therefore wouldn't the TIME_PREFIX be more like "time":" (perhaps with some whitespace, \s)?


uagraw01
Motivator

@ITWhisperer Thanks for the information.

Yes, my actual data is in JSON format. Could you please suggest what I need to do in props so the events can be parsed properly with the timestamp field of the events?


bowesmana
SplunkTrust

Your data is JSON, but you are showing a screenshot of Splunk presenting that data to you in a formatted way. Please click "show as raw text" and show what time looks like in the raw data, not the pretty-printed version.

bowesmana_0-1731993175382.png

 


uagraw01
Motivator

Hi @bowesmana, the raw text looks like this:

{"traceid":"00000000000000000033000000000000","spanid":"0000000000000000","datacontenttype":"application/json","data":{"retentionMinutes":43200,"batchSize":1000,"windowDurationSeconds":600},"messages":[],"specversion":"1.0","classofpayload":"com.vl.decanter.decanter.generici.domain.command.PurgeCommand","id":"ccbae519-foa4-4c0c-ad75-261720d764e5","source":"decanter-scheduler","time":"2024-11-19T09:30:00.058376Z","type":"PurgeEventOutbox"}


bowesmana
SplunkTrust

So now you need to set the time prefix to match your actual raw text, i.e.

"time":"2024...

AND you need the lookahead set, because time is at the end of your JSON. Your raw data does not appear to have any whitespace in between the fields/colon/value, so try

 

MAX_TIMESTAMP_LOOKAHEAD = 550
TIME_PREFIX = \"time\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z
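As an illustration (not Splunk's actual parser), this Python sketch shows how a prefix regex like "time":" locates the timestamp in the raw event from this thread, and that the microsecond value then parses cleanly; Python's %f stands in for Splunk's %6N:

```python
import re
from datetime import datetime

# Raw event, shortened from the sample posted in this thread
raw = ('{"source":"decanter-scheduler",'
       '"time":"2024-11-19T09:30:00.058376Z",'
       '"type":"PurgeEventOutbox"}')

# Equivalent of TIME_PREFIX = \"time\":\" -- no whitespace around the colon
m = re.search(r'"time":"([^"]+)"', raw)
ts_text = m.group(1)

# %f plays the role of Splunk's %6N; %z accepts the trailing Z (Python >= 3.7)
ts = datetime.strptime(ts_text, "%Y-%m-%dT%H:%M:%S.%f%z")
print(ts.isoformat())  # 2024-11-19T09:30:00.058376+00:00
```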

 

EDIT: MAX_TIMESTAMP_LOOKAHEAD not needed - see @PickleRick comment below.

uagraw01
Motivator

@bowesmana Thanks for the solution and for investing your valuable time. But the microseconds are still not matching.

uagraw01_0-1732095727006.png

 

 

 


bowesmana
SplunkTrust

Splunk will only ever show milliseconds for the _time field, but if you do

...
| head 5
| eval actual_time_value=_time
| table _time actual_time_value

You can see the _time field as it is stored, as a number.
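The same point in a quick Python sketch (the epoch value is derived from the sample timestamp in this thread): the stored value keeps microsecond precision even though a millisecond display truncates it:

```python
from datetime import datetime, timezone

# Epoch seconds for the sample timestamp 2024-11-19T09:30:00.058376Z
t = datetime(2024, 11, 19, 9, 30, 0, 58376, tzinfo=timezone.utc).timestamp()

print(f"{t:.6f}")  # stored precision: 1732008600.058376
print(f"{t:.3f}")  # millisecond display: 1732008600.058
```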

 


ITWhisperer
SplunkTrust

This does not show evidence of the microseconds not matching. The Time field is merely displayed to milliseconds, not microseconds.

ITWhisperer_0-1732102220985.png

 

PickleRick
SplunkTrust

According to the docs the MAX_TIMESTAMP_LOOKAHEAD is applied _from_ the TIME_PREFIX-defined location.

uagraw01
Motivator

@ITWhisperer The server is on EST time.

 

@bowesmana I have tried the settings below, but nothing works for me. Is there any workaround I need to apply?

CHARSET = UTF-8
#AUTO_KV_JSON = false
DATETIME_CONFIG =
#INDEXED_EXTRACTIONS = json
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
MAX_TIMESTAMP_LOOKAHEAD = 550
TIME_PREFIX = time:\s+
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z
category = Custom
pulldown_type = true


Example pattern of an event:

{ [-]
classofpayload: com.v.decanter.deca.generic.domain.command.PurgeCommand
data: { [-]
batchSize: 1000
retentionMinutes: 43200
windowDurationSeconds: 600
}
datacontenttype: application/json
id: 32e31ec6-2362-4b46-966e-ec4bdbb3llbe
messages: [ [-]
]
source: decanter-scheduler
spanid: 0000000000000000
specversion: 1.0
time: 2024-11-18T04:15:00.057785Z
traceid: 00000000000000000000000000000000
type: PurgeEventOutbox
}


ITWhisperer
SplunkTrust

So the time is out by exactly 5 hours which represents your timezone, therefore it is correct. Are there any other discrepancies apart from this (which is now accounted for)?


bowesmana
SplunkTrust

Depending on what your _raw event looks like you may have to set

MAX_TIMESTAMP_LOOKAHEAD

as the default lookahead is only 128 characters.

Also make sure the raw event doesn't have any whitespace between the JSON name, colon, and value - your regex doesn't allow for any whitespace.
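As a sketch, a prefix regex that tolerates optional whitespace around the colon would look like this (illustrative Python, not Splunk internals):

```python
import re

# Matches "time":"..." as well as "time" : "..."
prefix = re.compile(r'"time"\s*:\s*"')

tight = '{"time":"2024-11-19T09:30:00.058376Z"}'
spaced = '{"time" : "2024-11-19T09:30:00.058376Z"}'

print(bool(prefix.search(tight)))   # True
print(bool(prefix.search(spaced)))  # True
```

In props.conf terms, the equivalent would be something like TIME_PREFIX = \"time\"\s*:\s*\".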


richgalloway
SplunkTrust

Where are the props installed?  They must be on the first full instance (indexer or heavy forwarder) that touches the data.

If the data is being onboarded via HEC, then it's possible the usual ingestion pipeline is bypassed.  Which HEC endpoint is used?

BTW, to recognize the time zone, the TIME_FORMAT setting should be

TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z

 

---
If this reply helps you, Karma would be appreciated.

uagraw01
Motivator

@richgalloway 

1. Props is installed in the search app.

2. The setting is on the indexer itself, and I am using the endpoint below.

3. Endpoint is : services/collector/raw

4. I will try and add %Z in my current props.

 

Thanks


richgalloway
SplunkTrust

1)  OK, but search is Splunk's app.  Your settings should be in your own app.

2) Is the HEC endpoint on the indexer? If not, the props aren't doing anything. Make sure the props are on the same instance as HEC.

4) As Yoda would say, "do or do not, there is no try"
