Getting Data In

When parsing JSON events, why does Splunk round a nanosecond timestamp to 'none' and use the index time as _time?

rusty009
Path Finder

I'm having real issues parsing JSON events. I have a distributed Splunk setup, and I have tested uploading the logs manually through Splunk Web on the search head with the sourcetype below, where everything works perfectly:

[cloudflare]
INDEXED_EXTRACTIONS=json
KV_MODE=none
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TIMESTAMP_FIELDS=timestamp
TIME_FORMAT=%s%9N
TIME_PREFIX=^
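
For context, an event with an epoch-nanosecond timestamp looks roughly like the sample below (the field names and values here are invented for illustration, not taken from my actual Cloudflare logs). TIME_FORMAT=%s%9N is intended to match the 10 epoch-second digits immediately followed by the 9 nanosecond digits:

{"timestamp":"1569537600123456789","ClientIP":"203.0.113.10","ClientRequestHost":"example.com","EdgeResponseStatus":200}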

Data is coming in through a universal forwarder. On the UF, I have the following settings:

[cloudflare]
INDEXED_EXTRACTIONS=json
KV_MODE = none
AUTO_KV_JSON = false

and on the indexers I have:

[cloudflare]
TIMESTAMP_FIELDS=timestamp
TIME_FORMAT=%s%9N
TIME_PREFIX=^

When the data comes in, it takes the timestamp of when it was indexed, not the timestamp value from the event. There are also two timestamp fields per event: one with the nanosecond timestamp value and the other with 'none'. The value or string 'none' appears nowhere in my events. When I hover over the timestamp, I get a pop-up message saying:

This value may have been rounded because it exceeds the maximum allowed int value.

This is the error I was initially seeing when manually uploading the data on the search head, so I added TIME_FORMAT=%s%9N, which did the trick there but doesn't seem to work on the indexers. I have swapped the sourcetypes around between the UF and the indexers, but it doesn't seem to do any good. What am I doing wrong? Screenshots attached of what I am seeing.

1 Solution

rusty009
Path Finder

So,

Unfortunately, I managed to find the answer. Splunk currently does not support nanoseconds; the finest timestamp resolution it can handle is microseconds! So if you find yourself in the same situation, you will need to apply the config below (assuming the timestamp field in your JSON is called timestamp):

[cloudflare]
INDEXED_EXTRACTIONS=json
KV_MODE=none
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TIMESTAMP_FIELDS=timestamp
TIME_PREFIX=^timestamp:"
MAX_TIMESTAMP_LOOKAHEAD = 10
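
To spell out why MAX_TIMESTAMP_LOOKAHEAD = 10 works (the timestamp value below is invented, purely for illustration): the first 10 digits of a 19-digit epoch-nanosecond value are simply the epoch-seconds value, so limiting the timestamp processor to 10 characters truncates the nanoseconds and leaves a plain Unix timestamp that Splunk can handle. The sub-second part is lost, which is the trade-off.

1569537600123456789    <- full epoch-nanosecond value in the event (19 digits)
1569537600             <- first 10 characters, read as epoch seconds
         123456789     <- remaining 9 digits (nanoseconds), ignored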

jmallorquin
Builder

Hi,

The UF doesn't parse anything, so you should put all of the parsing configuration on the indexer:

UF
inputs.conf

[monitor bla bla bla]
sourcetype=cloudflare
index= bla bla bla

INDEXER

props.conf

[cloudflare]
INDEXED_EXTRACTIONS=json
KV_MODE=none
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TIMESTAMP_FIELDS=timestamp
TIME_FORMAT=%s%9N
TIME_PREFIX=^

After configuring the indexer you have to restart the service.
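
For example, on a typical install that would be something like the following (adjust $SPLUNK_HOME to your environment):

$SPLUNK_HOME/bin/splunk restart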

Hope this helps you.
