Hello,
I have user event logs that I'm trying to ingest over TCP. Every event is a JSON object like this:
{"key1": "v1", ..., "event": {"time": "$ISO8601_VALUE", "keyn": "vn"}, ...}
Here's my props.conf on the indexer node (I don't use forwarders yet):
/opt/splunk/etc/apps/search/local/props.conf:
[usr_event]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = event.time
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TZ = UTC
category = Custom
pulldown_type = 1
KV_MODE = json
SHOULD_LINEMERGE = false
disabled = false
When I use this sourcetype with a file input, the timestamp is extracted correctly in preview, but when I use the same sourcetype on a TCP input, my custom timestamp settings are ignored and events get the timestamp of the moment they were loaded.
I prefer TCP because it makes it much easier to stream back-fill data for historical loads as well as for daily ETL.
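For reference, the kind of TCP back-fill described here can be sketched as follows. This is a minimal sketch: the host name and port are hypothetical placeholders for whatever the TCP input is configured with, and `make_event` is an illustrative helper, not anything Splunk provides.

```python
import json
import socket
from datetime import datetime, timezone

def make_event(key1, event_time, keyn):
    """Build one newline-terminated JSON event in the shape described above."""
    event = {
        "key1": key1,
        "event": {
            # Formatted to match TIME_FORMAT = %Y-%m-%dT%H:%M:%S (UTC, per TZ = UTC)
            "time": event_time.strftime("%Y-%m-%dT%H:%M:%S"),
            "keyn": keyn,
        },
    }
    return json.dumps(event) + "\n"

def stream_events(lines, host="splunk-indexer", port=9997):
    # Hypothetical host/port for the Splunk TCP input; adjust to your setup.
    with socket.create_connection((host, port)) as sock:
        for line in lines:
            sock.sendall(line.encode("utf-8"))

line = make_event("v1", datetime(2015, 10, 26, 1, 32, 23, tzinfo=timezone.utc), "vn")
# stream_events([line])  # uncomment once the TCP input is listening
```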
Is there something wrong with my settings?
Thanks,
David
You are configuring JSON extraction twice; use either INDEXED_EXTRACTIONS = json or KV_MODE = json, but NOT both. In your case, keep the former and remove the latter. Also make sure this props.conf file gets deployed to the FORWARDER (yes, not just to the indexer, since INDEXED_EXTRACTIONS is applied where the data is first parsed) and that all Splunk instances involved are restarted.
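With KV_MODE removed, the stanza would look like this (a sketch assembled from the settings quoted in the question):

```ini
[usr_event]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = event.time
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TZ = UTC
category = Custom
pulldown_type = 1
SHOULD_LINEMERGE = false
disabled = false
```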
I tried it again yesterday, no luck. Perhaps Splunk is not recognizing the nested field event.time. I also have a log event ID that embeds a timestamp and looks like 20151026013223432432432... I tried matching just the strptime-parseable portion of it, but so far that hasn't worked either.
Update:
Ended up using TIME_PREFIX instead of TIMESTAMP_FIELDS:
TIME_PREFIX=\"logEventId\":\"
TIME_FORMAT = %Y%m%d%H%M%S%3N
It seems to work consistently, will update this thread if it doesn't 🙂
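As a sanity check that the %Y%m%d%H%M%S part of that format really matches the leading digits of the ID, here is a small sketch. It assumes the logEventId begins with a yyyymmddHHMMSSmmm timestamp; Python's strptime has no %3N (that directive is Splunk-specific), so the milliseconds are split off by hand.

```python
from datetime import datetime

raw = "20151026013223432"  # hypothetical leading digits of a logEventId-style value
ts = datetime.strptime(raw[:14], "%Y%m%d%H%M%S")  # the seconds-resolution part
millis = int(raw[14:17])  # the digits Splunk's %3N would consume

print(ts.isoformat(), millis)  # → 2015-10-26T01:32:23 432
```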
Thanks,
David
Check your fields and find the name that Splunk has given the timestamp field; it must not be event.time.
Thank you, TIME_PREFIX setting is working for me so far.
Thank you, will try it first thing Monday.