I have JSON data that varies greatly in size, with the timestamp field coming at the end of each event. I'm able to parse all the timestamps correctly using the config TIME_PREFIX="timestamp":+ except for the very large events. My question: in order to parse the timestamp for the very large events, do I need to add a MAX_TIMESTAMP_LOOKAHEAD? Or, if I added a larger TRUNCATE, would the TIME_PREFIX config still need MAX_TIMESTAMP_LOOKAHEAD?
props.conf
[mysourcetype]
CHARSET=UTF-8
INDEXED_EXTRACTIONS=json
KV_MODE=none
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
category=Structured
description=JavaScript Object Notation format. For more information, visit http://json.org/
disabled=false
pulldown_type=true
TIME_PREFIX="timestamp":+
The MAX_TIMESTAMP_LOOKAHEAD setting is measured from the point where TIME_PREFIX matches, so changing it won't help here. It's likely you're running into your TRUNCATE limit. Try increasing that after you make sure events are breaking correctly.
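For example, the TRUNCATE increase could be added to the same stanza (the 50000 value below is purely illustrative; pick a limit that comfortably covers your largest events):

```
# props.conf -- value is illustrative, size it to your largest event
[mysourcetype]
TRUNCATE = 50000
```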
As @richgalloway rightly pointed out, you should look into increasing the value of TRUNCATE (defaults to 10,000). Splunk logs its complaints about truncation to splunkd.log under $SPLUNK_HOME/var/log/splunk. Check it to make sure you're facing the same issue.
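A quick way to check for those complaints is to grep splunkd.log. The exact wording of the truncation warning can vary by Splunk version, so the sketch below matches loosely on "truncat" (the sample log line and paths are illustrative assumptions, not guaranteed message text):

```shell
# Simulate a splunkd.log truncation warning (real message text may differ),
# then count matching lines with a loose, case-insensitive pattern.
LOG=$(mktemp)
echo 'WARN  LineBreakingProcessor - Truncating line because limit has been exceeded' > "$LOG"

# On a real indexer you would instead run:
#   grep -ci "truncat" "$SPLUNK_HOME/var/log/splunk/splunkd.log"
grep -ci "truncat" "$LOG"

rm -f "$LOG"
```

A nonzero count means events are being cut off at the TRUNCATE limit before the timestamp field is reached.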