Our Sales Engineer told us that Splunk's JSON parser requires several specific things in a JSON document in order for it to be interpreted as JSON. What are they? We would like to avoid hard-coded solutions. How do we assign each JSON document to a distinct event?
Why not just apply base configs to your JSON sourcetype and have it break correctly, rather than trying to format the log for Splunk?
If you let Splunk try to figure out the line breaking on its own, it adds overhead to your indexing and slows it down.
Adding the following gives you correct line breaking and timestamping, and it avoids the merging pipeline, which increases your indexing speed:
[sourcetype]
TIME_PREFIX =
TIME_FORMAT =
LINE_BREAKER =
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD =
TRUNCATE =
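As a hedged illustration only, here is one way the base config above might be filled in for logs that emit one compact JSON object per line with an ISO-8601 "time" field. The sourcetype name, the timestamp field name, and all of the values below are assumptions for the sake of example, not settings from this thread; adjust them to your actual data.

```
[my_json_sourcetype]
TIME_PREFIX = \"time\":\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 32
TRUNCATE = 10000
```

With SHOULD_LINEMERGE = false and an explicit LINE_BREAKER, each newline-delimited JSON object becomes its own event without going through the line-merging pipeline.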
@skoelpin, good question.
We have teams that can format their JSON logs to meet Splunk's needs, so we are lucky in that sense.
We were told by the Sales Engineer that as long as it's proper JSON, all we need to do is set
INDEXED_EXTRACTIONS = json
category = Structured
in props.conf.
For the record, the predefined _json sourcetype has these two settings defined:
INDEXED_EXTRACTIONS = json
category = Structured
This solution works!!!
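To illustrate the "each JSON document as a distinct event" part of the original question: the format that splits cleanly under the configs in this thread is newline-delimited JSON, one compact object per line. A minimal Python sketch (the field names and values here are made up for illustration):

```python
import json

# Hypothetical log events; in practice these would come from your application.
events = [
    {"time": "2024-01-01T12:00:00Z", "level": "INFO", "msg": "started"},
    {"time": "2024-01-01T12:00:01Z", "level": "ERROR", "msg": "failed"},
]

# One compact JSON object per line, no pretty-printing across lines,
# so a newline-based LINE_BREAKER maps each document to one event.
ndjson = "\n".join(json.dumps(e, separators=(",", ":")) for e in events)

print(ndjson)
```

Each printed line is a complete, parseable JSON document, which is what lets Splunk assign one event per document without any hard-coded breaking rules.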
Your Sales Engineer is partially right, but you should ALWAYS apply base configs to lessen the load on your indexers. This is a big part of the SCC2 bootcamp.
Much appreciated @skoelpin.