Is it possible to check the performance of the parsing and merging pipeline when making changes to props.conf for a particular source or sourcetype?
We currently have only LINE_BREAKER set for this source, and we would like to recommend performance improvements by adding the props.conf settings that are part of Splunk's best practices, such as:
LINE_BREAKER
MAX_TIMESTAMP_LOOKAHEAD
TIME_PREFIX
TIME_FORMAT
SHOULD_LINEMERGE
TRUNCATE
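For reference, a minimal props.conf sketch with those settings might look like the following. The stanza name, regexes, and timestamp format are hypothetical placeholders; the real values depend on the actual forescout event format.

```
# props.conf -- hypothetical example; adjust to the real event format
[forescout:syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 15
```

With SHOULD_LINEMERGE disabled and an explicit LINE_BREAKER, TIME_PREFIX, and TIME_FORMAT, the parsing pipeline avoids the expensive line-merge and timestamp-search steps, which is the usual motivation for these settings.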
I also looked at the Monitoring Console (MC/DMC) under the Indexing tab, but it wasn't much help.
I tried digging through metrics.log with a search like:

index=_internal host=indexer source="/opt/splunk/var/log/splunk/metrics.log" (processor=linebreaker OR processor=aggregator)

It returned data I might be interested in, but there is no distinction of which source or sourcetype the numbers belong to; I assume they are aggregated across all sources and sourcetypes. For example:
07-22-2019 14:45:16.947 -0400 INFO Metrics - group=pipeline, name=parsing, processor=linebreaker, cpu_seconds=0, executes=97, cumulative_hits=1706601
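As a sketch, those aggregate numbers can at least be trended over time to see whether a props.conf change shifts overall parsing cost (this assumes the cpu_seconds field is extracted from metrics.log events like the one above, which it normally is in _internal):

```
index=_internal source=*metrics.log* group=pipeline name=parsing
    (processor=linebreaker OR processor=aggregator)
| timechart span=5m sum(cpu_seconds) AS cpu_seconds BY processor
```

Comparing the trend before and after the props.conf change is a rough proxy, since the numbers still cover every source and sourcetype on the indexer.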
I also ran a search for the source I was interested in (forescout):

index=_internal host=indexer source="/opt/splunk/var/log/splunk/metrics.log" forescout

and I came across metrics.log events referencing the forescout index, source, and sourcetype, in groups like:
per_index_thruput
per_sourcetype_thruput
per_source_thruput
thruput
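For reference, a per-sourcetype volume trend from those groups can be charted like this (the series value forescout* is a guess at what the series field contains for this data):

```
index=_internal source=*metrics.log* group=per_sourcetype_thruput series=forescout*
| timechart span=5m sum(kb) AS kb
```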
But I read in the Splunk docs ("About metrics.log"; I can't post links) that the thruput messages relate to the size of the raw items flowing through the data pipeline once they reach the indexing pipeline. That all takes place after the parsing and merging pipelines, so it's not of any help to me.
If anyone has any ideas, please let me know!
Thanks.
7/23 edit:
I came up with:
index=forescout
| eval latency=(_indextime-_time)
| eval day=strftime(_time,"%b/%d")
| stats avg(latency), min(latency), max(latency) BY day
It's not exactly what I'm looking for, but I think it will provide some insight into what I'm trying to achieve.
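Building on that, one possible refinement (a sketch, not a direct measurement of parsing cost) is to split the indexing latency by sourcetype and add a percentile, so that a props.conf change affecting one sourcetype stands out from the others:

```
index=forescout
| eval latency=_indextime-_time
| stats avg(latency) AS avg_latency perc95(latency) AS p95_latency max(latency) AS max_latency BY sourcetype
```

Latency includes everything between event time and index time (forwarder lag, queues, parsing), so it only hints at parsing performance rather than isolating it.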