Timestamp logic/config on forwarder or indexer?

Communicator

I looked at the report for timestamping errors and found a fair number of them. I've been following the Splunk blogs and saw Vi's post: http://blogs.splunk.com/2010/03/02/guess-what-time-it-is/

We absolutely should turn off timestamp checking and just go with Splunk's time, because the logs in question capture console output (from rootsh, specifically), which of course contains plenty of timestamps well in the past.

Do I update props.conf on the indexer or the forwarder? We are using light forwarders, so I'm not sure whether the timestamp is extracted on the indexers or on the forwarders.

Splunk Employee

The time is extracted where the log data is parsed. This is on the indexer if you are using a lightweight forwarder, and on the forwarder if you are using a heavy forwarder. (Parsing the data is the essential difference between light and heavy forwarding.)
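
For example, with light forwarders the relevant props.conf lives on the indexer, typically in $SPLUNK_HOME/etc/system/local/. A minimal sketch of what you are after (the [rootsh] sourcetype name is an assumption about how those console-capture logs are indexed; DATETIME_CONFIG = CURRENT stamps each event with the indexer's clock at index time instead of anything found in the event):

# $SPLUNK_HOME/etc/system/local/props.conf on the indexer
# "rootsh" is a hypothetical sourcetype name -- match it to your inputs
[rootsh]
DATETIME_CONFIG = CURRENT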

Update: I wrote this, which goes into a more thorough explanation.

Splunk Employee

The timestamp is applied during parsing. A light forwarder does not parse; it adds just a bit of metadata about the source of the event before sending it along.

If you want to use Splunk time for a specific data source, modify the props.conf file in the local directory of the system doing the parsing.

Below is an example I stole from $SPLUNK_HOME/etc/system/README/props.conf.example:


# The following example turns off DATETIME_CONFIG (which can speed up indexing) from any path
# that ends in /mylogs/*.log.

[source::.../mylogs/*.log]
DATETIME_CONFIG = NONE
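
Adapted to the rootsh logs in the question (a sketch only; the path pattern is an assumption, so adjust it to wherever rootsh actually writes its session logs), the stanza would be something like:

# props.conf on the parsing instance -- the indexer, given light forwarders
# NONE skips timestamp extraction; CURRENT would instead stamp events
# with the indexer's clock at index time
[source::.../rootsh/*.log]
DATETIME_CONFIG = NONE

Index-time settings like this typically need a splunkd restart on the parsing instance, and events that are already indexed keep their existing timestamps.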

Splunk Employee

Maxing out the indexer's CPU is not common during indexing of data; it is common during searching, though. The solution is either more indexers or much faster indexer disks.

Communicator

Also, the flow from LWF => Heavy Forwarder => Indexer is intriguing. We are having indexer CPU performance issues... in your experience, is this common?

Communicator

Thanks! That post is very helpful.
