Timestamp logic/config on forwarder or indexer?

oreoshake
Communicator

I looked at the report for timestamping errors and found a fair number of them. I’ve been following the Splunk blogs and saw Vi’s post: http://blogs.splunk.com/2010/03/02/guess-what-time-it-is/

We absolutely should turn off timestamp checking and just go with Splunk’s time, because the logs in question capture console output (from rootsh, specifically), which of course contains plenty of timestamps well in the past.

Do I update props.conf on the indexer or on the forwarder? We are using light forwarders, so I’m not sure whether the timestamp is extracted on the indexers or on the forwarders.
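
For reference, here’s roughly the stanza I had in mind; the sourcetype name is just a placeholder for whatever our rootsh console captures come in as, and I’m not even sure CURRENT is the right setting:

# Hypothetical stanza -- "rootsh" is a placeholder for our console-capture sourcetype
[rootsh]
# Skip timestamp extraction and stamp events with the time Splunk indexes them
DATETIME_CONFIG = CURRENT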

1 Solution

gkanapathy
Splunk Employee

The time is extracted wherever the log data is parsed. That is on the indexer if you are using a lightweight forwarder, and on the forwarder if you are using a heavy forwarder. (Parsing the data is the essential difference between light and heavy forwarding.)
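
So with light forwarders, the props.conf change belongs on the indexer. A minimal sketch, assuming you drop it into the system-level local directory (an app’s local directory works just as well), with the source pattern as a placeholder for wherever these logs come from:

# On the indexer: $SPLUNK_HOME/etc/system/local/props.conf
[source::.../rootsh/*.log]
# Use the current system time at indexing instead of extracting timestamps
DATETIME_CONFIG = CURRENT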

Update: I wrote this, which goes into a more thorough explanation.


bwooden
Splunk Employee

The timestamp is applied during parsing. A light forwarder does not parse; it adds just a bit of metadata about the source of the event before sending it along.

If you want to use Splunk’s time for a specific data source, you would modify the props.conf file in the local directory of the system doing the parsing.

Below is an example I stole from $SPLUNK_HOME/etc/system/README/props.conf.example


# The following example turns off DATETIME_CONFIG (which can speed up indexing) from any path
# that ends in /mylogs/*.log.

[source::.../mylogs/*.log]
DATETIME_CONFIG = NONE
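
Since you specifically want events stamped with Splunk’s own time, DATETIME_CONFIG = CURRENT may be closer to what you’re after than NONE; CURRENT assigns the current system time to each event as it is indexed. A sketch, with the source pattern as a placeholder for wherever your rootsh output lands:

# Placeholder path -- point this at the rootsh capture files
[source::.../rootsh/*.log]
DATETIME_CONFIG = CURRENT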

gkanapathy
Splunk Employee

Maxing out indexer CPU is not common for indexing data; it is common when searching, though. The solution is either more indexers or much faster indexer disks.


oreoshake
Communicator

Also, the LWF => heavy forwarder => indexer flow is intriguing. We are having indexer CPU performance issues... in your experience, is this common?


oreoshake
Communicator

Thanks! That post is very helpful.
