Getting Data In

Why are we getting two different timestamps in one event when collecting syslog from Cisco devices?

ej_head
Engager

I collect syslog from Cisco devices, and Splunk shows two different timestamps in one event:

Oct 14 14:59:01 x.x.x.x 282814: Oct 14 14:49:54.254: %IP_VFR-4-FRAG_TABLE_OVERFLOW: GigabitEthernet0/0.214: the fragment table has reached its maximum threshold 16
Oct 14 14:59:59 x.x.x.x 282822: Oct 14 14:50:29.078: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access5, changed state to down

If I restart Splunk, the difference between the timestamps is about 1 minute at first, but it then grows; on some devices I have seen a difference of a couple of hours. Events appear in Splunk much later than on the device. At first I thought the first timestamp was the time the event was received, but on another syslog server that difference is never more than about 5 seconds.

What is the first timestamp? And what explains this latency?

1 Solution

bmunson_splunk
Splunk Employee

This behavior comes from the syslog service, which may not be Splunk. Splunk's best practice is to use a system-level syslog receiver such as rsyslog, syslog-ng, Kiwi Syslog, etc., and then have Splunk index the files they produce. Both the sending syslog service and the receiver can insert extra timestamps as they pass the information along.
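For example, here is a minimal sketch of that setup, assuming rsyslog as the receiver writing to /var/log/cisco/cisco.log (the path, port, and sourcetype are illustrative, not taken from your environment):

# rsyslog.conf on the receiving host: accept UDP syslog and write it to a file
$ModLoad imudp
$UDPServerRun 514
*.* /var/log/cisco/cisco.log

# inputs.conf on the Splunk side: index the file rsyslog produces
[monitor:///var/log/cisco/cisco.log]
sourcetype = syslog
disabled = false

With this layout Splunk never touches the network stream directly, so it does not add its own timestamp to the front of each event.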

The delay can come from a number of sources. The most common is log caching: to save disk writes, a server stores log messages in memory until it has enough to make writing to disk worthwhile. It can also come from network delays, an overly busy syslog service, etc.

I would definitely use the second timestamp, as it is more likely to be when the event happened. If all of your syslog messages look like this, add the following to your props.conf to make Splunk look for the timestamp after the IP address.

[syslog]
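# skip past the host IP and the Cisco sequence number before looking for the timestamp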
TIME_PREFIX = \d+\.\d+\.\d+\.\d+ \d+:
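With that prefix in place, the pattern matches the host address and sequence number (for example "x.x.x.x 282814:"), so Splunk extracts "Oct 14 14:49:54.254", the device's own clock, as the event time rather than the receiver's timestamp at the front of the line.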

jrodman
Splunk Employee

If this data arrives via UDP, it's possible to configure Splunk to put a timestamp on the front (traditional behavior for a receiving syslogd). It's also possible that another syslogd in the path is adding the additional timestamp.
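For reference, that prepend behavior on a direct Splunk UDP input is controlled in inputs.conf. A minimal sketch, assuming Splunk itself is listening on UDP 514 (the port and sourcetype are illustrative):

# inputs.conf: direct UDP syslog input on the Splunk instance
[udp://514]
sourcetype = syslog
connection_host = ip
# when true, Splunk does not add its own timestamp and host to each received event
no_appending_timestamp = true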

However, with a 10-minute gap as you mention, it seems like there is some indexing throughput problem, which could be caused by any number of things.
