Getting Data In

Why are we getting two different timestamps in one event collecting Syslog from Cisco devices?

ej_head
Engager

I collect syslog from Cisco devices, and Splunk shows two different timestamps in one event:

Oct 14 14:59:01 x.x.x.x 282814: Oct 14 14:49:54.254: %IP_VFR-4-FRAG_TABLE_OVERFLOW: GigabitEthernet0/0.214: the fragment table has reached its maximum threshold 16
Oct 14 14:59:59 x.x.x.x 282822: Oct 14 14:50:29.078: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access5, changed state to down

If I restart Splunk, the difference between the timestamps is about 1 minute at first, then it grows; on some devices I have seen differences of a couple of hours. Events appear in Splunk much later than on the device. At first I thought the first timestamp was the time the event was received, but on another syslog server that difference is never more than 5 seconds.

What is the first timestamp? And what about this latency?

1 Solution

bmunson_splunk
Splunk Employee
Splunk Employee

This behavior comes from the syslog service, which may not be Splunk. Splunk's best practice is to use a system-based syslog receiver such as rsyslog, syslog-ng, or Kiwi Syslog, and then have Splunk index the files it produces. Both the sending syslog service and the receiver can insert extra timestamps as they pass messages along.
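As a sketch of that best practice (the file paths, port, and filter condition are my assumptions, not from this thread), an rsyslog receiver could write each device's messages to its own file:

```
# /etc/rsyslog.d/cisco.conf -- illustrative only; adjust paths and filter to your site
module(load="imudp")
input(type="imudp" port="514")

# One file per sending device IP
$template CiscoFile,"/var/log/cisco/%fromhost-ip%.log"
if $fromhost-ip != "127.0.0.1" then ?CiscoFile
& stop
```

Splunk would then pick those files up with a monitor input, for example:

```
# inputs.conf -- assumed stanza for indexing the rsyslog output files
[monitor:///var/log/cisco/*.log]
sourcetype = syslog
disabled = false
```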

The delay can come from a number of sources. The most common is log caching: to save disk writes, a server holds log messages in memory until it has enough to make writing to disk worthwhile. The delay can also come from network latency, an overly busy syslog service, and so on.

I would definitely use the second timestamp, as it is more likely to be when the event actually happened. If all your syslog messages look like this, add the following to your props.conf to make Splunk look for the timestamp after the IP address and sequence number:

[syslog]
TIME_PREFIX = \d+\.\d+\.\d+\.\d+ \d+:
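To see what that TIME_PREFIX does, here is a small Python sketch against one of the sample events (the IP 192.0.2.1 is a hypothetical stand-in for the redacted x.x.x.x): the regex consumes everything up to and including the sequence number, so Splunk's timestamp extraction starts at the device's own timestamp.

```python
import re

# TIME_PREFIX from the props.conf above: receiver IP, space, sequence number, colon
TIME_PREFIX = r"\d+\.\d+\.\d+\.\d+ \d+:"

# Sample event from the question; 192.0.2.1 stands in for the redacted x.x.x.x
event = ("Oct 14 14:59:01 192.0.2.1 282814: Oct 14 14:49:54.254: "
         "%IP_VFR-4-FRAG_TABLE_OVERFLOW: GigabitEthernet0/0.214: "
         "the fragment table has reached its maximum threshold 16")

m = re.search(TIME_PREFIX, event)
# Everything after the prefix begins with the device's own timestamp
print(event[m.end():].lstrip()[:19])  # Oct 14 14:49:54.254
```

With this prefix in place, Splunk reads the second (device) timestamp rather than the receiver's timestamp at the front of the line.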



jrodman
Splunk Employee

If this data arrives via UDP, it's possible to configure Splunk to put a timestamp on the front (traditional behavior for a receiving syslogd). It's also possible that another syslogd in the path is prepending the additional timestamp.

However, with a 10-minute gap, as you mention, it seems like there is some indexing throughput problem, which could be caused by any number of things.
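If Splunk itself is listening on UDP, the timestamp it prepends can be turned off. A minimal inputs.conf sketch (the port is an assumption; verify the setting against your Splunk version's documentation):

```
# inputs.conf -- assumed UDP syslog input on the Splunk receiver
[udp://514]
sourcetype = syslog
# Stop Splunk from prepending its own timestamp and host to each event
no_appending_timestamp = true
```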
