Has anyone setup monitoring of ntpd stats? The problem I'm running into is that these log files have an unusual timestamp format, so I was wondering if anyone else has figured this out before.
I have two NTPd log files that I would like to monitor with splunk. We recently had some issues with our clocks getting out of sync, and so using splunk to more proactively monitor the NTP services would be ideal. Here are some sample events:
55365 184.755 0.005201000 -20.717 0.000455463 0.153665 6
55365 37381.756 0.000415000 -17.188 0.000230239 0.007782 7
55365 49826.825 -0.031047000 -16.996 0.011537315 0.059551 10
55365 52986.926 0.000128000 -16.451 0.001437442 0.067062 7
55365 52995.979 22.214.171.124 9314 -0.003170778 0.074376426 0.082986177 0.033217419
55365 53045.904 127.127.1.0 9014 0.000000000 0.000000000 0.000000000 0.000000954
55365 53047.023 126.96.36.199 9414 -0.000126195 0.079608226 0.002956711 0.010910166
55365 53049.961 188.8.131.52 9614 0.001047601 0.021774658 0.004612503 0.006862981
Based on some docs I found online, this appears to be the order of the fields for each file:
loopstats: day, second, offset, drift compensation, estimated error, stability, polling interval
peerstats: day, second, address, status, offset, delay, dispersion, skew (variance)
Does TIME_FORMAT support this notion of a timestamp split into separate day and seconds components like this?
I've been able to determine that the day field is a Modified Julian Day (MJD), and the seconds field is the number of seconds past midnight. I can get the correct timestamp if I use the following Python code (which relies on the mx.DateTime module):
def convert_timestamp(day, seconds):
    from mx.DateTime import DateTimeFromMJD, DateTimeDelta
    day = int(day)
    seconds = float(seconds)
    timestamp = DateTimeFromMJD(day) + DateTimeDelta(0, 0, 0, seconds)
    # Append milliseconds taken from the fractional part of the seconds value
    return timestamp.strftime("%Y-%m-%d %H:%M:%S") + (".%03d" % round((seconds % 1) * 1000))
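As an aside, mx.DateTime is an older third-party package; if you'd rather avoid the dependency, here is a minimal sketch of the same conversion using only the standard library (MJD day 0 corresponds to 1858-11-17):

```python
from datetime import datetime, timedelta

# Modified Julian Day 0 corresponds to 1858-11-17 00:00:00
MJD_EPOCH = datetime(1858, 11, 17)

def convert_timestamp(day, seconds):
    """Convert ntpd's MJD day + seconds-past-midnight pair into a
    human-readable timestamp with millisecond precision."""
    ts = MJD_EPOCH + timedelta(days=int(day), seconds=float(seconds))
    return ts.strftime("%Y-%m-%d %H:%M:%S") + ".%03d" % (ts.microsecond // 1000)

print(convert_timestamp("55365", "52995.979"))
```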
As a workaround, I've written a script (using the Python function above) to reformat the NTP day/seconds values into a more traditional timestamp format. Hopefully someday Splunk will support this type of custom time format more natively.
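Once the script has rewritten the events, a props.conf stanza along these lines should let Splunk parse the new timestamps (the sourcetype name here is hypothetical; %3N is Splunk's millisecond specifier):

```
[ntp:loopstats]
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 23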
For what it's worth, here are the field extractions for loopstats:
^(?P<date_mjd>\d+) (?P<sec_past_midnight>[0-9\.]+) (?P<clock_offset_sec>[0-9\.\-]+) (?P<frequency_offset_ppm>[0-9\.\-]+) (?P<jitter_sec>[0-9\.\-]+) (?P<wander_ppm>[0-9\.\-]+) (?P<clock_discipline>[0-9\.\-]+)
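If you want to sanity-check that extraction outside of Splunk, the same regex can be exercised with Python's re module against one of the sample loopstats events:

```python
import re

# Same named groups as the Splunk field extraction above
LOOPSTATS_RE = re.compile(
    r"^(?P<date_mjd>\d+) (?P<sec_past_midnight>[0-9\.]+) "
    r"(?P<clock_offset_sec>[0-9\.\-]+) (?P<frequency_offset_ppm>[0-9\.\-]+) "
    r"(?P<jitter_sec>[0-9\.\-]+) (?P<wander_ppm>[0-9\.\-]+) "
    r"(?P<clock_discipline>[0-9\.\-]+)"
)

event = "55365 184.755 0.005201000 -20.717 0.000455463 0.153665 6"
fields = LOOPSTATS_RE.match(event).groupdict()
print(fields["clock_offset_sec"])  # the loopstats offset column
```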
As far as I know, we're pretty closely tied to the strptime model.
We do handle timezone offsets, but not second offsets.
Please do file an enhancement request with support if this is important to you, especially if you can clue us in on which sources use this format.