I currently have the following in my props.conf (real values were replaced by x's) which matches the names of all my ESXi hosts:
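The actual stanza was redacted, but a TZ rule of the kind described generally takes this shape (the host pattern and timezone below are placeholders, not the asker's real values):

```ini
# props.conf -- hypothetical sketch; the real host pattern was replaced by x's
# Match ESXi hosts by name and tell Splunk what timezone their timestamps use
[host::esxi*.example.com]
TZ = UTC
```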
What I'm finding is that some events from the hosts show up with the proper timestamps (e.g. UTC adjusted to EST), while others show up with different timestamps; it appears the time is being adjusted twice. For the following event, the raw event text is correct and shows the proper timestamp adjustment, but the timestamp Splunk lists for the event when searching is 9/13/12 8:36:20.000 AM, which has been adjusted backwards by an additional 4 hours:
Sep 13 12:36:20 hostname.xxx.xxx Sep 13 16:36:20 Hostd: [2012-09-13 16:36:20.418 56682B90 info 'ha-eventmgr']
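Note that this single event carries three timestamps: the syslog header time (local), the ESXi-written time (UTC), and the bracketed ISO timestamp from Hostd. If Splunk's automatic timestamp recognition picks a different one per event, the results look exactly like a double adjustment. One way to remove the ambiguity (a sketch under assumed values, not the asker's actual config) is to pin the extraction to one specific timestamp in props.conf:

```ini
# props.conf -- hypothetical sketch; stanza name and TZ are placeholders
[host::esxi*.example.com]
# Anchor extraction to the ISO timestamp inside the square brackets
TIME_PREFIX = \[
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
# Don't scan past the anchored timestamp for other date-like strings
MAX_TIMESTAMP_LOOKAHEAD = 32
TZ = UTC
```

With TIME_PREFIX set, Splunk only considers text after the first `[` when parsing the time, so the two syslog-style timestamps at the start of the line are ignored.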
I'm also receiving events from the hosts, like the following, that appear to be part of a multiline event (they contain no date/time values in the event text); these show up with the proper timestamps when searching (for example, when searching events from the past 15 minutes):
xx.xxx.xxx.xxx: icmp_seq=0 ttl=255 time=0.447 ms
Any ideas where I should look to figure out what is happening? I have one host that is currently coming in under its IP address instead of its host name (and thus isn't matching the timezone rule in props.conf), and its events show up with the proper timestamp.
If I remove the entry from the props.conf, the events that contain no date/time information end up receiving timestamps in the future (basically the UTC value).
How does your indexer receive the events?
Have you configured a UDP input for this, or do you have a syslog service running somewhere that writes the events to a file, which a forwarder then picks up?
If so, what timezone is used on that host with the syslog service on it?
Which timezones are set on the other hosts involved (indexer and ESXi hosts)?
Also, look at your data in "real time, all time" and see at what time the events really come in.
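One quick way to quantify the skew is to compare each event's parsed time with the time it was actually indexed, using Splunk's default `_time` and `_indextime` fields (the `host=esxi*` filter here is a placeholder for whatever matches your hosts):

```
index=* host=esxi* | eval lag_seconds = _indextime - _time | table _time _indextime lag_seconds
```

Events whose timestamps were adjusted twice should show a lag close to a whole number of hours (about 14400 seconds for the 4-hour shift described above), while correctly parsed events should show a lag near zero.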
How can I apply the props settings above to multiple ESX host IPs (e.g. 10.10.10.26, 10.10.10.35, 10.10.10.135) without the settings also applying to non-ESX hosts in the same IP range?
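`[host::...]` stanzas in props.conf accept wildcards but not comma-separated lists, so a wildcard like `[host::10.10.10.*]` would catch every host in the range. One way to target only the ESX hosts is a separate stanza per IP (the `TZ` value here is a placeholder for whatever the redacted config uses):

```ini
# props.conf -- one stanza per ESXi host IP, so other hosts in the
# same range are left untouched
[host::10.10.10.26]
TZ = UTC

[host::10.10.10.35]
TZ = UTC

[host::10.10.10.135]
TZ = UTC
```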
The indexer receives the events directly via syslog on UDP port 514.
All the hosts exist in the same data center as the Splunk server.
The raw events have the proper time information in them, but the time recorded by the indexer is what is incorrect.