It's a very messy environment and I think the client is challenging us, so here goes.
The client has many devices all around the globe that are not configured with the correct time zones, so we cannot rely on the timestamps in the logs. The logs are saved to files by syslog, into folders named after the source host.
We are supposed to find a way for Splunk to determine the time zones of the hosts and adjust the timestamps.
The best part is that the client doesn't even have a list of all their devices and their locations/time zones.
Our idea was to use the _time and _indextime fields, calculate the difference between them, round it to the nearest hour, and thus work out which time zone each device could be in; ideally, Splunk would then adjust _time at index time. I don't think that is possible, though, so I wanted to ask if anyone can come up with a good solution.
This is a complex issue, but there is a way around it, although it is a bit manual to set up.
First, I hope you are following best practices and using an external syslog receiver (Kiwi Syslog, rsyslog, syslog-ng, etc.) rather than taking syslog directly into Splunk. If not, please read this: http://wiki.splunk.com/index.php?title=Community:Best_Practice_For_Configuring_Syslog_Input&r=search...
Set up your syslog receiver to save each host to a separate folder like below.
log/syslog
|________server1
| |______server.log
|
|________server2
| |______server.log
...
Then set up your inputs.conf to collect it. Something like this:
[monitor:///log/syslog]
index = mysyslogIndex
host_segment = 3
sourcetype = syslog
The secret to time zones is props.conf on the forwarder, which should look something like this:
[source::/log/syslog/server1/*]
TZ = US/Eastern
[source::/log/syslog/server2/*]
TZ = GMT
[source::/log/syslog/server3/*]
# Etc/GMT-1 is UTC+1; POSIX-style Etc/GMT zone names invert the sign
TZ = Etc/GMT-1
You will have to add a stanza with a TZ for every server that isn't in the same time zone as your Splunk server, but this is a one-off setup and should work going forward. Remember, both of these files belong on the forwarder.
I've seen this issue multiple times, and there is no really good solution for fixing the problem at index time.
Yes, it would be possible to develop a lookup table that indicates what time zone a particular machine is in. However, that would never work backwards in time to fix your prior data.
If I recall correctly, in every search that would return any of the suspect records, we used
3600*round((_indextime-_time)/3600,0)
as the offset to add to _time to get the real _time. This basically assumed that every event would be processed within a certain amount of time, say 5 to 10 minutes, and it allowed for up to a 30-minute delay.
You can move the set point for the rounding and the offset based on your organizational assumptions; just add or subtract seconds inside the parentheses around (_indextime - _time).
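As a search-time sketch of that calculation (the index and sourcetype names are taken from the inputs.conf example earlier in the thread and are assumptions; adjust to your environment):

```
index=mysyslogIndex sourcetype=syslog
| eval offset=3600*round((_indextime-_time)/3600,0)
| eval _time=_time+offset
| timechart count
```

Overwriting _time with eval like this only affects the running search; the indexed data is untouched.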
I'm just going to leave the question open for now to get some more attention, see how other people have dealt with this issue, and eventually vote for an answer.
Yes, that is pretty similar to what we figured out as the most appropriate solution.
We were also thinking of having Splunk extract _time from the timestamp on the event from the original device (not the syslog timestamp that is added, as we saw some delays there), then calculate the difference between _indextime and _time and round it (probably to 30 minutes, as I think the client has devices in India, which has half-hour time zone offsets).
What we then thought was to export the list of hosts and their time offsets into a CSV and, based on it, add stanzas like the following to props.conf:
[host::Host_name_1]
TZ = <timezone that matches the offset>
That way Splunk would adjust _time at index time, but as you said, we could also create a new field equal to _time plus the calculated offset and base any timecharts on that rather than the regular timestamp.
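As a sketch, the per-host offset table could be generated with a search like the one below (the index name comes from the earlier inputs.conf example, and host_tz_offsets.csv is a hypothetical lookup name). It rounds to 30 minutes to allow for half-hour time zones and keeps the most common rounded offset per host:

```
index=mysyslogIndex
| eval offset=1800*round((_indextime-_time)/1800,0)
| stats count by host, offset
| sort 0 host, -count
| dedup host
| table host, offset
| outputlookup host_tz_offsets.csv
```

The resulting CSV could then be translated by hand into the per-host TZ stanzas described above.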
Hi,
It is possible to adjust time zones, and perhaps tinker with the times after indexing as well, but what we would like to see first is a sample of your logs, with multiple devices spread across various time zones if possible. It is very hard to see what can be done without sample data.
Like @Sukisen1981 mentioned here, sample data would be ideal.
For me, the question is not entirely clear: are the syslog servers configured correctly? If so, you can figure out pretty quickly what time zone each device that reports to syslog is in.
Here is how:
Make sure Splunk is configured to use the index time as the event timestamp.
The syslog server adds its own timestamp to the events it receives, so you should then have three timestamps: (a) _time (equal to _indextime, since you configured Splunk that way), (b) the syslog timestamp (in the log), and (c) the device timestamp (also in the log). Compare them however you see fit to discover the original time zone.
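A minimal sketch of that first step, assuming the syslog sourcetype from the earlier inputs.conf example: the props.conf setting DATETIME_CONFIG = CURRENT tells Splunk to skip timestamp extraction and use the current (index) time as _time.

```
[syslog]
DATETIME_CONFIG = CURRENT
```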
hope it helps
I'll try to provide some anonymized samples.
Logs below:
Oracle Audit logs:
Apr 10 00:00:02 XXXXXXX crit Oracle Audit[56361294]: LENGTH: "250" SESSIONID:[8] "24754003" ENTRYID:[1] "1" STATEMENT:[1] "1" USERID:[10] "XXXXXXX" USERHOST:[8] "XXXXXX" ACTION:[3] "100" RETURNCODE:[1] "0" COMMENT$TEXT:[20] "Authenticated by: OS" OS$USERID:[6] "XXXXXXX" DBID:[10] "1731194098" PRIV$USED:[1] "5"
SAP db data:
Apr 11 16:35:41 XXXXXXXXXX HDB_SYSTEMDB[41189]: 2018-04-11 16:35:41;nameserver;hananode01;DGH;00;30001;SYSTEMDB;;;0;0;Security_Audit;INFO;SELECT;SYSTEM;SYS;M_CONNECTIONS;;;;;SUCCESSFUL;;;;;;;SELECT CONNECTION_ID, STATEMENT_ID, START_MVCC_TIMESTAMP FROM M_ACTIVE_STATEMENTS WHERE (STATEMENT_STATUS = 'ACTIVE' OR STATEMENT_STATUS = 'SUSPENDED') AND START_MVCC_TIMESTAMP > 0 AND SECONDS_BETWEEN(LAST_EXECUTED_TIME, CURRENT_TIMESTAMP) >= ? AND HOST = ? AND PORT = ?,(10, 'XXXXXXX', 30001);125219;;;;;;
Cisco logs:
Apr 11 16:21:09 XXXXXXXXXXX: 2018 Apr 11 14:21:09 UTC: %AUTHPRIV-6-SYSTEM_MSG: START: ssh pid=22378 from=::ffff:10.167.218.233 - dcos-xinetd[3253]
Windows Snare:
Apr 9 02:00:01 XXXXXXXXXXXXXXXXX MSWinEventLog 4 Security 4220128 Mon Apr 09 02:00:00 2018 4688 Microsoft-Windows-Security-Auditing -\- N/A Success Audit XXXXXXXXXXXXXXXXXX Process Creation A new process has been created. Creator Subject: Security ID: S-1-5-18 Account Name: XXXXXXXXXXXXXX Account Domain: XXXXXXXXX Logon ID: 0x3E7 Target Subject: Security ID: S-1-0-0 Account Name: - Account Domain: - Logon ID: 0x0 Process Information: New Process ID: 0x6b1c New Process Name: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe Token Elevation Type: TokenElevationTypeDefault (1) Creator Process ID: 0xad54 Process Command Line: Token Elevation Type indicates the type of token that was assigned to the new process in accordance with User Account Control policy. Type 1 is a full token with no privileges removed or groups disabled. A full token is only used if User Account Control is disabled or if the user is the built-in Administrator account or a service account. Type 2 is an elevated token with no privileges removed or groups disabled. An elevated token is used when User Account Control is enabled and the user chooses to start the program using Run as administrator. An elevated token is also used when an application is configured to always require administrative privilege or to always require maximum privilege, and the user is a member of the Administrators group. Type 3 is a limited token with administrative privileges removed and administrative groups disabled. The limited token is used when User Account Control is enabled, the application does not require administrative privilege, and the user does not choose to start the program using Run as administrator. 4220127
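For the Cisco sample above, which conveniently embeds the device's own UTC timestamp, the comparison could be sketched with a search like this (index and sourcetype names are assumptions carried over from the earlier example, and each log format would need its own extraction). Appending "+0000" lets strptime parse the extracted string as UTC:

```
index=mysyslogIndex sourcetype=syslog "%AUTHPRIV"
| rex "^\w+\s+\d+\s+[\d:]+\s+\S+:\s+(?<device_time>\d{4} \w{3} \d+ [\d:]+) UTC"
| eval device_epoch=strptime(device_time." +0000", "%Y %b %d %H:%M:%S %z")
| eval offset=1800*round((_time-device_epoch)/1800,0)
| stats values(offset) AS offsets by host
```

A stable non-zero offset per host would then point at that host's time zone.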