Splunk Search

Help with time conversion needed

damucka
Builder

Hello,

We have a chart in a dashboard where the x-axis is time. We defined a drilldown in which the $ts$ token should carry the timestamp of the point clicked on the line chart.

The point is that we need $ts$ to be a Unix timestamp reflecting the user's local time, but what comes through is always UTC.

How would I transform the $ts$ token so that it represents the user's local time in Unix timestamp form?

Kind Regards,

Kamil


johnhuang
Motivator

What is the format of time in x axis -- is it stored as a string or as _time? How are you rendering the chart -- using timechart? What's the interval/bin?


damucka
Builder

...

| eval _time=strptime(TIMESTAMP,"%Y%m%d%H%M")
| timechart span=10m avg(AVG_WAITTIME) as "Avg. Wait Time ms",
avg(AVG_RESPTIME) as "Avg. Response Time ms"

Then I take $click.value$ from the chart, and the token arrives shifted by the user's timezone.


johnhuang
Motivator

Try this:

 

| eval _time=strptime(TIMESTAMP."+00","%Y%m%d%H%M%z")
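
For context, here's roughly how that would slot into the earlier search - a sketch only, assuming the TIMESTAMP strings coming back from the database are UTC wall-clock times:

| eval _time=strptime(TIMESTAMP."+00","%Y%m%d%H%M%z")
| timechart span=10m avg(AVG_WAITTIME) as "Avg. Wait Time ms",
avg(AVG_RESPTIME) as "Avg. Response Time ms"

The %z in the format string makes strptime honour the appended "+00" offset instead of assuming the timezone of the user running the search.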

damucka
Builder

The %z makes _time CET (my timezone / the user's timezone) rather than UTC, which is not what I want. Let me present the same issue using another dashboard as an example:

[screenshot: damucka_0-1637833724644.png]

The upper panel is in UTC, and it should be, because the system being monitored in this case runs in UTC/GMT. The time selection of 6pm - 8pm then leads to the wrong display in the lower panel (5pm - 7pm), because the time token gets automatically corrected by the user's timezone setting, which is CET. What I assume happens here is that the dashboard "thinks" the clicked/chosen values are in CET and automatically subtracts 1h to make them UTC, which is wrong.

What would be the best way to correct this? I assume I need some trick on the time tokens in the code to strip the user's timezone correction and stay in UTC, i.e. the original time of the monitored system.


johnhuang
Motivator

How did you validate the timezone of _time? If you're not sure how to validate it:

| eval epoch_time = _time

and take the value and paste it into an online Unix/epoch time converter.
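
If you'd rather stay inside Splunk for the check, here's a self-contained sketch (makeresults just produces one dummy event; substitute the epoch value you got from your panel):

| makeresults
| eval epoch_time=1637850000
| eval rendered=strftime(epoch_time,"%Y-%m-%d %H:%M:%S %z")

rendered shows the epoch value in the timezone from your user preferences, so comparing it against an online UTC converter makes any offset obvious.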


damucka
Builder

I added your validation to the search of my first panel; it is UTC (see below, 1637850000 = real time: 25.11.2021 - 15:20:00 according to the converter):

| noop search_optimization=false
| dbxquery query="select to_char(SERIES_ROUND(timestamp, 'INTERVAL 5 MINUTE'), 'YYYY-MM-DD HH24:MI:SS') as TIMESTAMP, host, avg(cpu) as cpu from zspl_hsrvload_s where timestamp between '2021-11-24 15:00:00.0' and '2021-11-25 15:18:32.0' and sid_context_id = 'FA163EB49B521EEC91DD1B39CDB842DC' group by to_char(SERIES_ROUND(timestamp, 'INTERVAL 5 MINUTE'), 'YYYY-MM-DD HH24:MI:SS'), host" connection="HANA_S4_FRUN_FHP"
| eval _time=strptime(TIMESTAMP,"%Y-%m-%d %H:%M:%S")
| timechart span=5m avg(CPU) BY HOST
| eval epoch_time = _time

[screenshot: damucka_0-1637853798387.png]

In the lower panel it gets shifted and is one hour earlier - you can see it in the values passed to the SQL statement (1637846400 = 2021-11-25 14:20:00):

| noop search_optimization=false
| dbxquery query="select to_char(SERIES_ROUND(timestamp, 'INTERVAL 5 MINUTE'), 'YYYY-MM-DD HH24:MI:SS') as TIMESTAMP, host, avg(CPU) as METRIC from zspl_hsrvload_s where timestamp between '2021-11-24 14:00:00.0' and '2021-11-25 14:20:00.0' and datum between '20211124' and '20211125' and sid_context_id = 'FA163EB49B521EEC91DD1B39CDB842DC' group by to_char(SERIES_ROUND(timestamp, 'INTERVAL 5 MINUTE'), 'YYYY-MM-DD HH24:MI:SS'), host " connection="HANA_S4_FRUN_FHP"
| eval _time=strptime(TIMESTAMP,"%Y-%m-%d %H:%M:%S")
| timechart span=5m avg(METRIC) BY HOST
| eval epoch_time = _time

[screenshot: damucka_1-1637854299858.png]

 


johnhuang
Motivator

The actual value stored in _time (GMT/UTC Unix) is "1637846400", which actually converts to 2021-11-25 13:20:00 GMT.

Your UI displays _time by converting "1637846400" plus your local timezone offset into a human-readable format.

 


PickleRick
SplunkTrust
$ date -d @1637850000
Thu Nov 25 15:20:00 CET 2021

Where do you have this UTC then?

If you cannot return a proper timestamp from the database (are you absolutely sure that you can't return a decently formatted timestamp?), and the actual timestamp returned from the database is in a different timezone than the one it's being parsed into, you should - as I said before - append a timezone designation to it before parsing. It's a bit ugly, but it should keep the timestamps consistent.
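
Applied to the "YYYY-MM-DD HH24:MI:SS" strings coming out of the dbxquery panels above, that would look something like this (a sketch, and only valid if those strings really are UTC):

| eval _time=strptime(TIMESTAMP." +0000","%Y-%m-%d %H:%M:%S %z")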


PickleRick
SplunkTrust

As I said before - the timestamps are rendered in the timezone set for the logged-in user. That's how it is. So you can't simply have "times displayed in UTC".

If the chart says "12:34:17", it's in your local timezone unless you did some nasty trick de facto making the underlying timestamp wrong.

There are several ideas concerning timezone manipulation open on ideas.splunk.com - you can second some of them - maybe Splunk will implement some manipulation of timezones in the future.

So, if your user has EST, DST, CET or any other timezone set, your panel - if it's simple strftime formatting - does not show the UTC time of the real timestamp.

As I already wrote - you can't (at this time) render the time in another timezone. So if your string representation of a given timestamp is "proper" for another timezone, the underlying timestamp must be wrong.

In other words - again - when the timestamp is parsed from the original event, it gets stored as a numerical value representing the number of seconds since the Epoch.

So if I have a date of "25 Nov 2021 10:52:22 CET", it's gonna get parsed and stored as 1637833942.

And if my user has his TZ set in preferences to CET, it's always gonna get rendered as 10:52:22.

If I want it to show a "human readable" representation that would correspond to the hour in another timezone, I'd have to add or subtract an appropriate number of seconds to that timestamp and call strftime without the timezone. But the "base" timestamp would then point to a completely different point in time.
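
A sketch of that display-only workaround, assuming the viewer's current offset is a good enough approximation (it ignores DST changes between the event time and now):

| eval tz_str=strftime(now(),"%z") ``` viewer's current UTC offset, e.g. "+0100" ```
| eval tz_sign=if(substr(tz_str,1,1)="-",-1,1)
| eval tz_offset=tz_sign*(tonumber(substr(tz_str,2,2))*3600+tonumber(substr(tz_str,4,2))*60) ``` offset in seconds ```
| eval utc_display=strftime(_time-tz_offset,"%Y-%m-%d %H:%M:%S") ``` reads as UTC wall-clock time when rendered locally ```

utc_display then reads as UTC even though it is rendered in the viewer's timezone, but _time - tz_offset is of course a different point in time, so it should only ever be used for display.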

Sorry, but at this point there is no other way that I know of to "change" the timezone (which has its pros, because it would otherwise be easy to create badly written dashboards with confusing timezone settings, leading people to think they are in another timezone). Time manipulation is tricky.


johnhuang
Motivator

Here's what I think the issue is. When you use strptime to convert a timestamp value to _time, it assumes the timestamp reflects the user's local timezone and converts it into a Unix/epoch UTC value.

So if the timestamp was already UTC, strptime still applies the user's local-timezone offset during the conversion, which introduces the shift.
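
A quick self-contained way to see that effect (the numbers depend on the timezone of whoever runs it):

| makeresults
| eval t_bare=strptime("2021-11-25 13:20:00","%Y-%m-%d %H:%M:%S")
| eval t_utc=strptime("2021-11-25 13:20:00 +0000","%Y-%m-%d %H:%M:%S %z")
| eval diff_seconds=t_bare-t_utc

For a CET user, diff_seconds comes out as -3600: the bare string is treated as 13:20 local time, which is an earlier point in time than 13:20 UTC.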

 

 

 


PickleRick
SplunkTrust

Well, that's understandable. You're doing strptime without parsing the timezone (I suppose there is no info about it in the time string), so it gets parsed with your user's timezone. If you want it to be parsed as UTC (or any other pre-set zone), add that info to the timestamp string and parse it with strptime using a format that includes the timezone.


PickleRick
SplunkTrust

What do you mean by "unix format in the local time"? A Unix timestamp has nothing to do with any timezone. It just represents a point in time - the number of seconds that have passed since the Epoch. So if you have a Unix timestamp, you can render it in human-readable form with any timezone specification, but it's still the same point in time.


damucka
Builder

Yes, but this is precisely what I want:

- Let $ts$ stay as it is and represent the number of seconds since the Epoch, etc. But then I need another variable, derived from $ts$, that reflects the timestamp plus the timezone shift, and it should also be in Unix timestamp form. That will then be a different timestamp, fine.

How would I do such a transformation / evaluate a new variable to get this?

I know that from the perspective of Splunk and log search it perhaps does not make much sense, but I am going to use this variable to select from my database via | dbxquery using a Unix timestamp, and for that purpose I need it in that form.
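
For reference, something along these lines is what I imagine (just a sketch with made-up field names; I am not even sure whether the offset needs to be added or subtracted in my case):

| eval ts_orig=$ts$ ``` the token as handed over by the dashboard ```
| eval tz_str=strftime(now(),"%z") ``` viewer's current UTC offset, e.g. "+0100" ```
| eval tz_sign=if(substr(tz_str,1,1)="-",-1,1)
| eval tz_offset=tz_sign*(tonumber(substr(tz_str,2,2))*3600+tonumber(substr(tz_str,4,2))*60)
| eval ts_shifted=ts_orig+tz_offset ``` Unix value with the viewer's offset added back ```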


PickleRick
SplunkTrust

I still don't understand the concept of "timestamp plus timezone shift and have this as timestamp".

You can of course add/subtract any number of seconds to shift the timestamp into the future or the past, but that will make the timestamp point to another moment in time. So if your other system stores the events with the wrong timestamp, something seems to be wrong there.


damucka
Builder

Okay, you have a point.

What I still do not understand is why the dashboard chart accepts _time (the x-axis) being UTC and also displays it in UTC, but when I click on the chart and want to drill down using the token derived from _time ($click.value$), it automatically applies the timezone correction to this token, which in my case means subtracting one hour. That is then wrong.

Is there any way to tell $click.value$ to take the value as it is, without any transformations?


johnhuang
Motivator

The dashboard is not displaying UTC, it is displaying the user's timezone. The offset was introduced during the strptime(TIMESTAMP) -> _time conversion.


PickleRick
SplunkTrust

If you're operating on timestamps, there is no correction. You have an integer representing the number of seconds since the Epoch, and that's it. Splunk only renders it (with fieldformat, or by default if the field is called _time) in your local timezone.
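
To make the rendering point concrete, a minimal sketch - the displayed string differs per viewer, the stored integer does not:

| makeresults
| eval epoch_time=1637846400
| fieldformat epoch_time=strftime(epoch_time,"%Y-%m-%d %H:%M:%S %z")

fieldformat only changes how the value is shown in the results table; underneath, epoch_time stays the plain integer, which is why no "correction" ever happens at the timestamp level.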

The behaviour you're describing suggests that there's something else going on "underneath" that manipulates the date as a string, losing the TZ info in the process.
