Splunk Search

How to convert microseconds to date time unit in Splunk?

johnbernal553
New Member

I have a log event like this:

Timestamp: 1477292160453180 537

The number 1477292160453180 is the number of microseconds since the Epoch: 1970-01-01 00:00:00 +0000 (UTC). Which in this case comes out to October 24, 2016.
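To sanity-check that arithmetic outside Splunk, here is a minimal Python sketch (the literal value is taken from the log line above; `timedelta` keeps the math in exact integer microseconds):

```python
from datetime import datetime, timedelta, timezone

raw_us = 1477292160453180  # microseconds since the Unix epoch, from the log line

# Add the microseconds to the epoch via timedelta so the arithmetic stays in
# exact integers (dividing by 1e6 first would round through a float).
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
ts = epoch + timedelta(microseconds=raw_us)
print(ts.strftime("%Y-%m-%d %H:%M:%S.%f"))  # 2016-10-24 06:56:00.453180
```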

How do I perform this conversion from microseconds to a time unit in Splunk? Here's my current search:

* | rex field=_raw "Timestamp:\s(?<request_time>\d+)\s(?<response_time>\d+)" | eval stripped_time=strptime(request_time,"%s%3N")

But that's giving me a table of weirdly formatted stripped_time values.

0 Karma
1 Solution

javiergn
Super Champion

Are you sure 1477292160453180 is milliseconds and not microseconds?
In any case, try the following instead:

| rex field=_raw "Timestamp:\s(?<request_time_secs>\d+)(?<request_time_microsecs>\d{6})\s(?<response_time>\d+)" 
| eval request_time = tonumber(request_time_secs + "." + request_time_microsecs)
| fieldformat request_time = strftime(request_time, "%Y-%m-%d %H:%M:%S.%6N")
| eval _time = request_time

Example:

| stats count | fields - count
| eval request_time = 1477292160453180
| rex field=request_time "(?<request_time_secs>\d+)(?<request_time_microsecs>\d{6})"
| eval request_time = tonumber(request_time_secs + "." + request_time_microsecs)
| fieldformat request_time = strftime(request_time, "%Y-%m-%d %H:%M:%S.%6N")
| eval _time = request_time

Output (screenshot from the original post, not reproduced here).
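For readers without a Splunk instance handy, the same seconds/microseconds split can be sketched in Python (the sample line and capture names mirror the rex above; this is an illustration, not Splunk's implementation):

```python
import re
from datetime import datetime, timezone

raw = "Timestamp: 1477292160453180 537"  # sample event from the question

# Greedy \d+ followed by exactly six digits splits the value into whole
# seconds and the trailing microseconds, just like the rex does.
m = re.search(r"Timestamp:\s(?P<secs>\d+)(?P<microsecs>\d{6})\s(?P<response_time>\d+)", raw)

secs, usecs = int(m["secs"]), int(m["microsecs"])
ts = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=usecs)
print(ts.strftime("%Y-%m-%d %H:%M:%S.%f"))  # 2016-10-24 06:56:00.453180
print(m["response_time"])                   # 537
```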


johnbernal553
New Member

This worked perfectly. And yes it was microseconds and not milliseconds. Thank you!

0 Karma

twinspop
Influencer
* | rex "Timestamp:\s(?<request_time>\d+)\s(?<response_time>\d+)" | eval stripped_time=strftime(request_time/1000000,"%Y-%m-%d %T %z")

EDIT: Based on other comments:

 * | rex "Timestamp:\s(?<request_time>\d+)\s(?<response_time>\d+)" | eval _time=request_time/1000000 | timechart ...
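The divide-by-one-million step can also be checked outside Splunk; a quick Python sketch (assuming, as the thread later confirms, that the raw value is microseconds):

```python
import time

raw_us = 1477292160453180           # microseconds from the log line
epoch_seconds = raw_us / 1_000_000  # what _time expects: seconds since the epoch

# Render in UTC; Splunk would apply the search-time timezone instead.
print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(epoch_seconds)))  # 2016-10-24 06:56:00
```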

rjthibod
Champion

beat me to it 😉

0 Karma

gcusello
SplunkTrust
SplunkTrust

Hi johnbernal553,
what do you want as output: an epoch time or a human-readable format?
With strptime you get an epoch time; to get a human-readable time you must use strftime.
Bye.
Giuseppe

0 Karma

johnbernal553
New Member

But this isn't formatting the x-axis properly. Basically, I want the days as a continuous timeline on the x-axis and the response times (the 537 in this event) on the y-axis, so that I can see my application's response times over the past X days.

0 Karma

johnbernal553
New Member

I want to convert the microseconds to a human-readable date so that I can do ... | eval _time=stripped_time | timechart...

0 Karma

twinspop
Influencer

_time is not human-readable; Splunk just renders it that way when you use it in a table. What you want is _time in epoch seconds. ... | eval _time=request_time/1000000 | timechart ...

0 Karma