Splunk Search

Date_Hour Question

_gkollias
SplunkTrust

Hi All,

I have a search which I am adding date_hour to a table:

...| stats count as 1week_ago_count by qos, date_hour

When the data is on the table, the date and time show as the following:

09/10/2013 00:00:00:000

Is there a way to clean up those zeros; either getting rid of them or adding actual time?

Thanks!

1 Solution

zeroactive
Path Finder

From what I understand, "date_hour" should just contain the number of the hour for the event, i.e., if the event occurred at 05:23:18, "date_hour" should contain "5" (maybe "05").

You could get the same result by doing the following:

... | eval mytime=_time | convert timeformat="%H" ctime(mytime) | stats count as 1week_ago_count, avg(mytime) by qos

That will copy the event _time field value into "mytime", which you can then format to include just the "hour" part of the time value (%H).

"stats" wants to do calculations on fields, so I provided the "average" value for mytime in the query above. Because of this, you may want to use "table" instead of "stats".
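Outside of Splunk, the hour extraction that convert timeformat="%H" performs can be sketched in Python (a rough analogy, not Splunk's implementation; the timestamp below is a hypothetical example):

```python
import time

def hour_of(epoch, use_utc=False):
    """Format an epoch timestamp as its two-digit hour, like convert timeformat="%H"."""
    tm = time.gmtime(epoch) if use_utc else time.localtime(epoch)
    return time.strftime("%H", tm)

# Hypothetical event time: 2013-09-10 05:23:18 UTC
print(hour_of(1378790598, use_utc=True))  # "05"
```

Note that this yields a string like "05" rather than a number, which is part of why sorting or averaging it inside stats behaves differently from a numeric field.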


lguinn2
Legend

I'm going to propose a new answer, because I got lost in the other thread.

I never use the date_* fields. For one thing, they don't always exist. More importantly, they are derived directly from the text in the event, so they don't consider timezone!
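The timezone point can be illustrated outside Splunk: the same epoch time formats to a different hour depending on the timezone applied (a minimal Python sketch with a hypothetical timestamp; the fixed UTC-4 offset stands in for US Eastern daylight time):

```python
from datetime import datetime, timezone, timedelta

ts = 1378790598  # hypothetical event time: 2013-09-10 05:23:18 UTC
utc_hour = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%H")
edt = timezone(timedelta(hours=-4))  # US Eastern daylight time offset
edt_hour = datetime.fromtimestamp(ts, tz=edt).strftime("%H")
print(utc_hour, edt_hour)  # 05 01
```

A date_hour field parsed from the raw event text shows whichever hour the source system happened to write, while strftime(_time, "%H") applies one consistent timezone conversion at search time.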

Whenever I want to manipulate time, I compute what I need. I usually use the strftime function, although I guess convert might work as well.

I would write your search as

index=esb_hourly_summary_prf source=esb_hourly_summary_prf earliest=-7d@w0 latest=@w0 
| localop 
| lookup host_info_lookup host as orig_host 
| eval hour = strftime(_time,"%H")
| stats count as 1week_ago_count by qos hour

If what you want is to preserve the entire date, but truncate the minutes and seconds, try this

index=esb_hourly_summary_prf source=esb_hourly_summary_prf earliest=-7d@w0 latest=@w0 
| localop 
| lookup host_info_lookup host as orig_host 
| bucket _time span=1h
| stats count as 1week_ago_count by qos _time
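The `bucket _time span=1h` step amounts to truncating each timestamp to the start of its hour; as epoch arithmetic that looks like this (a rough Python sketch, not Splunk's implementation; the timestamp is a hypothetical example):

```python
def bucket_hour(epoch, span=3600):
    """Truncate an epoch timestamp to the start of its span, like bucket _time span=1h."""
    return epoch - (epoch % span)

ts = 1378790598  # hypothetical event time: 2013-09-10 05:23:18 UTC
print(bucket_hour(ts))  # 1378789200, i.e. 05:00:00 that day
```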

Or even

index=esb_hourly_summary_prf source=esb_hourly_summary_prf earliest=-7d@w0 latest=@w0 
| localop 
| lookup host_info_lookup host as orig_host 
| timechart span=1h count as 1week_ago_count by qos

_gkollias
SplunkTrust

I think my original question actually has nothing to do with date_hour, but with info_min_time and info_max_time. Those are what produce the date and time with the 00:00:00:000 shown above. Either way, your answers have been very helpful!



zeroactive
Path Finder

Thanks Kristian. I updated my answer to focus on the field usage in the stats command. It was a bad example. I was trying to break down what I thought was wrong into different parts.


zeroactive
Path Finder

Note my comment at the end of the answer: "'stats' wants to do calculations on fields, so I provided the 'average' value for mytime in the query above. Because of this, you may want to use 'table' instead of 'stats'."


_gkollias
SplunkTrust

Sorry, the "I" looks like a pipe. The "I use appendcols for 2w ago and 3w ago" means: follow the same search as

index=esb_hourly_summary_prf source=esb_hourly_summary_prf earliest=-7d@w0 latest=@w0
| localop
| lookup host_info_lookup host as orig_host
| stats count as 1week_ago_count, date_hour by qos

Maybe the sort qos, num(date_hour) is what's making it screwy?


_gkollias
SplunkTrust

I tried this and got a red-bar error: Error in 'stats' command: The argument 'date_hour' is invalid.

Maybe showing more of my query will help:

index=esb_hourly_summary_prf source=esb_hourly_summary_prf earliest=-7d@w0 latest=@w0
| localop
| lookup host_info_lookup host as orig_host
| stats count as 1week_ago_count, date_hour by qos

I use appendcols for 2w ago and 3w ago -
| stats
first(info_min_time) as info_min_time,
first(info_max_time) as info_max_time,
first(1week_ago_count) as 1week_ago_count,
| addinfo
| convert ctime(info_min_time), ctime(info_max_time)
| sort qos, num(date_hour)
