Metrics Index - How to get the metric_timestamp field value in search results

shadabgaur
New Member

I uploaded a CSV file into a metrics index. I can see the index's data, so there is no issue there.

My question is:
I want to get metric_timestamp in a search query so I can perform some action on it. Is it possible to use the "metric_timestamp" field in mstats commands? I always get an error whenever I try to apply a statistical function (e.g. latest) to "metric_timestamp" or use it as a dimension field (WHERE index=xyz BY metric_timestamp).

mcatalog just displays the schema, not the values of the "metric_timestamp" field.
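
For example, a search along these lines (just a sketch; the index name is a placeholder) only gives me the metric and dimension names, never any timestamp values:

| mcatalog values(metric_name), values(_dims) WHERE index="xyz"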

Any help is greatly appreciated. Thanks!

1 Solution

thaggie_splunk
Splunk Employee

For mstats to project by time, you need to give it a span, so use queries of the form:

| mstats avg("abc") WHERE index="xyz" span=10s

It's not possible to aggregate time, so you can't do things like latest("_time").
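
As a sketch using your names (metric abc, index xyz): with a span, every result row carries a _time bucket that downstream commands can read, for example:

| mstats avg("abc") AS avg_abc WHERE index="xyz" span=10s
| table _time, avg_abc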


shadabgaur
New Member

Thanks for the quick response, Thaggie. So we cannot use the "metric_timestamp" field anywhere in our search except for spanning the chart based on it.


thaggie_splunk
Splunk Employee

That's right.
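
For example (a sketch, reusing the abc/xyz names from above), you can still recover the time of the last datapoint by spanning and then aggregating the _time buckets that come back:

| mstats count("abc") AS datapoints WHERE index="xyz" span=1m
| stats max(_time) AS last_seen
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")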
