Getting Data In

How to access "_indextime" to calculate "latency" in "metrics index"?

woodcock
Esteemed Legend

For an events index, I would do something like this:

|tstats max(_indextime) AS indextime
WHERE index=_* OR index=*
BY index sourcetype _time
| stats avg(eval(indextime - _time)) AS latency BY index sourcetype
| fieldformat latency = tostring(latency, "duration")
| sort 0 - latency

I know that _indextime must be a field in a metrics index, too, but accessing time fields is complicated, as evidenced by the special earliest_time() and latest_time() functions. I have tried everything to access both _time and _indextime in a metrics index (with both mstats and mcatalog) and have failed. Is there any way to do it?
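
For reference, this is the kind of query I expected to work (the index name here is just a placeholder), but it errors out / returns nothing:

|mstats max(_indextime) AS indextime WHERE index=my_metrics_index BY metric_name span=1m
| eval latency = indextime - _time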

1 Solution

woodcock
Esteemed Legend

Support confirmed both of these:

1: Sorry to be the bearer of bad news here, but we don’t store _indextime for metric indexes, and this means that currently there isn’t a way to measure metric index latency. We might be able to make per-source latency numbers available in metrics.log in the future – stay tuned.
2: Thanks for this feedback. You are correct, the walklex command only applies to events not metrics. It does mention the tsidx but I don’t know if many users will understand that tsidx applies to event indexes. I’ll make it explicit.

Therefore the only other options are real-time search (which, if you have a good admin, has been thoroughly disabled) or something like this:
Create a scheduled search with a cron schedule of * * * * * so that it runs every minute, and have it look like this:

|mstats latest_time(_value) AS _time WHERE index="*" BY host
| stats max(_time) AS _time BY host
| eval now = now()
| eval latency = now - _time
| fieldformat latency = tostring(latency, "duration")
| fieldformat now = strftime(now, "%F %T")
| where latency > 60
| outputlookup append=t YOUR_LATENCY_TEST_FILE.csv
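
Because append=t makes the CSV grow forever, you may also want a second scheduled search (daily, for example) to trim it. This is just a sketch, using the same placeholder lookup name:

|inputlookup YOUR_LATENCY_TEST_FILE.csv
| where tonumber(_time) > relative_time(now(), "-7d@d")
| outputlookup YOUR_LATENCY_TEST_FILE.csv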

Then you can check later with this:

|inputlookup YOUR_LATENCY_TEST_FILE.csv
| stats max(now) AS now BY host _time
| eval latency = now - _time
| rename _time AS time
| sort 0 - latency
| fieldformat time = strftime(time, "%c")
| fieldformat now = strftime(now, "%c")
| eval latency = tostring(latency, "duration")
| stats list(*) AS * BY host
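
If you would rather alert on this than review the file by hand, a simple variant (the 300-second threshold is arbitrary) would be something like:

|inputlookup YOUR_LATENCY_TEST_FILE.csv
| eval latency = tonumber(latency)
| stats max(latency) AS worst_latency BY host
| where worst_latency > 300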


woodcock
Esteemed Legend

I also tried to dig into the guts with walklex, but apparently that command does not support metrics indexes.
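
For comparison, on an events index walklex does let you enumerate the indexed terms in the tsidx files, something like this (using _internal only as an example):

|walklex index=_internal type=term
| head 20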
