Getting Data In

How to access "_indextime" to calculate "latency" in "metrics index"?

woodcock
Esteemed Legend

For an events index, I would do something like this:

|tstats max(_indextime) AS indextime
WHERE index=_* OR index=*
BY index sourcetype _time
| stats avg(eval(indextime - _time)) AS latency BY index sourcetype
| fieldformat latency = tostring(latency, "duration")
| sort 0 - latency

I know that _indextime must exist as a field in a metrics index, too, but accessing time fields there is complicated, as evidenced by the special earliest_time() and latest_time() functions. I have tried everything to access both _time and _indextime in a metrics index (with both mstats and mcatalog) and have failed. Is there any way?
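
For context, the closest I can get is reading the event-time side with those special functions, e.g. something like this (the index name is just a placeholder):

|mstats latest_time(_value) AS last_event_time WHERE index=my_metrics_index BY metric_name
| fieldformat last_event_time = strftime(last_event_time, "%F %T")

but nothing like this exposes _indextime.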

1 Solution

woodcock
Esteemed Legend

Support confirmed both of these:

1: Sorry to be the bearer of bad news here, but we don’t store _indextime for metric indexes, and this means that currently there isn’t a way to measure metric index latency. We might be able to make per-source latency numbers available in metrics.log in the future – stay tuned.
2: Thanks for this feedback. You are correct, the walklex command only applies to events not metrics. It does mention the tsidx but I don’t know if many users will understand that tsidx applies to event indexes. I’ll make it explicit.

Therefore the only other options are a real-time search (which, if you have a good admin, has been thoroughly disabled) or something like this:
Create a scheduled search with a cron schedule of * * * * * so that it runs every minute and looks like this:

|mstats latest_time(_value) AS _time WHERE index="*" BY host
| stats max(_time) AS _time BY host
| eval now = now()
| eval latency = now - _time
| fieldformat latency = tostring(latency, "duration")
| fieldformat now = strftime(now, "%F %T")
| where latency > 60
| outputlookup append=t YOUR_LATENCY_TEST_FILE.csv
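
If you prefer to set this up directly in savedsearches.conf rather than through the UI, a minimal stanza would be something like this (the stanza name, dispatch window, and lookup file name are just placeholders to adapt):

[metrics_index_latency_check]
enableSched = 1
cron_schedule = * * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
search = | mstats latest_time(_value) AS _time WHERE index="*" BY host \
  | stats max(_time) AS _time BY host \
  | eval now = now(), latency = now - _time \
  | where latency > 60 \
  | outputlookup append=t YOUR_LATENCY_TEST_FILE.csv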

Then you can check later with this:

|inputlookup YOUR_LATENCY_TEST_FILE.csv
| stats max(now) AS now BY host _time
| eval latency = now - _time
| rename _time AS time
| sort 0 - latency
| fieldformat time = strftime(time, "%c")
| fieldformat now = strftime(now, "%c")
| eval latency = tostring(latency, "duration")
| stats list(*) AS * BY host


woodcock
Esteemed Legend

I also tried to dig into the guts with walklex, but apparently that command does not support metrics indexes.
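
For anybody curious, this is the sort of probe that works against an events index (the index name is just a placeholder) but returns nothing useful for a metrics index:

|walklex index=my_events_index type=term
| stats count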
