Getting Data In

How to access _indextime to calculate latency in a metrics index?

woodcock
Esteemed Legend

For an events index, I would do something like this:

| tstats max(_indextime) AS indextime
WHERE index=_* OR index=*
BY index sourcetype _time
| stats avg(eval(indextime - _time)) AS latency BY index sourcetype
| fieldformat latency = tostring(latency, "duration")
| sort 0 - latency

I know that _indextime must be stored in a metrics index, too, but accessing time fields there is complicated, as evidenced by the special earliest_time() and latest_time() functions. I have tried everything to access both _time and _indextime in a metrics index (with both mstats and mcatalog) and have failed. Is there any way?
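
To illustrate the kind of thing I tried: an mstats search along these lines can read the newest measurement timestamp per metric through the special latest_time() function, but there is no analogous function for _indextime (the index name here is just a placeholder):

| mstats latest_time(_value) AS latest WHERE index=my_metrics_index BY metric_name
| fieldformat latest = strftime(latest, "%F %T")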

1 Solution

woodcock
Esteemed Legend

Support confirmed both of these:

1: Sorry to be the bearer of bad news here, but we don’t store _indextime for metric indexes, and this means that currently there isn’t a way to measure metric index latency. We might be able to make per-source latency numbers available in metrics.log in the future – stay tuned.
2: Thanks for this feedback. You are correct, the walklex command only applies to events, not metrics. The documentation does mention tsidx files, but I don’t know if many users will understand that tsidx applies only to event indexes. I’ll make it explicit.

Therefore the only other options are real-time search (which, if you have a good admin, has been thoroughly disabled) or something like this:
Create a scheduled search, with a cron schedule of * * * * * so that it runs every minute, that looks like this:

| mstats latest_time(_value) AS _time WHERE index="*" BY host
| stats max(_time) AS _time BY host
| eval now = now()
| eval latency = now - _time
| fieldformat latency = tostring(latency, "duration")
| fieldformat now = strftime(now, "%F %T")
| where latency > 60
| outputlookup append=t YOUR_LATENCY_TEST_FILE.csv
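
If you would rather manage this from configuration than the UI, a minimal savedsearches.conf sketch would look something like this (the stanza name is a placeholder I made up; the search body is the one above):

[metrics_latency_probe]
enableSched = 1
cron_schedule = * * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
search = | mstats latest_time(_value) AS _time WHERE index="*" BY host \
  | stats max(_time) AS _time BY host | eval now = now() | eval latency = now - _time \
  | where latency > 60 | outputlookup append=t YOUR_LATENCY_TEST_FILE.csv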

Then you can check later with this:

| inputlookup YOUR_LATENCY_TEST_FILE.csv
| stats max(now) AS now BY host _time
| eval latency = now - _time
| rename _time AS time
| sort 0 - latency
| eval time = strftime(time, "%c")
| eval now = strftime(now, "%c")
| eval latency = tostring(latency, "duration")
| stats list(*) AS * BY host
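
One caveat: append=t grows the lookup file indefinitely, so it is worth pairing the probe with a periodic cleanup search. A minimal sketch (the 7-day retention window is an arbitrary choice of mine):

| inputlookup YOUR_LATENCY_TEST_FILE.csv
| where now >= relative_time(now(), "-7d@d")
| outputlookup YOUR_LATENCY_TEST_FILE.csv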

woodcock
Esteemed Legend

I also tried to dig into the guts with walklex, but apparently that command does not support metrics indexes.
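
For reference, this is the kind of walklex search that works against an events index but gets you nowhere for a metrics index (the index name is a placeholder):

| walklex index=my_events_index type=term
| stats count BY term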
