We have data that is always indexed more than 1800 seconds (the maximum monitoring lag permitted) after its event timestamp. This is due to a variety of issues, from the frequency at which the data is generated to transport delays.
What is the recommended way to create a KPI, run every 15 minutes, that counts events over a 15-minute window whose timestamps fall 45 to 60 minutes before "now"?
I have tried explicitly setting "earliest=-60m@m latest=-45m@m", which seems to capture the data (monitoring lag = 0). Is this the right way to handle such a case?
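For context, the search described above would look something like the sketch below; the index, sourcetype, and count logic are placeholders, not from the original post:

```
index=my_index sourcetype=my_sourcetype earliest=-60m@m latest=-45m@m
| stats count
```

Scheduled every 15 minutes, each run covers a distinct 15-minute window of event timestamps, shifted 45 minutes into the past to stay clear of the indexing lag.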
Secondary to that, is there an effect on the upstream service analyzer score? The service I have running like this always shows a score of N/A, which I suspect has something to do with timing.
Please look up what monitoring lag is. You can create a KPI based on your data; there is no restriction on writing a KPI base search or ad hoc search. As long as your search is not missing data or counting duplicate data, you can use it.
Hope this helps!
@VatsalJagani Thanks for that. I have read it about 15 times, and it all makes sense. I take it to mean that the lag you account for is based on the event timestamp, not the index time.
In 3.1.4, this is limited to 1799 seconds in the UI. My question is: what is the best approach for accounting for lags longer than this? In a base search, do I just use earliest and latest? Or do I hack the shared saved search generated from the base search?
If you know your events always lag by more than that, then yes, you can use earliest and latest in your query.
But if your data is lagging that much, you should also check your data source. Ideally, it should not lag that long.
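One way to confirm how far behind the data actually is, using Splunk's built-in _time and _indextime fields (the index name and time range below are placeholders):

```
index=my_index earliest=-24h
| eval lag_seconds=_indextime-_time
| stats avg(lag_seconds) max(lag_seconds) perc95(lag_seconds)
```

If the maximum lag stays reliably under a known bound, you can size the earliest/latest offsets in the KPI search accordingly.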
This might work. Does anyone know what ITSI uses in its lag calculations: index time or event timestamp? For example, if I specify a search lag of 10 minutes, is it searching for timestamps from 10 minutes ago or index times from 10 minutes ago? That makes a big difference.