I’m seeing a discrepancy between the results from the | metadata type=hosts command and the actual event data in my index. I have an alert that monitors hosts that stop reporting events, and it’s based on | metadata. When I run the metadata query, it shows that the last event for a specific host was about 90 days ago. However, when I search manually using index=<my_index> host=<my_host>, I can see that this host actually reported events as recently as 15 days ago.
It seems like the metadata command isn’t picking up the most recent activity for this host. I’d like to understand why this happens — is there a delay or a condition that prevents metadata from updating? Is there any way to force metadata to refresh, or to prevent these discrepancies in the future?
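For reference, the two searches I'm comparing look roughly like this (the index and host names are placeholders):

| metadata type=hosts index=<my_index>
| search host=<my_host>
| eval lastSeen=strftime(lastTime, "%F %T")

versus the manual event search:

index=<my_index> host=<my_host>
| stats latest(_time) as lastSeen

The metadata search puts lastTime around 90 days ago, while the event search finds events from about 15 days ago.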
Any insights or best practices for keeping metadata accurate would be greatly appreciated.
As @bowesmana mentioned, the metadata command is not always reliable for this use case. I have seen its results become outdated when new buckets are created.
So I always prefer to run:
| tstats latest(_time) as lastSeen where index=<my_index> by host
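For the alerting use case, you can extend that to flag hosts that have gone quiet, for example like this (the 24-hour threshold is just an illustration, adjust it to your reporting interval):

| tstats latest(_time) as lastSeen where index=<my_index> by host
| where lastSeen < relative_time(now(), "-24h")
| eval lastSeen=strftime(lastSeen, "%F %T")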
Regards,
Prewin
🌟 If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Although I can't give an absolute answer to your question, I do know that metadata has always been an unreliable source of truth. In the docs, for example, there is this somewhat opaque statement:

"However, in environments with large numbers of values for each category, the data might not be complete"
so I have never used it to report on missing data. For that exact scenario, identifying hosts that stop sending data, I use tstats instead, which gives more reliable results and is also pretty quick.
If you can easily change to something like

| tstats min(_time) as firstEvent max(_time) as lastEvent count by host

then that will be far more reliable.
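As a rough sketch, with the search scoped to your index and the epoch times made readable (the index name is a placeholder):

| tstats min(_time) as firstEvent max(_time) as lastEvent count where index=<my_index> by host
| fieldformat firstEvent=strftime(firstEvent, "%F %T")
| fieldformat lastEvent=strftime(lastEvent, "%F %T")

Because tstats reads the indexed data directly rather than the bucket-level metadata summaries, it reflects the most recently indexed events for each host.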