To compare indexer performance, understand search operators, or break down log levels (INFO through CRIT), you have to dig into search.log. Indexing search.log itself isn't scalable, though. Instead, every five minutes this app launches a scripted input that scans the local dispatch directory for search.log files and parses the details out into a JSON blob written to index=_internal.
Details so far:
* Per Search Peer: # of Results
* Per Search Peer: Amount of time reported
* Time to set up search peers
* Count by log level of the search (e.g., INFO / WARN / ERROR / FATAL)
* Count by log operator of the search (e.g., "SearchOperator:kv" vs. "LMConfig")
* Time taken per operation (e.g., counts of slow vs. fast operations; this can help identify particularly expensive regex or eval commands, etc.)
* SearchID
* Time Run
Overall, this should provide trending information and, specifically, make it possible to detect an indexer that is slower than its peers.
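The scan-and-parse step above can be sketched roughly as follows. This is a minimal illustration, not the app's actual code: the dispatch path, the log-line regex, and the `summarize_search_log` / `scan_dispatch` names are all assumptions, and a real search.log carries more detail (per-peer timings, setup time, etc.) than the level and operator counts shown here.

```python
import json
import os
import re
from collections import Counter

# Assumed dispatch location; the real path is under $SPLUNK_HOME/var/run/splunk/dispatch.
DISPATCH_DIR = os.environ.get("SPLUNK_DISPATCH_DIR",
                              "/opt/splunk/var/run/splunk/dispatch")

# search.log lines generally begin with a date, a time, a log level, and a
# component, e.g. "01-02-2024 10:00:00.123 INFO  SearchOperator:kv - ...".
# This pattern is a simplifying assumption about that layout.
LINE_RE = re.compile(
    r"^\S+ \S+ (?P<level>DEBUG|INFO|WARN|ERROR|FATAL|CRIT)\s+(?P<component>\S+)")

def summarize_search_log(lines, search_id):
    """Count log levels and operators/components across one search.log."""
    levels = Counter()
    components = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            levels[m.group("level")] += 1
            components[m.group("component")] += 1
    return {
        "search_id": search_id,
        "level_counts": dict(levels),
        "component_counts": dict(components),
    }

def scan_dispatch(dispatch_dir=DISPATCH_DIR):
    """Yield one JSON blob per search.log found in the dispatch directory."""
    for sid in os.listdir(dispatch_dir):
        log_path = os.path.join(dispatch_dir, sid, "search.log")
        if os.path.isfile(log_path):
            with open(log_path, errors="replace") as f:
                yield json.dumps(summarize_search_log(f, sid))
```

In the app, each emitted JSON blob would be printed to stdout by the scripted input so Splunk writes it to index=_internal, keyed by SearchID so runs can be trended over time.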