Hello,

Indeed, my concern is performance for both indexing and search, because from time to time I can see _indextime drifting away from the event timestamp: a gap of 5 to 100 seconds, while the average lag is 30 ms (200 ms at worst).

We are indexing JSON-formatted logs and yes, INDEXED_EXTRACTIONS = json is set on the sourcetype, so I assume all fields are indexed automatically.

Checking with walklex:

| walklex index="my_index" type=field
| search NOT field=" *"
| stats list(distinct_values) by field

For the last 60 minutes it shows 10,272 events and 403 fields. Some of those fields have unique values per event, for example:

field       list(distinct_values)
sessionid   193249 220320 204598 201715 214656 183875 195165 196683 221079 204274 215453 186199 181808 198200 178018 192400 184038 176133 205139 205432 186822 174164 196244 185719 179251 197758 203770 190584 178399

"avoiding indexed fields is sound as a general rule of thumb"

If I understand correctly, the best practice is to avoid indexing fields with a large number of unique values, and to index only fields with a low number of possible values (success/failed, green/yellow/red, ...).

In my case, then, what would be a better configuration to reduce the number of indexed fields and index only the low-cardinality ones, as you mentioned?

Again, thank you all for your time and support.
Regards
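For context, here is a sketch of the kind of change I have in mind, in case someone can confirm it. The sourcetype name `my_json_sourcetype` and the field name `status` are hypothetical placeholders: the idea is to drop INDEXED_EXTRACTIONS = json in favor of search-time extraction with KV_MODE = json, and (optionally) keep only a selected low-cardinality field indexed via INGEST_EVAL.

```ini
# props.conf -- sketch, sourcetype name "my_json_sourcetype" is a placeholder
[my_json_sourcetype]
# Remove (comment out) index-time JSON extraction so the 400+ fields
# are no longer written to the tsidx files:
# INDEXED_EXTRACTIONS = json
# Extract JSON fields at search time instead:
KV_MODE = json
# Optional: selectively create one low-cardinality indexed field
TRANSFORMS-index_low_card = index_status_only

# transforms.conf -- optional companion stanza, field "status" is a placeholder
[index_status_only]
# Fields created by INGEST_EVAL at index time become indexed fields:
INGEST_EVAL = status=json_extract(_raw, "status")
```

If this is roughly right, my understanding is that searches like `status=failed` would still be fast (indexed), while high-cardinality fields like sessionid would only be extracted at search time. Please correct me if I have misread the docs.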