While @ITWhisperer's solution is a neat trick, I'd rethink the search.

1. You're searching for `*mrstrategy*`. With wildcards on both ends Splunk can't use the index and has to scan the raw events, so that's gonna be slow (see the first sketch after this list).
2. First you're using automatic json extraction, then you call spath on top of it, so you're parsing the same data twice. Pick one or the other (second sketch).
3. Always be vigilant around the "dedup" command. You're deduping on attributes, then statsing on attributes and _time. After the dedup you will only ever have one _time per attributes value, so that extra split does nothing. And dedup will move processing to the SH tier (third sketch).
4. I have a hunch that you have some duplicate data which you want to get rid of at search time. Maybe it's worth reworking your ingestion process so you're not wasting license?
5. Unfortunately, as you must have noticed already - this is a very ugly data format (from Splunk's point of view). This whole "keyname=key1,value=something" schema is very inconvenient for searching and processing, since you first have to read, parse, and interpret all events to get to the "contents". So now you're bending over backwards to do something which should be as easy as writing a simple filter condition (fourth sketch). Are you sure you don't have someone in your org to sit with and have a chat about the data format? Or about the ingestion process - maybe it's worth setting up something that will transform your data into something more reasonable?
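Regarding point 1 - if "mrstrategy" actually occurs as a whole indexed term in your events (not glued into the middle of a bigger string), you can drop the wildcards and let the index do the work. A rough sketch; the index and sourcetype names here are made up, adjust to your environment:

```
index=your_index sourcetype=your_sourcetype TERM(mrstrategy)
```

TERM() matches the literal token straight from the index instead of scanning raw events for a wildcard pattern. If the string only ever shows up as a fragment of a longer token, this won't apply and you're stuck with the scan - which is one more argument for fixing the data.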
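For point 2 - if your sourcetype already does automatic json extraction (KV_MODE=json or similar), the fields are already there and the extra spath just re-parses the same event. A quick way to check, again with hypothetical names:

```
index=your_index sourcetype=your_sourcetype mrstrategy
| table attributes{}.keyname attributes{}.value
```

If those columns come back populated, you can drop the spath call entirely.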
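For point 3 - assuming the dedup is only there to keep one (the most recent) event per attributes value, stats can do that job on its own, and it runs distributed on the indexers instead of piling everything onto the search head. A sketch with a made-up base search:

```
index=your_index sourcetype=your_sourcetype mrstrategy
| stats latest(_time) as _time by attributes
```

That gives you one row per attributes value with its newest timestamp - the same thing the dedup + stats pair produces, minus the SH-side processing. Add whatever other aggregates you actually need to the stats.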
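And for point 5 - until the format gets fixed at the source, you can at least flatten the keyname/value pairs into real fields at search time so filtering becomes a plain condition. A sketch, assuming the events look roughly like `{"attributes":[{"keyname":"key1","value":"something"},...]}` (the key1/something filter is just an example):

```
index=your_index sourcetype=your_sourcetype mrstrategy
| spath path=attributes{} output=attr
| mvexpand attr
| spath input=attr
| eval {keyname} = value
| fields - attr keyname value
| search key1="something"
```

Mind that mvexpand multiplies your event count (one row per attribute), so if you need all the keys back on a single event you'd have to stats them back together by some event id - and it can eat memory on bigger result sets. Which is exactly why doing this once at ingestion beats doing it on every search.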