I am seeing odd behavior where my search event count differs when the exact same query is run separately versus when it is used to build a table. This happens with a large number of log entries (around 10 million); for smaller volumes the counts match.
Below is a rough example of what the query looks like:
index="my_index"
"Message1"
OR "Message2"
OR "Message3"
| stats count
| fieldformat count=tostring(count,"commas")
| eval "Type"="Metric1"
| append [ search
index="my_index"
"Message2"
OR "Message3"
| stats count
| fieldformat count=tostring(count,"commas")
| eval "Type"="Metric2"
] | append [ search
index="my_index"
"Message1"
OR "Message3"
| stats count
| fieldformat count=tostring(count,"commas")
| eval "Type"="Metric3"
] | append [ search
index="my_index"
"Message2"
OR "Message3"
| stats count
| fieldformat count=tostring(count,"commas")
| eval "Type"="Metric4"
]
| table count, Type
If I run the query for Metric2 separately, I get the right count; when it runs as part of building the table, the count is usually much lower.
Also, since I am searching for the same messages across these queries, is it possible to reuse the counts of those messages and just add rows based on them?
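For example, one direction (a sketch, not tested at this data volume): count each message once in a single search with `count(eval(searchmatch(...)))` and derive the metric rows arithmetically. This assumes each event contains at most one of the three messages; if an event can contain several, the sums would double-count it.

```spl
index="my_index" ("Message1" OR "Message2" OR "Message3")
| stats count(eval(searchmatch("Message1"))) AS c1,
        count(eval(searchmatch("Message2"))) AS c2,
        count(eval(searchmatch("Message3"))) AS c3
| eval Metric1=c1+c2+c3, Metric2=c2+c3, Metric3=c1+c3, Metric4=c2+c3
```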
Hi asubramanian,
check the results of your subsearches: there's a limit of 50,000 results in subsearches, and maybe that's your situation.
Why don't you use a different approach with only one search (remember that Splunk isn't a DB!)?
Something like this:
index=my_index
| eval Type=case(searchmatch("Message1"),"Metrics1",searchmatch("Message2"),"Metrics2",searchmatch("Message3"),"Metrics3")
| stats count by Type
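Note that `case` assigns each event only the first Type that matches, so these three counts are disjoint. If you need the original overlapping metrics (e.g. Metric2 = "Message2" OR "Message3"), a single-pass variant might look like the following (a sketch; the `transpose`/`rename` field names may need adjusting to your Splunk version):

```spl
index="my_index" ("Message1" OR "Message2" OR "Message3")
| stats count(eval(searchmatch("Message1") OR searchmatch("Message2") OR searchmatch("Message3"))) AS Metric1,
        count(eval(searchmatch("Message2") OR searchmatch("Message3"))) AS Metric2,
        count(eval(searchmatch("Message1") OR searchmatch("Message3"))) AS Metric3,
        count(eval(searchmatch("Message2") OR searchmatch("Message3"))) AS Metric4
| transpose column_name=Type
| rename "row 1" AS count
| table count, Type
```

Because everything runs in one search, there are no subsearches and therefore no 50,000-result truncation.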
Ciao.
Giuseppe