We use Splunk> 6.4.4 and sometimes run memory-intensive searches in the web app.
After wondering why the results were obviously wrong, I had a look at the search.log and found an error stating that the results may be incomplete:
    02-15-2017 09:56:05.815 ERROR StatsProcessor - Reached limit max_mem_usage_mb (10240 MB), results may be incomplete! Please increase the max_mem_usage_mb in limits.conf .
I think it is a disastrous strategy to suggest to the user that all is fine while Splunk> obviously knows the results are incomplete.
Why does Splunk> give me just a green sign next to the job pulldown and not an alert symbol? Can I configure this behaviour, or is it a bug?
Best regards
Marco
Hi marcokrueger, as lguinn mentioned, I don't believe there is any mechanism to adjust the warning symbols; at least nothing that I'd expect to be supported.
However, to address the initial concern: I expect that in most cases the searches are fine, and you shouldn't be concerned about incomplete results. As limits.conf describes, searches that exceed this limit spill to disk. That can hurt search performance, but the searches should still be complete. The exception is heavy use of the mvexpand command, which can lead to truncated results.
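If you do want more headroom, the change would look something like this (a sketch only: the value is just an example, and you should confirm against the limits.conf spec for your version that the setting belongs in the [default] stanza):

    # $SPLUNK_HOME/etc/system/local/limits.conf
    [default]
    # example value: 20 GB instead of the 10 GB limit hit above
    max_mem_usage_mb = 20480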
Please let me know if this answers your question! 😄
Hi muebel,
thank you for the answer. The affected queries don't use mvexpand; they use eventstats to enrich already existing events with additional fields. Performance doesn't matter in these cases, and the searches did complete. The wrong results are obvious when the new fields are shown in the output and you can see that some of them are missing, but if the user runs another statistic over these values, the error is masked and the user may trust completely wrong data.
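To illustrate, the affected queries follow a pattern like this (index and field names are invented for the example):

    index=web sourcetype=access_combined
    | eventstats avg(bytes) AS avg_bytes BY host
    | stats avg(avg_bytes) AS overall_avg

When the memory limit is reached, avg_bytes is simply missing from some events, and the final stats quietly averages over whatever values remain, so the number looks plausible but is wrong.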
Without doubt, this behaviour is completely described in the eventstats documentation, but that doesn't mitigate the pain of relying on wrong data, because there was no warning that the memory limit had been reached. I can't expect users to dig through the search.log for errors; they just want to trust the warning symbol.
To prevent this I will increase the limit, but tomorrow the next user will come around the corner and exhaust the new limit too. So I suggest that in a future version, Splunk> shows a warning whenever eventstats runs out of memory and stops adding the requested fields to the search results.
Best regards
Marco
It is not a bug.
So it's a feature?
My pain is having to explain to users that all their results may be corrupt, or may be okay... depending on the day of the week or their luck...
I think you should explain to users that they should take note of the information button when it appears. Sadly, there is no way AFAIK to change the color or symbol.
If such large searches are common, perhaps you should make the suggested change to limits.conf.
Also, do these search errors (or related messages) appear in any logs other than the search.log? The search.log is transient and not collected into the _internal index. If a message appears in either the _internal or _audit index when this occurs, you could set up an alert to detect it...
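If the message does show up in _internal, such an alert search could be as simple as this sketch (the message text is copied from the search.log entry above; verify that it actually reaches splunkd.log on your system before relying on it):

    index=_internal sourcetype=splunkd log_level=ERROR "Reached limit max_mem_usage_mb"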
Finally, you might post some of the offending searches on this forum. There are many people here who are experienced at optimizing searches to reduce resource usage.