I am using report acceleration.
My original report was for 1 hour:
index=ckpfw002 sourcetype=opsec action=blocked OR action=dropped
| timechart count
I accelerated the report for 30 days.
Now that it is 100% completed, when I run the report for, say, 7 days, it says: "Dispatch Command: The search process with sid=1554406128.9740437C9C149-435D-43B6-AA71-9D2A5518DF5F was forcefully terminated because its physical memory usage (28177.609000 MB) has exceeded the 'search_process_memory_usage_threshold' (24000.000000 MB) setting in limits.conf."
When I look at the job inspector, it is using the accelerated report --> [splunk-idx-1023] Using summaries for search, summary_id=C67F4BC3-E7CF-4AC4-9CF9-090758F478F6searchu621929NS000f0d20f92d3c54, maxtimespan=30m
I am trying to do a timechart for the entire month, but it fails even when I select just 7 days. Any suggestions?
I would make a summary index; accelerated searches take a lot of indexer resources. https://docs.splunk.com/Documentation/Splunk/7.2.5/Knowledge/Usesummaryindexing
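A minimal sketch of that approach (the index name `summary_fw` and marker value here are made-up examples, not from this thread): schedule a search that uses the summary-indexing variant `sistimechart` to write pre-computed counts into a summary index, then point the report at the summary instead of the raw events.

```
# Scheduled search (e.g. every 30 minutes) that populates the
# hypothetical summary index "summary_fw":
index=ckpfw002 sourcetype=opsec action=blocked OR action=dropped
| sistimechart count
| collect index=summary_fw marker="report=fw_blocked"

# Report over any time range, served from the summary index:
index=summary_fw report=fw_blocked
| timechart count
```

Alternatively, enable "Summary indexing" on the saved search in the UI instead of using an explicit `| collect`; the effect is the same.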
I understand accelerated searches take a lot of indexer time when they are building, but the accelerated report (30 days) is 100% complete. So, when I run the report (7 days) that uses the completed report acceleration, it should be pulling from the completed acceleration report, correct?
You'll need to increase the threshold within limits.conf.
By default, search_process_memory_usage_threshold is set to 4 GB (version dependent), but that setting is overruled by search_process_memory_usage_percentage_threshold.
Both require that enable_memory_tracker be set to true, and in that case a process is killed when it exceeds the default value of 25% set by search_process_memory_usage_percentage_threshold.
Stanza from limits.conf:
search_process_memory_usage_percentage_threshold = <float>
* To use this setting, the 'enable_memory_tracker' setting must be set to true.
* Specifies the percent of the total memory that the search process is
  entitled to consume.
* Search processes that violate the threshold percentage are terminated.
* If the value is set to zero, then Splunk search processes are allowed to
  grow unbounded in terms of percentage memory usage.
* Any setting larger than 100 or less than 0 is discarded and the default
  value is used.
* Default: 25%
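A hedged example of what raising those limits could look like (the 32000 MB and 40% values are purely illustrative, not recommendations; size them for your search-head hardware):

```
# limits.conf, e.g. in $SPLUNK_HOME/etc/system/local/
[search]
# required for either memory threshold to take effect
enable_memory_tracker = true
# absolute cap in MB -- illustrative value only
search_process_memory_usage_threshold = 32000
# percentage-of-total-memory cap -- illustrative value only
search_process_memory_usage_percentage_threshold = 40
```

A restart (or a rolling restart on a search head cluster) is needed for limits.conf changes to apply.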
I understand where the limitation is, but why is it hitting that limitation if the report is using the accelerated report's data, which has already been gathered 100%?
My accelerated report to collect firewall stats is 100% completed. I am running a timechart on data that has already been collected. So in my mind, the way accelerated reports work, my report should just be pulling the stats from the acceleration summaries, which should be minimal work. Is my understanding of accelerated reports incorrect?
What is the size of your index and of the acceleration summaries?
The acceleration summaries (report or data model) reside on the indexers, and the results still have to be pulled down to the search head for you to view. Depending on the size of your search artifacts, and those of other search activity happening on the SHC, you could still very well hit the limits.
Also, when you accelerate 30 days of data (or any range), that 30 days is rolling. Meaning, the scheduler runs jobs in the background to keep your acceleration up to date as new data comes in. Those also count against the numbers mentioned above.
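One hypothetical mitigation, not suggested in this thread: pin the timechart span explicitly so the number of output buckets (and the memory the search process holds) stays bounded regardless of the time range selected. Since the job inspector shows the summary at maxtimespan=30m, a span that is a multiple of 30 minutes should still be answerable from the summary:

```
index=ckpfw002 sourcetype=opsec action=blocked OR action=dropped
| timechart span=1h count
```

With no span specified, timechart picks a bin size based on the range, which can mean many more buckets in memory for a 7- or 30-day window.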