We have a search that generates a large number of results; for each result we need to take an alert action (individually). While I've increased the maxtime from the default of 5 minutes to 3 hours, the tracing logs from the alert action show it stops running after 5 minutes despite having processed only a fraction of the search results.
Regarding the claim that it's only processed a fraction of the results:
To increase the maxtime, I initially set it just for this alert action; since the search head is dedicated to running alert actions, I then also increased it globally in case that mattered. After both changes, I validated the setting with btool and then restarted the Splunk instance.
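For illustration, this is roughly what I set (my_custom_action is a placeholder for the redacted action name; 3h is the value mentioned above):

# local/alert_actions.conf on the search head
[my_custom_action]
maxtime = 3h

# global default, just in case
[default]
maxtime = 3h

and I verified the effective value (and which file it comes from) after the restart with something like:

$SPLUNK_HOME/bin/splunk btool alert_actions list my_custom_action --debug | grep maxtime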
Edit: It looks like when I cloned the search so I wasn't modifying the production copy, it added more fields to savedsearches.conf, including the following setting:
action.<redacted custom alert action name>.maxtime = 5m
I increased that setting, assuming it was a limitation; it does not appear to have resolved the issue. My current assumption is that it was part of the problem, just not the complete problem.
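For reference, the relevant part of the cloned search's savedsearches.conf now looks roughly like this (stanza and action names are placeholders):

# local/savedsearches.conf
[My Cloned Search]
action.my_custom_action = 1
action.my_custom_action.maxtime = 3h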
What happened to the alert action run time after you increased the maxtime? Does the alert action still stop after 5 minutes? In which configuration file did you update the maxtime attribute?
Ideally it should be in alert_actions.conf under this alert action's stanza.
maxtime = <integer> [m|s|h|d]
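For example, under your custom action's stanza (placeholder name), something like:

[my_custom_action]
maxtime = 3h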
Hope this helps.
Nothing happened to the alert action after I increased the maxtime. That is why I have an open question with no accepted solution.
I'm assuming my initial statement about setting the maxtime wasn't clear, but I originally set it in alert_actions.conf for this specific alert action. I later changed the default as well. Then, per my edit above, I also updated it in savedsearches.conf, since cloning the search apparently copied the setting over to that configuration file.
@triest: Is your issue resolved? I have the same issue. If it is fixed, please share the steps.