Hi, I have an alert which executes a very simple search. The search consists of a macro invoked 40 times, each time with different input parameters. This is the search inside the alert:
`macro_level_alarm(tag="C245", volmax="1520", name="245")`
| append
[search `macro_level_alarm(tag="C246", volmax="1520", name="246")`]
...
...
...
| append
[search `macro_level_alarm(tag="C518", volmax="600", name="518")`]
| table Name ValueLatestEvent TimeLatestEvent
The macro's query is again very simple: it just runs a plain search over the last 15 minutes and takes the most recent value. Here is the query:
source="***" index="***" (Tag="$tag$")
| streamstats latest(_time) as latest_time by Tag
| where _time=latest_time
| eval ValueLatestEvent=round(((Value*100)/$volmax$),1)
| eval Name= "$name$"
| convert timeformat="%Y-%m-%d %H:%M" ctime(_time) AS TimeLatestEvent
| table Name ValueLatestEvent TimeLatestEvent
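For completeness, the macro takes three arguments (tag, volmax, name); its macros.conf stanza looks roughly like this (a sketch only, with source/index masked as above):
[macro_level_alarm(3)]
args = tag, volmax, name
definition = source="***" index="***" (Tag="$tag$") \
| streamstats latest(_time) as latest_time by Tag \
| where _time=latest_time \
| eval ValueLatestEvent=round(((Value*100)/$volmax$),1) \
| eval Name="$name$" \
| convert timeformat="%Y-%m-%d %H:%M" ctime(_time) AS TimeLatestEvent \
| table Name ValueLatestEvent TimeLatestEvent
iseval = 0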
The alert is scheduled to execute every 4 hours.
The problem I'm facing is that when I execute this alert invoking the macro only 10 times, everything is fine and I get the result in a few seconds. If I invoke the macro more than 10 times (e.g. 11, or 40 as in this case), Splunk gets stuck in parsing, then times out and returns no result.
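In case it's relevant, I've read that append accepts its own maxtime/maxout options for each subsearch, something like the line below (the values are just examples, not what I have configured), but I haven't verified whether these are what's timing out:
| append maxtime=120 maxout=100 [search `macro_level_alarm(tag="C246", volmax="1520", name="246")`]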
I guess there are some parameters to change inside limits.conf. I've already modified
[search]
max_rt_search_multiplier
but I'm not sure this is the right setting, as nothing changed and my alert still gets stuck.
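Since max_rt_search_multiplier seems to be about real-time searches, I'm wondering if the subsearch settings are the more relevant knobs. From limits.conf.spec they look roughly like this (the values below are the documented defaults as far as I can tell, not something I've tuned):
[subsearch]
# maximum number of results a subsearch returns
maxout = 10000
# maximum number of seconds a subsearch runs before being finalized
maxtime = 60
# how long subsearch results are cached, in seconds
ttl = 300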
Have you ever experienced anything like this, and do you have any idea how to solve it?
Thank you so much