Hello,
I have a Splunk Cloud deployment and my alerts are not firing. After searching for information, I found that with the search index=_internal sourcetype=scheduler status="skipped" savedsearch_name="search_name" you can see why the alerts are not triggering. It says that the maximum disk usage quota for this user has been reached. The thing is, these alerts have no owner (the owner is "nobody"), so if I am not mistaken they fall under the default maximum disk usage quota, and as far as I know changing the default quota is not recommended.
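For reference, this is roughly what I am running to diagnose it (the breakdown by the scheduler's reason field is just my guess at a convenient way to summarize the skips, and savedsearch_name is a placeholder for the real alert name):
index=_internal sourcetype=scheduler status="skipped" savedsearch_name="search_name"
| stats count by reason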
I need these alerts to trigger. What can I do to fix this problem?
Thanks in advance and best regards.
It's hard to say without a search example. Could you post an example search that has the issue?
Usually adding
|fields - _raw
already helps, since it stops the raw event text from being stored with the search results.
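For illustration, a minimal sketch of where it would go in an alert search (the index, sourcetype, and fields here are made up, so adjust them to your data):
index=web sourcetype=access_combined status=500
| fields - _raw
| table _time host status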
Hi @effem2,
Thank you for your response! The truth is that I had already solved the problem and forgot this question was still open. The cause was that, because the alerts had no owner ("nobody"), they were running under the default maximum disk usage quota. It was solved by assigning an owner to the alerts.
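In case it helps anyone else, here is a sketch of a search that should list saved searches still owned by "nobody" (assuming the standard saved/searches REST endpoint and ACL fields are available to your role):
| rest /services/saved/searches splunk_server=local
| search eai:acl.owner="nobody"
| table title eai:acl.app eai:acl.owner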
Regards