I have set up a single real-time alert that creates about 1000 rtscheduler_ entries in the dispatch directory.
Otherwise I will have to increase dispatchdirwarning_size in limits.conf, which is not really a solution if I configure additional alerts.
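For reference, raising that threshold would look roughly like the sketch below. This only silences the warning rather than reducing the number of dispatch directories; the setting is documented in Splunk's limits.conf spec as dispatch_dir_warning_size (the exact name and default may vary by Splunk version, so check the spec file shipped with your release):

    # limits.conf -- workaround sketch, not a real fix:
    # raise the threshold at which Splunk warns about dispatch directory count
    [search]
    dispatch_dir_warning_size = 5000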
[rtalert_nevis_err_requests_1min]
action.email = 1
action.email.inline = 1
action.email.reportServerEnabled = 0
action.email.sendresults = 1
action.email.to = firstname.lastname@example.org
alert.digest_mode = False
alert.expires = 6h
alert.suppress = 1
alert.suppress.fields = host
alert.suppress.period = 30m
alert.track = 0
cron_schedule = * * * * *
dispatch.earliest_time = rt-1m
dispatch.latest_time = rt
displayview = flashtimeline
enableSched = 1
quantity = 0
relation = greater than
request.ui_dispatch_view = flashtimeline
search = sourcetype="proxy" | stats sum(req) as req sum(req_4xx) as req_4xx sum(req_5xx) as req_5xx by host | eval error_rate=if(req==0,0,round((req_4xx+req_5xx)/req,3)) | where error_rate>0.5
Hi Chris, you don't have any option other than manually deleting the data from your dispatch directory if your alert creates 1000 entries. Splunk only cleans these up by its default expiration.
Thank you for taking the time to reply, but manually deleting the directories does not solve the problem.
I had to set the alert condition of the alerts to something different from "always". This prevents Splunk from creating a directory every time the alert is run/triggered, which adds up quickly for real-time alerts.
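In savedsearches.conf terms, that change can be sketched as below. This is an assumption based on Splunk's savedsearches.conf spec: the counttype setting selects the trigger condition, and "always" is the value that fires (and creates a dispatch directory) on every run. The stanza name matches the alert above; adjust the relation/quantity to your own threshold:

    # savedsearches.conf -- sketch: trigger only when the search returns results,
    # instead of counttype = always, so no dispatch directory is created on empty runs
    [rtalert_nevis_err_requests_1min]
    counttype = number of events
    relation = greater than
    quantity = 0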