Currently, our jobs directory is overflowing. To fix this, we wanted to shorten the expiry time of jobs so they are deleted from the jobs directory sooner (we have no need to keep historic ones at all). However, setting dispatch.ttl doesn't appear to have fixed this. We have the following configuration for a saved search:
[test_page_pivot]
action.email = 1
action.email.inline = 1
action.email.sendresults = 1
action.email.subject = TEST Splunk Alert: $name$
action.email.to = firstname.lastname@example.org
alert.digest_mode = True
alert.expires = 30m
alert.severity = 1
alert.suppress = 0
alert.track = 0
auto_summarize.dispatch.earliest_time = -1d@h
cron_schedule = */10 * * * *
dispatch.earliest_time = -24h@m
dispatch.ttl = 1p
enableSched = 1
search = | pivot TEST_page_pivot TEST_page_object count(TEST_page_object) AS "Total" SPLITCOL platform SPLITROW app_id AS "app_id" | search app_id="*" | rename VALUE AS unknown | addtotals
As you can see, dispatch.ttl is set to 1p for this search, which runs every 10 minutes, so each job should be reaped from the jobs directory after 10 minutes (one scheduled period). However, this is not the case.
In the jobs list, the expiry time keeps being extended, even though it initially shows the expected expiry time. What's causing this?
The answer to your conundrum lies in "action.email = 1". Any job that triggers an alert action (email, custom alert actions, etc.) takes on the TTL of that action rather than dispatch.ttl. Check alert_actions.conf; its stanza headers align with the <foo> part of "action.<foo>" in savedsearches.conf.
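If you want these jobs reaped quickly even when the email action fires, one option (a sketch, assuming you are willing to shorten the email action's TTL for every search that uses it on this instance) is to override ttl in the [email] stanza of a local alert_actions.conf:

# $SPLUNK_HOME/etc/system/local/alert_actions.conf (or the relevant app's local directory)
[email]
# Expire the artifacts of jobs that triggered the email action after one
# scheduled period (10 minutes for the */10 cron schedule above) instead of
# the longer default that ships with the email action.
ttl = 1p

You can confirm which TTL is actually in effect for the email action with btool:

splunk btool alert_actions list email --debug | grep ttl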