Deployment Architecture

Why would Splunk NOT obey "dispatch.ttl" and delete results/artifacts early?

woodcock
Esteemed Legend

We have an ES search head that is not at all overloaded, with a separate volume for the dispatch directory that has plenty of room (although we still get the 500MB dispatch-size warnings). We also have a few weekly-scheduled searches that bring back roughly 100 rows of results with a few dozen fields, using the default "dispatch.ttl" value of "2p", but the results are always gone after 2 days. We are on 7.3.latest.

We have tried setting it to 2 weeks' worth of seconds and that did not work. What could be causing this? Which logs should I look at, and what should I look for?
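For reference, this is roughly what we tried in savedsearches.conf (the stanza name below is just a placeholder, and 1209600 seconds is our two-week value):

[Example Weekly Search]
# default is 2p = 2 scheduled periods; we also tried an explicit value in seconds
dispatch.ttl = 1209600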

1 Solution

matthewhasty
Explorer

I don't think it has to do with the amount of space in your dispatch directory. If it were completely full, Splunk should not delete the jobs; instead it should stop allowing new searches to be dispatched. Do these searches have any additional actions, such as e-mail? The ttl for those actions may be overriding it. Alert actions like e-mail have a lifetime of 24 hours, and with the default of 2x that value, the artifacts would live for 2 days, which is exactly what you are seeing.

alert_actions.conf is where this would be modified.
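If the e-mail action does turn out to be the cause, something along these lines in alert_actions.conf should extend the artifact lifetime; the value shown is just an example for two weeks, so adjust to whatever you actually need:

[email]
# lifetime (in seconds) of search artifacts kept for the email action
# 1209600 seconds = 14 days
ttl = 1209600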

