I made a terrible mistake and tried to use Splunk as a non-admin for the first time in a year or so. In doing so I experienced the normal user woes of job queuing. In reaction to the queuing I went to the Job Manager and deleted all of my own jobs except the latest queued job I cared about. After deleting the older jobs, my queued search did not resume within a reasonable period of time (within 5 seconds). I then went back to the job activity monitor and saw that the jobs I had deleted seconds before were still listed.
How long should someone expect to wait for queued jobs to resume after older jobs are deleted? It seems like the desired effect only arrives after a matter of minutes, not seconds. Is this configurable?
Hi
Those should be removed quite soon, although it depends somewhat on your environment.
Are you sure that you canceled and/or removed those jobs from all apps? By default, the Activity > Jobs view shows users only the jobs from the current app. Select All in the App selector and check that all of them have been removed.
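If you want to double-check outside the UI, you could also look at your jobs through the search jobs REST endpoint on the management port, which is not scoped to the current app the way the default Job Manager view is. This is only a rough sketch; the host and credentials below are placeholders, and you would want to adjust the filtering so the queued job you still care about is left alone:

    import requests

    BASE = "https://splunk.example.com:8089"   # placeholder: your search head's management port
    AUTH = ("your_user", "your_password")      # placeholder credentials

    # List the search jobs visible to this user across all apps.
    resp = requests.get(
        f"{BASE}/services/search/jobs",
        params={"output_mode": "json", "count": 0},
        auth=AUTH,
        verify=False,  # only for lab / self-signed certs
    )
    resp.raise_for_status()

    for entry in resp.json()["entry"]:
        sid = entry["content"]["sid"]
        state = entry["content"]["dispatchState"]
        print(sid, state)
        # Delete finished jobs; leave QUEUED/RUNNING ones untouched so the
        # search you are waiting on is not affected.
        if state in ("DONE", "FAILED", "PAUSED"):
            requests.delete(f"{BASE}/services/search/jobs/{sid}",
                            auth=AUTH, verify=False)

Re-running the listing after a short wait will show whether the deletions have actually taken effect on the server, independent of what the Job Manager page is displaying.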
r. Ismo
Thank you for the quick take. I am confident that I searched for and deleted my jobs across all apps. My guess is that Splunk takes a while to act on or confirm the deletion, and until it does the jobs remain visible in the job activity manager. I have run into this sort of problem in multiple clustered on-prem Splunk implementations over the years, and I am frustrated on behalf of users by how non-deterministic the experience of recovering from queuing is, even when following the prescribed actions in the job activity monitor.