There is no information on any jobs that can be run within Splunk to automatically remove these stagnant searches. There should be an automation or task that can be set up or scheduled to remove them, so the messages stop appearing unless the search runs again within a specified time frame (which does not seem to exist). Help please, it is an annoying message.
There is an automatic way. There is a setting in savedsearches.conf: dispatch.ttl. Lowering it will clean up your search artifacts faster, but you have to do it via the conf file.
http://docs.splunk.com/Documentation/Splunk/5.0.5/Admin/savedsearchesconf
dispatch.ttl = <integer>[p]
* Indicates the time to live (in seconds) for the artifacts of the scheduled search, if no
actions are triggered.
* If the integer is followed by the letter 'p' Splunk interprets the ttl as a multiple of the
scheduled search's execution period (e.g. if the search is scheduled to run hourly and ttl is set to 2p
the ttl of the artifacts will be set to 2 hours).
* If an action is triggered Splunk changes the ttl to that action's ttl. If multiple actions are
triggered, Splunk applies the largest action ttl to the artifacts. To set the action's ttl, refer
to alert_actions.conf.spec.
* For more info on search's ttl please see limits.conf.spec [search] ttl
* Defaults to 2p (that is, 2 x the period of the scheduled search).
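As a minimal sketch, a stanza in $SPLUNK_HOME/etc/system/local/savedsearches.conf might look like the following. The stanza name "My Scheduled Search" is a placeholder for your saved search's actual name, and the values shown are examples, not recommendations:

```ini
# $SPLUNK_HOME/etc/system/local/savedsearches.conf
# "My Scheduled Search" is a hypothetical stanza name -- replace it with
# the name of your saved search as it appears in Splunk.
[My Scheduled Search]

# Keep artifacts for 1 hour (3600 seconds) if no alert actions fire:
dispatch.ttl = 3600

# Alternatively, keep artifacts for 2x the search's schedule period
# (the documented default), using the 'p' suffix:
# dispatch.ttl = 2p
```

Note that Splunk reads conf files at startup, so you may need to restart Splunk (or refresh the configuration) before the new ttl takes effect.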
Thank you, I will try that today.