This is really strange. I am seeing in the Job Manager that there are many jobs with a created-at date of 12/31/69, runtime "waiting", and status "running". I tried deleting those jobs, but they keep coming back. How do I resolve this issue? We don't have any real-time searches running; I changed all of them to scheduled searches.
Thanks in Advance
This should get you pretty close to the information you're looking for:
| rest splunk_server=local count=0 /services/search/jobs
| table dispatchState, custom.search, earliestTime, latestTime, isSavedSearch, published, request.earliest_time, eai:acl.owner
There are dozens of additional fields you can include as well.
Check if there is any "All Time" scheduled real-time search running/configured on that instance.
I checked for real-time searches and there are none. How can I check specifically for searches running over All Time?
Give this a try
| rest timeout=0 splunk_server=local /servicesNS/-/-/saved/searches search="is_scheduled=1 dispatch.earliest_time=0"
| table title eai:acl.owner eai:acl.app dispatch.*t_time search cron_schedule
Results where dispatch.latest_time is blank OR "now" are the searches running with an All Time timerange.
There is nothing ... no results. The jobs that show "Dispatched at 12/31/69" are all searches made from API calls. When these API calls are stopped, I don't see the jobs; but when the API calls start again, the jobs reappear.
I checked the jobs running with that date. I see a search like the one below, with _index_earliest and _index_latest ... what does this mean?
index=abc sourcetype=xyz earliest=-4h host=abbb _index_earliest=1581099754 _index_latest=1581099799 | sort 0 +_indextime | eval message=_raw | table _raw,_indextime,host
Splunk adds two default timestamp fields to each event: _time, which is the time the event occurred (set based on timestamp parsing rules), and _indextime, which is the time the indexers stored the data in Splunk. The timerange picker (and the earliest/latest filters in search) filters data based on _time, e.g. events happening in the last 60 minutes. The _index_earliest and _index_latest timerange modifiers filter data based on the _indextime value instead. Generally the timerange for searches using _index* modifiers is kept quite wide so that all possible events fall within the range; the timerange in those searches can even be "All Time".
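To make the distinction concrete, here is a toy illustration (plain Python, not Splunk code, with made-up event values): earliest/latest would compare against _time, while _index_earliest/_index_latest compare against _indextime, so the two filters can match different events.

```python
# Two fake events, each carrying both timestamps (epoch seconds):
events = [
    {"_time": 1581099000, "_indextime": 1581099760},  # occurred earlier, but indexed inside the window
    {"_time": 1581099770, "_indextime": 1581099900},  # occurred inside the window, but indexed after it
]

# The _index_earliest/_index_latest window from the search in question:
index_earliest, index_latest = 1581099754, 1581099799

# _index_earliest/_index_latest filter on _indextime, not _time:
matched = [e for e in events if index_earliest <= e["_indextime"] <= index_latest]
# Only the first event matches, even though its _time is outside the window.
```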
So are these searches causing the issue I am seeing, and what would I need to do to get them back to a normal state?
So these jobs stay around forever after you invoke your REST API search once?
I think you can specify the expiration of a search job (ttl, time to live) when making the REST API call. If possible, ask the REST API owner to include that.
https://answers.splunk.com/answers/37452/specifying-a-ttl-when-creating-a-job-via-the-api.html
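As a rough sketch of what that looks like, the job-creation POST to splunkd can carry a timeout field (the ttl) alongside the search string. The base URL, search string, and ttl value below are placeholders; authentication is omitted, and the exact parameter name should be confirmed against the Splunk REST API docs for /services/search/jobs.

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_job_request(base_url, search, ttl_seconds):
    """Build the POST request that creates a search job with an explicit ttl.

    'timeout' is the splunkd job-creation parameter controlling how long the
    job artifact is kept after the search stops running.
    """
    body = urlencode({"search": search, "timeout": ttl_seconds}).encode()
    return Request(f"{base_url}/services/search/jobs", data=body, method="POST")

# Example (placeholder host and search):
req = build_job_request("https://splunk.example.com:8089",
                        "search index=abc sourcetype=xyz earliest=-4h", 600)
# Sending req (with auth headers added) would return a sid for the new job.
```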
I checked the ttl; it is 600 seconds, but I still see these jobs in the same state.
Those are the date/time ranges of the search, as Unix timestamps:
1581099754 = Friday, February 7, 2020 12:22:34 PM (CST)
1581099799 = Friday, February 7, 2020 12:23:19 PM (CST)
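Those conversions take only a few lines of Python, and the same conversion also explains the odd 12/31/69 date from the original question: an unset (zero) timestamp is Unix epoch 0, which renders as the evening of December 31, 1969 in any US timezone.

```python
from datetime import datetime, timezone, timedelta

cst = timezone(timedelta(hours=-6))  # CST = UTC-6

# The _index_earliest/_index_latest values from the search above:
start = datetime.fromtimestamp(1581099754, tz=cst)  # 2020-02-07 12:22:34 CST
end = datetime.fromtimestamp(1581099799, tz=cst)    # 2020-02-07 12:23:19 CST

# Epoch 0 explains the strange job date: west of UTC, a zero/unset
# timestamp falls on December 31, 1969.
epoch0 = datetime.fromtimestamp(0, tz=cst)          # 1969-12-31 18:00:00 CST
```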
Thank you. What would be the solution to implement so that these jobs stop appearing with that weird date, runtime "waiting", and status "running"?