I have several dashboards that are based on scheduled reports (almost all set to run at 3 AM daily with a two-hour time window). Our server regularly runs out of physical memory. The dashboards based on these reports don't seem to pick up the results of the scheduled reports, even if I manually change the schedule to "hourly with a five-minute window" and wait for the report to run.
What can I do to further troubleshoot this issue?
Following up on this further... one person suggested that I need to go into the job and set the lifetime to 7 days:
From the docs: "You change the lifetime setting by selecting Job, Edit Job Settings and specifying the lifetime for the job." and "There are two lifetime settings, 10 minutes and 7 days. The default lifetime is 10 minutes. The lifetime starts from the moment the job is run. "
If you schedule a report for 2 AM and the default lifetime is 10 minutes, then isn't scheduling the job pointless? Do I seriously have to edit each job one by one and set its lifetime to 7 days (and why aren't other lifetime values supported)? Or is this "lifetime" not relevant to a scheduled report?
That isn't my understanding of how it is supposed to work. If you have a scheduled search, and the search has no actions, the search artifacts will stick around for 2x the search period. For example, for an hourly search, the artifacts are deleted after 2 hours. For a daily search, the artifacts are deleted after 2 days.
Check out this link, under the setting dispatch.ttl:
I'm not sure the "2p" configuration value (twice the schedule period) has ever worked; at least, I have never seen it work for me. The search artifacts take the default TTL of 24h (or whatever you have changed it to).
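If you want artifacts to outlive the default TTL, it can also be set per report in savedsearches.conf rather than through the UI. A minimal sketch (the stanza name "My Daily Report" is hypothetical; substitute your report's name):

```
# savedsearches.conf -- stanza name is hypothetical
[My Daily Report]
# Keep artifacts for twice the report's schedule period ("p" = period).
# A fixed value such as 24h can be used instead of the multiplier form.
dispatch.ttl = 2p
```

Changes to .conf files on disk generally require a reload or restart to take effect.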
Perhaps your scheduled reports have stopped triggering.
Go to Settings ---> Searches, reports, and alerts, open your scheduled reports, and verify the following alert options:
- Trigger condition: for example, "always"
- Throttling: uncheck "After triggering the alert, don't trigger it again for"
- Expiration: specify how long Splunk keeps a record of each triggered alert
Then re-test.
Start by checking the status of your scheduled searches. Are they actually running successfully? You could be hitting a scheduled search or quota limit.

index=_internal sourcetype=scheduler user=your_username | timechart count by status

index=_internal sourcetype=scheduler user=your_username | chart count over savedsearch_name by status
What version of Splunk are you running? There was a bug prior to 6.1 related to a similar problem. See this previous Splunk answer.
Thank you, this was very helpful. I can see that the jobs were "skipped", apparently because the "maxsearches limit" was reached. Now I just have to figure out what to do about this. Thank you.
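To see exactly which reports the scheduler is skipping and why, a search along these lines against the scheduler logs should help (field names assume the standard _internal scheduler events):

```
index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
```

If the reason points at scheduler concurrency, the relevant knob lives in limits.conf; a sketch (the value is illustrative, not a recommendation):

```
# limits.conf
[scheduler]
# Percentage of total concurrent searches the scheduler may use
max_searches_perc = 60
```

Before raising limits, it may be better to stagger the report schedules so fewer of them fire at 3 AM simultaneously, given the memory pressure described above.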
You might want to take a look at
Also, since you are running 6.3, you are using the new scheduler. So take a look at https://conf.splunk.com/session/2015/conf2015_PLucas_Splunk_SplunkEntWhatsNew_MakingTheMostOf.pdf.