Apparently Splunk is behaving correctly, but sometimes historical searches used to reduce search time do not work. In splunkd.log I only see several similar messages like this one:
ERROR DistributedBundleReplicationManager - HTTP response code 400 (HTTP/1.1 400 Bad Request). Error applying delta=/opt/splunk/var/run/searchpeers/SV19APSP027-1520781662-1520781908.delta, searchHead=SV19APSP027, prevTime=1520781662, prevChksum=13407166704320477873, curTime=1520781908, curChksum=14856987335408760282: Latest bundle has conflicting timestamp: searchHead=SV19APSP027, actual=1520
I also see other messages like the following, although they don't seem to be related:
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/search/bin/sendemail.py "results_link=http://SV19APSP027:8000/app/search/search?q=%7Cloadjob%20rt_scheduler__acalasanz__search__RMD58cdfb2..." "ssname=SPK0003-US-Cambio de Contraseña" "graceful=True" "trigger_time=1520871180" results_file="/opt/splunk/var/run/splunk/dispatch/rt_scheduler_acalasanzsearch_RMD58cdfb26d24a1b7d9_at_1520870880_29688.0/per_result_alert/tmp_0.csv.gz"': ERROR:root:[Errno -3] Temporary failure in name resolution while sending mail to: email@example.com, firstname.lastname@example.org, email@example.com
Splunk version: 7.0.2
Some additional information: at the moment of the problem, the following message appears in Splunk's web console:
Dispatch command: The number of search artifacts in the dispatch directory is higher than recommended (count=32536, warning threshold=5000) and could have an impact on search performance. Remove excess search artifacts using "splunk clean-dispatch" CLI command, and review artifact retention policies in limits.conf and savedsearches.conf. You can also raise this warning threshold in limits.conf / dispatch_dir_warning_size.
We followed these instructions, but the same issue persists.
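In case it helps, this is roughly what we ran to clean the dispatch directory and raise the warning threshold (the destination directory, time argument, and threshold value below are examples; adjust them for your environment):

```shell
# Destination directory for the moved artifacts; clean-dispatch moves
# (not deletes) old search artifacts out of $SPLUNK_HOME/var/run/splunk/dispatch.
mkdir -p /tmp/old-dispatch-jobs

# Move dispatch artifacts older than 7 days to the destination directory.
/opt/splunk/bin/splunk clean-dispatch /tmp/old-dispatch-jobs/ -7d@d

# Optionally raise the warning threshold in
# $SPLUNK_HOME/etc/system/local/limits.conf (example value):
#
#   [search]
#   dispatch_dir_warning_size = 50000
#
# A restart of splunkd is needed for the limits.conf change to take effect.
```

After the move, the directory under /tmp can be deleted once you have confirmed nothing needed is in it.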