I've set up an EC2 cluster with 19 indexers and one search head. I've loaded a large amount of data onto the indexers, and I'm now backfilling a summary index. I executed the following command:
nohup ./splunk cmd python fill_summary_index.py -app MyApp -name "summary_data_daily" -et -6w@d -lt @d -j 2 -auth admin:mycredentials -owner admin &
... about 4 hours ago. The job report shows the searches at status Running (100%), with runtimes of around 1 hour 40 minutes. None of the searches has completed, and no searches have been started for any of the following days.
I looked through the _internal logs and didn't see anything that indicates why this is happening.
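For reference, the state of the dispatched jobs can also be inspected over the REST API on the management port, not just in the job report. A minimal sketch of listing each job's dispatch state and progress; the host, credentials, and use of the requests library here are assumptions:

import requests

# Assumptions: management port 8089 on localhost, placeholder credentials,
# and a self-signed certificate (hence verify=False).
BASE_URL = "https://localhost:8089"
AUTH = ("admin", "mycredentials")

# List the current search jobs with their dispatch states and progress.
resp = requests.get(
    f"{BASE_URL}/services/search/jobs",
    params={"output_mode": "json"},
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

for entry in resp.json()["entry"]:
    content = entry["content"]
    print(content.get("sid"), content.get("dispatchState"), content.get("doneProgress"))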
Any ideas?
After waiting overnight, it turns out the answer is simply that 100% is not necessarily 100% for big jobs. The jobs kept processing for another two and a half hours after they showed Running (100%), and then finally completed. The job size here was 782 MB, with ~91 million events, so I'd guess that's just a facet of big jobs taking a while to wrap up after the progress counter hits 100%.
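In other words, doneProgress can sit at 1.0 while the job is still working; the isDone flag (or a dispatchState of DONE) is the reliable completion signal. A minimal polling sketch, again with placeholder host, credentials, and search ID, and assuming the requests library:

import time
import requests

BASE_URL = "https://localhost:8089"
AUTH = ("admin", "mycredentials")
SID = "1234567890.123"  # placeholder: the SID of one backfill search

while True:
    resp = requests.get(
        f"{BASE_URL}/services/search/jobs/{SID}",
        params={"output_mode": "json"},
        auth=AUTH,
        verify=False,  # self-signed certificate on the management port
    )
    resp.raise_for_status()
    content = resp.json()["entry"][0]["content"]

    # doneProgress reaches 1.0 ("100%") before the job has actually
    # finished, so only stop polling once isDone flips to true.
    print(content["dispatchState"], content["doneProgress"], content["isDone"])
    if content["isDone"]:
        break
    time.sleep(60)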