
Summary Index Backfill Jobs Stuck

Splunk Employee

I've set up an EC2 cluster with 19 indexers and 1 Search Head. I've loaded a large amount of data on the indexers, and I'm now backfilling a summary index. I executed the following command:

nohup ./splunk cmd python fill_summary_index.py -app MyApp -name "summary_data_daily" -et -6w@d -lt @d -j 2 -auth admin:mycredentials -owner admin &

... about 4 hours ago. The job report shows the searches with status Running (100%) and runtimes of around 1 hour 40 minutes. The search hasn't completed, and searches for any of the following days haven't started.

I looked through the _internal logs and didn't see anything that explains why this is happening.

Any ideas?
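For reference, the underlying dispatch state of these jobs can also be inspected from the CLI via the search jobs REST endpoint. This is only a rough sketch: the field names (sid, label, dispatchState, doneProgress, isDone, runDuration) come from /services/search/jobs, and the label filter is an assumption about how the backfill searches are named in this setup:

./splunk search '| rest /services/search/jobs splunk_server=local | search label="summary_data_daily*" | table sid label dispatchState doneProgress isDone runDuration' -auth admin:mycredentials

A job that the UI reports as Running (100%) may still show isDone=0 here until it has actually finished finalizing.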

1 Solution

Splunk Employee

After waiting overnight, it turns out the answer is simply that Running (100%) does not necessarily mean a large job is finished. The jobs kept processing for another two and a half hours after they showed Running (100%), and then completed normally. The job size here was 782 MB with ~91 million events, so the long tail after 100% appears to just be a consequence of very large jobs taking a while to finalize.
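A related note for anyone who kills and restarts a backfill that looks hung: fill_summary_index.py can be told to skip time ranges that already have summary data, so a re-run doesn't double-count events. A sketch of the same command with that option added (the -dedup flag is an assumption about the script version in use; check its --help output first):

nohup ./splunk cmd python fill_summary_index.py -app MyApp -name "summary_data_daily" -et -6w@d -lt @d -j 2 -dedup true -auth admin:mycredentials -owner admin &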
