I SSHed into our master node and ran the backfill script:
sudo -s
cd /opt/splunk/bin
./splunk cmd python fill_summary_index.py -app search -name "summary - my daily report" -et -29d@d -lt -11d@d -nolocal true -dedup true -auth 'admin:NOTTHEPASSWORD'
The output looks like:
*** For saved search 'summary - my daily report' ***
*** Spawning a total of 17 searches (max 1 concurrent) ***
Executing summary - my daily report for UTC = 1448928000 (Tue Dec 1 00:00:00 2015)
waiting for job sid = 'admin__admin__search__RMD5a5399964bbfdaabb_at_1448928000_3638' ... Finished
Executing summary - my daily report for UTC = 1449014400 (Wed Dec 2 00:00:00 2015)
waiting for job sid = 'admin__admin__search__RMD5a5399964bbfdaabb_at_1449014400_3639' ... Finished
[... all the days ...]
Executing summary - my daily report for UTC = 1450310400 (Thu Dec 17 00:00:00 2015)
waiting for job sid = 'admin__admin__search__RMD5a5399964bbfdaabb_at_1450310400_3655' ... Finished
But nothing appears in the actual index. When I run this search in Splunk, I still see the gap:
index="myreport_summary" | overlap
Found gap in saved search 'summary - my daily report' between search ids: '1448958811.359' and '1450425606.663' from 'Tue Dec 1 00:00:00 2015' to 'Thu Dec 17 00:00:00 2015'
Splunk Version 6.2.2, Build 255606
Any ideas what would be causing this? Do I need to run this on the indexers independently? Do I need to run it with different parameters? Any help is appreciated. Thanks!
This should be run on a search head where the app's knowledge objects and the saved search exist.
It needs to run "as if" a user were running it, just as it would on your normal schedule.
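For reference, a sketch of the same backfill run from the search head rather than the cluster master (the install path, app name, saved search name, and credentials here are simply the ones from the question, so adjust them to your environment):

    # run on the search head that owns the saved search
    cd /opt/splunk/bin
    ./splunk cmd python fill_summary_index.py -app search -name "summary - my daily report" -et -29d@d -lt -11d@d -nolocal true -dedup true -auth 'admin:NOTTHEPASSWORD'

Running it on a cluster master (or any node without the app and saved search) will spawn searches that find no saved-search definition in context, which would explain jobs finishing without writing summary events.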
What do you mean by "master node"? Are you running it on an indexer?
What is your Splunk architecture? i.e. SHC, indexer cluster, all-in-one, etc.