Knowledge Management

Why is fill_summary_index not actually backfilling my summary index?

adamb0mb
Explorer

I SSHed into our master node and ran the backfill script:

sudo -s
cd /opt/splunk/bin
./splunk cmd python fill_summary_index.py -app search -name "summary - my daily report" -et -29d@d -lt -11d@d -nolocal true -dedup true -auth 'admin:NOTTHEPASSWORD'

The output looks like:

*** For saved search 'summary - my daily report' ***

*** Spawning a total of 17 searches (max 1 concurrent) ***

Executing summary - my daily report for UTC = 1448928000 (Tue Dec  1 00:00:00 2015)
  waiting for job sid = 'admin__admin__search__RMD5a5399964bbfdaabb_at_1448928000_3638' 
  ... Finished

Executing summary - my daily report for UTC = 1449014400 (Wed Dec  2 00:00:00 2015)
  waiting for job sid = 'admin__admin__search__RMD5a5399964bbfdaabb_at_1449014400_3639' 
  ... Finished

[... All the days... ]

Executing summary - my daily report for UTC = 1450310400 (Thu Dec 17 00:00:00 2015)
  waiting for job sid = 'admin__admin__search__RMD5a5399964bbfdaabb_at_1450310400_3655' 
  ... Finished

But nothing happens in the actual index. When I run this search in Splunk:

index="myreport_summary" | overlap

It yields:

Found gap in saved search 'summary - my daily report' between search ids: '1448958811.359' and '1450425606.663' from 'Tue Dec 1 00:00:00 2015' to 'Thu Dec 17 00:00:00 2015'
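
A plain per-day count over the same index and time range confirms there is nothing there for those days:

index="myreport_summary" earliest=-29d@d latest=-11d@d | timechart span=1d count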

Some details:

Splunk Version 6.2.2
Splunk Build 255606

Any ideas what might be causing this? Do I need to run this on the indexers independently? Do I need to run it with different parameters? Any help is appreciated. Thanks!

1 Solution

Lucas_K
Motivator

This should be run on a search head where the app's knowledge objects and saved searches exist.

It needs to run "as if" a user were running it, as per your normal schedule.

What do you mean when you say "master node"? Are you running it on an indexer?

What is your Splunk architecture? i.e. SHC, indexer cluster, all-in-one, etc.
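
Assuming the same /opt/splunk install path exists on the search head, the exact command from the question should work as-is once it is run there:

cd /opt/splunk/bin
./splunk cmd python fill_summary_index.py -app search -name "summary - my daily report" -et -29d@d -lt -11d@d -nolocal true -dedup true -auth 'admin:NOTTHEPASSWORD'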

adamb0mb
Explorer

I ran this on the Search Head, and that solved the problem. Thanks!
