
Regenerating the bucket manifest

oofaustoo
Explorer

Ever since upgrading from Splunk 4.2 to 4.3.5, I have been getting the following in my splunkd.log:

01-02-2013 14:26:43.994 -0500 INFO databasePartitionPolicy - Regenerating the bucket manifest (index=ss)...
01-02-2013 14:27:05.091 -0500 INFO databasePartitionPolicy - Completed regenerating the bucket manifest.
01-02-2013 14:27:09.148 -0500 INFO databasePartitionPolicy - Regenerating the bucket manifest (index=ss)...
01-02-2013 14:27:30.527 -0500 INFO databasePartitionPolicy - Completed regenerating the bucket manifest.
01-02-2013 14:27:34.739 -0500 INFO databasePartitionPolicy - Regenerating the bucket manifest (index=ss)...
01-02-2013 14:27:56.194 -0500 INFO databasePartitionPolicy - Completed regenerating the bucket manifest.

index=ss is my highest-volume index (roughly 30 GB incoming per day). Since the upgrade, this "manifest regeneration" kicks off every 30 seconds and takes 20+ seconds to complete.

My index latency has been abysmal since the upgrade (roughly 2 hours behind during peak hours). Can I assume that indexing is paused while this "manifest regeneration" does its thing, and that this is what is causing indexing to lag?
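
In case it helps anyone compare, here is a rough sketch of a search against the standard _internal index to see how often the regeneration fires and how long each pass takes (the startswith/endswith strings are taken straight from the log lines above; adjust to taste):

index=_internal sourcetype=splunkd "bucket manifest"
| transaction startswith="Regenerating the bucket manifest" endswith="Completed regenerating the bucket manifest" maxspan=5m
| stats count avg(duration) max(duration)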

Anyone run into this? If so, any fix?

Thanks!

1 Solution

oofaustoo
Explorer

Setting serviceMetaPeriod in /opt/splunk/etc/system/local/indexes.conf solved the problem (thanks to Splunk Support for the solution!). The default serviceMetaPeriod is 25 seconds. Since the regeneration for one of my high-volume indexes was taking in excess of 22 seconds, incoming data was only being serviced by the indexer pipeline for around 3 seconds out of every 25. That was causing all of my indexqueue congestion.

Example (from local indexes.conf):

[default]
serviceMetaPeriod = 150
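
After raising the value, you can sanity-check that the indexqueue is no longer backing up with something like the search below (a sketch against the standard metrics.log queue events in _internal; the group/name and size fields are the usual queue metrics fields):

index=_internal source=*metrics.log* group=queue name=indexqueue
| timechart span=5m avg(current_size_kb) AS avg_size_kb max(current_size_kb) AS max_size_kb

As I understand it, serviceMetaPeriod controls how often index metadata is serviced (synced to disk), so raising it trades less frequent metadata housekeeping for more indexer pipeline time. 150 seconds worked for my data volume; your mileage may vary.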
