
Regenerating the bucket manifest

oofaustoo
Explorer

Ever since upgrading from Splunk 4.2 to 4.3.5, I have been getting the following in my splunkd.log:

01-02-2013 14:26:43.994 -0500 INFO databasePartitionPolicy - Regenerating the bucket manifest (index=ss)...
01-02-2013 14:27:05.091 -0500 INFO databasePartitionPolicy - Completed regenerating the bucket manifest.
01-02-2013 14:27:09.148 -0500 INFO databasePartitionPolicy - Regenerating the bucket manifest (index=ss)...
01-02-2013 14:27:30.527 -0500 INFO databasePartitionPolicy - Completed regenerating the bucket manifest.
01-02-2013 14:27:34.739 -0500 INFO databasePartitionPolicy - Regenerating the bucket manifest (index=ss)...
01-02-2013 14:27:56.194 -0500 INFO databasePartitionPolicy - Completed regenerating the bucket manifest.

index=ss is my highest-volume index (roughly 30 GB incoming per day). Ever since upgrading to 4.3.5 from 4.2, this "manifest regeneration" has been kicking off every 30 seconds and taking 20+ seconds to complete.
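A rough way to count how often the regeneration fires (a sketch only; it assumes splunkd.log is still being indexed into _internal, which it is by default):

index=_internal sourcetype=splunkd component=databasePartitionPolicy "Regenerating the bucket manifest"
| timechart span=5m count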

My index latency has been abysmal since the upgrade (roughly 2 hours behind during peak hours). Can I assume that indexing is paused while this "manifest regeneration" does its thing, and that this is what's causing indexing to lag?
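For reference, a rough way to quantify the lag per event (a sketch only; the 1-hour window and the output field names are just illustrative choices):

index=ss earliest=-1h
| eval latency = _indextime - _time
| stats avg(latency) AS avg_latency_secs max(latency) AS max_latency_secs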

Anyone run into this? If so, any fix?

Thanks!

Solution

oofaustoo
Explorer

Setting serviceMetaPeriod in /opt/splunk/etc/system/local/indexes.conf solved the problem (thanks to Splunk Support for the solution!). The default serviceMetaPeriod is 25 seconds. Since the regeneration for one of my high-volume indexes was taking in excess of 22 seconds, incoming data was only being serviced by the indexer pipeline for around 3 seconds out of every 25. That was causing all my index queue congestion.

Example (from local indexes.conf):

[default]
serviceMetaPeriod = 150
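To confirm the setting is actually being picked up after the edit (a hedged sketch; paths assume a default /opt/splunk install, and indexes.conf changes generally require a restart to take effect):

/opt/splunk/bin/splunk btool indexes list default --debug | grep serviceMetaPeriod
/opt/splunk/bin/splunk restart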


