
Regenerating the bucket manifest

oofaustoo
Explorer

Ever since upgrading from Splunk 4.2 to 4.3.5, I have been getting the following in my splunkd.log:

01-02-2013 14:26:43.994 -0500 INFO databasePartitionPolicy - Regenerating the bucket manifest (index=ss)...
01-02-2013 14:27:05.091 -0500 INFO databasePartitionPolicy - Completed regenerating the bucket manifest.
01-02-2013 14:27:09.148 -0500 INFO databasePartitionPolicy - Regenerating the bucket manifest (index=ss)...
01-02-2013 14:27:30.527 -0500 INFO databasePartitionPolicy - Completed regenerating the bucket manifest.
01-02-2013 14:27:34.739 -0500 INFO databasePartitionPolicy - Regenerating the bucket manifest (index=ss)...
01-02-2013 14:27:56.194 -0500 INFO databasePartitionPolicy - Completed regenerating the bucket manifest.

index=ss is my highest-volume index (roughly 30 GB incoming per day). Since the upgrade, this "manifest regeneration" has been kicking off roughly every 25 seconds and taking 20+ seconds to complete.

My index latency has been abysmal since the upgrade (roughly 2 hours behind during peak hours). Can I assume that indexing is paused while this "manifest regeneration" does its thing, and that this is what is causing indexing to lag?
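
For what it's worth, here is the kind of search I've been using to quantify the lag on that index (just a generic sanity check comparing event time to index time; the field names are illustrative):

index=ss earliest=-1h
| eval lag_sec = _indextime - _time
| stats avg(lag_sec) AS avg_lag_sec max(lag_sec) AS max_lag_sec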

Anyone run into this? If so, any fix?

Thanks!
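
Edit: in case it helps anyone else troubleshooting, I've also been watching indexqueue fill levels in metrics.log with a generic search along these lines (again, purely illustrative):

index=_internal source=*metrics.log group=queue name=indexqueue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 2)
| timechart avg(fill_pct) AS avg_indexqueue_fill_pct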

1 Solution

oofaustoo
Explorer

Setting serviceMetaPeriod in /opt/splunk/etc/system/local/indexes.conf solved the problem (thanks to Splunk Support for the solution!). The default serviceMetaPeriod is 25 seconds. Since the regeneration for one of my high-volume indexes was taking in excess of 22 seconds, incoming data was only being serviced by the indexer pipeline for around 3 seconds out of every 25. That was causing all of my index queue congestion.

Example (from local indexes.conf):

[default]
serviceMetaPeriod = 150
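
If you'd rather not change the default for every index, my understanding is that the same setting can also be applied to just the busy index's own stanza (using [ss] here only because that's the index named in my logs):

[ss]
serviceMetaPeriod = 150

You can then double-check the effective value with btool, e.g.:

/opt/splunk/bin/splunk btool indexes list ss --debug | grep serviceMetaPeriod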

