Knowledge Management

How frequently should bundles be pushed out?

a212830
Champion

Hi,

I noticed that our bundles are generating warnings, and then I realized that they are being pushed out every minute, which seems excessive to me. Is that typical?

04-10-2018 20:03:15.639 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=22005, tar_elapsed_ms=2285, bundle_file_size=193570KB, replication_id=1523390573, replication_reason="async replication allowed"
04-10-2018 20:04:47.973 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=32325, tar_elapsed_ms=2675, bundle_file_size=193570KB, replication_id=1523390655, replication_reason="async replication allowed"
04-10-2018 20:06:10.042 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=22062, tar_elapsed_ms=1991, bundle_file_size=193570KB, replication_id=1523390747, replication_reason="async replication allowed"
04-10-2018 20:07:32.498 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=22449, tar_elapsed_ms=2353, bundle_file_size=193570KB, replication_id=1523390830, replication_reason="async replication allowed"
04-10-2018 20:09:02.359 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=29853, tar_elapsed_ms=2242, bundle_file_size=193570KB, replication_id=1523390912, replication_reason="async replication allowed"
04-10-2018 20:10:30.838 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=28474, tar_elapsed_ms=2620, bundle_file_size=193570KB, replication_id=1523391002, replication_reason="async replication allowed"
04-10-2018 20:11:58.072 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=27226, tar_elapsed_ms=2313, bundle_file_size=193570KB, replication_id=1523391090, replication_reason="async replication allowed"
04-10-2018 20:13:22.664 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=24587, tar_elapsed_ms=2430, bundle_file_size=193570KB, replication_id=1523391178, replication_reason="async replication allowed"
04-10-2018 20:14:45.469 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=22798, tar_elapsed_ms=2219, bundle_file_size=193570KB, replication_id=1523391262, replication_reason="async replication allowed"
04-10-2018 20:16:08.228 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=22752, tar_elapsed_ms=2070, bundle_file_size=193570KB, replication_id=1523391345, replication_reason="async replication allowed"
04-10-2018 20:17:30.129 +0000 WARN DistributedBundleReplicationManager - Asynchronous bundle replication to 50 peer(s) succeeded; however it took too long (longer than 10 seconds): elapsed_ms=21896, tar_elapsed_ms=2189, bundle_file_size=193570KB, replication_id=1523391428, replication_reason="async replication allowed"

1 Solution

masonmorales
Influencer

It's normal, although since your bundle size is almost 200 MB, you may want to consider configuring a value for:

[replicationSettings]
excludeReplicatedLookupSize = <int>
* Any lookup file larger than this value (in MB) will be excluded from the knowledge bundle that the search head replicates to its search peers.
* When this value is set to 0, this feature is disabled.
* Defaults to 0

See: http://docs.splunk.com/Documentation/Splunk/latest/Admin/Distsearchconf

If you tell Splunk to automatically exclude large lookup files from bundle replication, it will complete more quickly and you won't get warnings about it taking too long.
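For example, a minimal distsearch.conf sketch on the search head; the 100 MB threshold here is illustrative only, so tune it for your environment:

```ini
# $SPLUNK_HOME/etc/system/local/distsearch.conf
[replicationSettings]
# Exclude any lookup file larger than 100 MB from the knowledge bundle
excludeReplicatedLookupSize = 100
```

Keep in mind that excluded lookup files will no longer be present on the search peers, so any distributed search that depends on them will need another way to access that lookup.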


a212830
Champion

Getting pushed out every minute is normal? That just seems like huge overkill to me...


sloshburch
Splunk Employee
Splunk Employee

I think the push is correlated with a search that needs the bundles. I think that's what @ssievert was getting at.


s2_splunk
Splunk Employee
Splunk Employee

Do you have any scheduled searches running every minute that would affect the contents of the bundle, like lookup file updates?
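One way to check for this is to query the scheduler's internal logs for searches firing every minute; a sketch, assuming default internal logging is available (field names per Splunk's standard scheduler log):

```
index=_internal sourcetype=scheduler earliest=-60m
| timechart span=1m count by savedsearch_name
```

A saved search that shows one execution per one-minute bucket is a likely candidate, especially if it writes to a lookup (e.g. via outputlookup), since each lookup update can trigger a fresh bundle push.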
