Splunk Health Check Overview: Status health red indexer

Explorer

Hello Splunkers

We have a cluster with 1 search head, 1 master, and 2 peer nodes.

On both indexers, the GUI shows a red status in the bucket section with the following message:

Root Cause(s):
The percentage of small buckets created (75) over the last hour is very high and exceeded the red thresholds (50) for index=sti, and possibly more indexes, on this indexer
Last 50 related messages:

06-05-2019 09:28:35.954 -0400 INFO HotBucketRoller - finished moving hot to warm bid=_internal~124~39B89B4A-2FD6-4223-B314-71FA16594755 idx=_internal from=hot_v1_124 to=db_1558206662_1557774770_124_39B89B4A-2FD6-4223-B314-71FA16594755 size=933888 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
06-05-2019 09:18:15.649 -0400 INFO HotBucketRoller - finished moving hot to warm bid=_internal~123~39B89B4A-2FD6-4223-B314-71FA16594755 idx=_internal from=hot_v1_123 to=db_1557774740_1557342761_123_39B89B4A-2FD6-4223-B314-71FA16594755 size=1093632 caller=lru maxHotBuckets=3, count=18 hot buckets,evicting_count=15 LRU hots
06-05-2019 09:18:15.618 -0400 INFO HotBucketRoller - finished moving hot to warm bid=_internal~122~39B89B4A-2FD6-4223-B314-71FA16594755 idx=_internal from=hot_v1_122 to=db_1557342611_1556910653_122_39B89B4A-2FD6-4223-B314-71FA16594755 size=1077248 caller=lru maxHotBuckets=3, count=18 hot buckets,evicting_count=15 LRU hots
06-05-2019 09:18:15.571 -0400 INFO HotBucketRoller - finished moving hot to warm bid=_internal~114~39B89B4A-2FD6-4223-B314-71FA16594755 idx=_internal from=hot_v1_114 to=db_1556910593_1556478620_114_39B89B4A-2FD6-4223-B314-71FA16594755 size=1179648 caller=lru maxHotBuckets=3, count=18 hot buckets,evicting_count=15 LRU hots

We have validated that there are no events with future timestamps.

Does this affect the indexing process?

Motivator

Your maxHotBuckets setting is too low for the amount of data you are ingesting. I can see from the logs you provided that the value you have set is 3. You should set this to a higher value in order to keep up with the incoming data. What's happening is that your cluster is churning on I/O as it indexes new data to hot, then almost immediately rolls it to warm.
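You can also confirm from the size= field in those HotBucketRoller lines that the buckets being rolled are tiny. A quick sketch, using the byte counts pasted from your question:

```shell
#!/bin/sh
# Convert the size= field (bytes) from the HotBucketRoller log lines in
# the question into KB. Buckets this far below the default 750MB
# maxDataSize ("auto") are exactly what the health check counts as small.
logs='size=933888
size=1093632
size=1077248
size=1179648'
echo "$logs" | grep -o 'size=[0-9]*' | cut -d= -f2 |
  awk '{ print int($1 / 1024) " KB" }'
# prints:
# 912 KB
# 1068 KB
# 1052 KB
# 1152 KB
```

So each hot bucket is being frozen off at roughly 1 MB, which is why the small-bucket percentage trips the red threshold.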

You can adjust the setting at the index level within indexes.conf:

[index_name_here]
maxHotBuckets = integer_value

I would try setting this to 50, push out your updated indexes.conf via the master, then re-evaluate performance and log messages.

Explorer

One question: should maxHotBuckets be raised for all indexes, or just one in particular?

Motivator

I would start with just a single index, then verify. If that does not resolve it, then continue to increase the setting on additional indexes, one by one.

The hot bucket count only applies to new data that is actively being indexed. So you could start with your largest, most active index.
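If you're not sure which index that is, per-index throughput is recorded in metrics.log. A rough sketch of a CLI search to rank indexes by indexed volume (the time range and auth credentials here are placeholders; adjust for your environment):

```shell
# Run from $SPLUNK_HOME/bin on a search head.
splunk search 'index=_internal source=*metrics.log* group=per_index_thruput earliest=-24h
  | stats sum(kb) AS kb BY series
  | sort - kb' -auth admin:changeme
```

The series field in the per_index_thruput metrics corresponds to the index name, so the top rows are your busiest indexes.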

Explorer

doesn't work

[sti]
repFactor=auto
homePath=/u01/splunk/var/lib/splunk/sti/db/
coldPath=/u01/splunk/var/lib/splunk/sti/colddb/
thawedPath=/u01/splunk/var/lib/splunk/sti/thaweddb/
maxHotBuckets=50

Motivator

Can you expand on what is not working? Did you cycle the indexers?

Explorer

I cycled the indexers with apply cluster-bundle. Please correct me if I'm wrong.

Motivator

Your conf and process are correct. How are you validating that it "doesn't work"?

Can you provide the exact command that you used for "apply cluster-bundle"?
Everything that you've provided so far looks correct, but I'm guessing that a rolling restart of your indexers was required and may have been missed.

Depending on the structure of your command, Splunk may or may not inform you if a rolling restart is required.
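For reference, this is the form I'd expect, run from the master (the --answer-yes flag skips the confirmation prompt):

```shell
# On the cluster master, from $SPLUNK_HOME/bin:
./splunk apply cluster-bundle --answer-yes

# Check whether the bundle was pushed and whether peers picked it up:
./splunk show cluster-bundle-status

# If the peers still haven't taken the new setting, force a rolling restart:
./splunk rolling-restart cluster-peers
```

The output of show cluster-bundle-status will tell you whether the active bundle checksum on the peers matches the one on the master.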
