Monitoring Splunk

ERROR Archiver - Unable to write due to: No space left on device

michaelbang1
New Member

I am trying to troubleshoot an issue with a clustered search head restarting itself, and I came across a puzzling error message in the _internal logs. There are about 50 messages of this type around the time the Splunk service went down on the search head:

-400 ERROR Archiver - >>> Unable to write due to: No space left on device

I have checked the disk space on the search head and everything is well within limits. I have also checked permissions on the /opt/splunk folder to confirm the non-root Splunk user has read/write/execute access.

Does anyone have any idea what this error message means and if so, ideas on how to fix this issue?
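Worth noting for anyone hitting this: ENOSPC ("No space left on device") can also be raised when the filesystem is out of free inodes, even though `df -h` shows plenty of free blocks. A minimal check, assuming a default /opt/splunk install (adjust the paths for your site):

```shell
# ENOSPC can mean the filesystem is out of free blocks OR out of free
# inodes -- "df -h" alone only shows the former.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}   # adjust if installed elsewhere
CHECK_PATH=$SPLUNK_HOME
[ -d "$CHECK_PATH" ] || CHECK_PATH=/      # fall back so the commands still run

df -h "$CHECK_PATH"   # free blocks
df -i "$CHECK_PATH"   # free inodes

# Count the Archiver errors around the restart (default log location)
LOG=$SPLUNK_HOME/var/log/splunk/splunkd.log
[ -f "$LOG" ] && grep -c "No space left on device" "$LOG"
```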


codebuilder
SplunkTrust

It's likely you've exceeded maxTotalDataSizeMB on one or more of your indexes.
Unless you have explicitly set a different value in indexes.conf, the default is 500,000 MB (roughly 500 GB).
That limit is enforced regardless of how much storage is actually available.

maxTotalDataSizeMB = <nonnegative integer>
* The maximum size of an index, in megabytes.
* If an index grows larger than the maximum size, splunkd freezes the oldest
  data in the index.
* This setting only applies to hot, warm, and cold buckets. It does
  not apply to thawed buckets.
* CAUTION: This setting takes precedence over other settings like
  'frozenTimePeriodInSecs' with regard to data retention. If the index
  grows beyond 'maxTotalDataSizeMB' megabytes before
  'frozenTimePeriodInSecs' seconds have passed, data could prematurely
  roll to frozen. As the default policy for rolling data to frozen is
  deletion, unintended data loss could occur.
* Splunkd ignores this setting on remote storage enabled indexes.
* Highest legal value is 4294967295
* Default: 500000

indexes.conf
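For reference, a hedged indexes.conf sketch for raising the cap (the index name my_index and the 1 TB value are illustrative, not from this thread):

```ini
# $SPLUNK_HOME/etc/system/local/indexes.conf (hypothetical stanza)
[my_index]
# Raise the cap from the 500,000 MB default to ~1 TB; make sure the
# underlying volume actually has this much space before raising it.
maxTotalDataSizeMB = 1000000
```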

----
An upvote would be appreciated and Accept Solution if it helps!

michaelbang1
New Member

I'm not 100% convinced that your answer explains why I was seeing these errors in the SH's internal logs, or why the SH restarted itself.

However, you pointing out maxTotalDataSizeMB helped me to identify that my indexes were hitting max and needed some attention.

Thank you for the pro tip!


gcusello
Legend

Hi michaelbang1,
did you also check the indexers' disk space?

Bye.
Giuseppe


michaelbang1
New Member

Hi Giuseppe,

Yes, I just checked the indexers' disk space after your comment, and everything is within limits. Any other suggestions or areas to investigate would be appreciated.

Mike


gcusello
Legend

It's a blind search!
Did you also check the number of open files (ulimit)?
You can also verify that the value is correct using the Monitoring Console.
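A quick way to sanity-check the limit from the shell (a sketch: the /proc lookup is Linux-only, and Splunk's docs recommend a high open-file limit, commonly cited as 64000):

```shell
# Open-file limit for the current shell / service account
ulimit -n

# Effective limits of a running splunkd process, if one exists
SPLUNKD_PID=$(pgrep -o splunkd || true)
if [ -n "$SPLUNKD_PID" ]; then
  grep "Max open files" "/proc/$SPLUNKD_PID/limits"
fi
```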

Bye.
Giuseppe


richgalloway
SplunkTrust

I think I've seen that message on indexers where the coldToFrozenDir was full, but not on a SH.
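If you want to rule that out, here's a sketch for locating any coldToFrozenDir settings and checking the free space on each (assumes a default /opt/splunk path; the btool output parsing is approximate):

```shell
# List effective coldToFrozenDir settings and check each target filesystem.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
if [ -x "$SPLUNK_HOME/bin/splunk" ]; then
  # btool prints the merged config; --debug prefixes each line with its
  # source file, so fields are: <file> coldToFrozenDir = <path>
  "$SPLUNK_HOME/bin/splunk" btool indexes list --debug |
    grep coldToFrozenDir |
    while read -r _src _key _eq dir; do
      df -h "$dir"
    done
fi
```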

---
If this reply helps you, an upvote would be appreciated.

michaelbang1
New Member

Hi richgalloway,

I checked my indexers and their disk space usage is within limits. I am using the default frozen bucket settings for my older data, so anything that goes from cold to frozen gets deleted.

Any suggestions or areas to investigate would be appreciated.

Mike
