Splunk application is crashing

robert_hausmann
New Member

Hi,

For two days now, my Splunk application has been crashing about 5 minutes after it starts.

In splunkd.log I see the following errors:
03-27-2020 13:00:16.418 +0100 ERROR StreamGroup - failed to drain remainder total_sz=59 bytes_freed=25106 avg_bytes_per_iv=425 sth=0x7fcb703feca0: [1585310416, /opt/splunk/var/lib/splunk/audit/db/hot_v1_27, 0x7fcb7570c650] reason=st_sync failed rc=-6 warm_rc=[-4,28]
03-27-2020 13:00:16.434 +0100 ERROR StreamGroup - failed to add corrupt marker to dir=/opt/splunk/var/lib/splunk/audit/db/hot_v1_27 errno=No space left on device
03-27-2020 13:00:16.454 +0100 ERROR BTreeCP - addUpdate: IOException caught: BTree::Exception: Record::writeLE failure in Node::_leafAddUpdate node offset: 24 order: 255 keys:
03-27-2020 13:00:16.466 +0100 ERROR BTreeCP - failed: failed to mkdir /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db/corrupt: No space left on device
03-27-2020 13:00:16.466 +0100 ERROR IndexWriter - failed to add corrupt marker to path='/opt/splunk/var/lib/splunk/audit/db/hot_v1_27' (No space left on device)

I checked the space on the disk and the filesystems are only 28% used, so there should be enough free space.

Does anyone have an idea what might be causing this problem?

Thank you 🙂


PavelP
Motivator

Can you post the output of

df -h

and

df -i

Please do not redact or change anything, if possible.


robert_hausmann
New Member

Output of df -h:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 8.0K 16G 1% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 644K 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/xvda2 93G 26G 67G 28% /
/dev/xvda2 93G 26G 67G 28% /tmp
/dev/xvda1 976M 31M 895M 4% /boot
/dev/xvda2 93G 26G 67G 28% /global/local
/dev/xvda2 93G 26G 67G 28% /var/opt
/dev/xvda2 93G 26G 67G 28% /var/tmp
/dev/xvda2 93G 26G 67G 28% /opt
/dev/xvda2 93G 26G 67G 28% /.snapshots
/dev/xvda2 93G 26G 67G 28% /var/log
/dev/xvda2 93G 26G 67G 28% /var/spool
/dev/xvda2 93G 26G 67G 28% /home
/dev/xvda2 93G 26G 67G 28% /lfs
/dev/xvda2 93G 26G 67G 28% /var/cache

Output of df -i:

Filesystem Inodes IUsed IFree IUse% Mounted on
devtmpfs 4026057 369 4025688 1% /dev
tmpfs 4027707 2 4027705 1% /dev/shm
tmpfs 4027707 497 4027210 1% /run
tmpfs 4027707 16 4027691 1% /sys/fs/cgroup
/dev/xvda2 0 0 0 - /
/dev/xvda2 0 0 0 - /tmp
/dev/xvda1 65536 324 65212 1% /boot
/dev/xvda2 0 0 0 - /global/local
/dev/xvda2 0 0 0 - /var/opt
/dev/xvda2 0 0 0 - /var/tmp
/dev/xvda2 0 0 0 - /opt
/dev/xvda2 0 0 0 - /.snapshots
/dev/xvda2 0 0 0 - /var/log
/dev/xvda2 0 0 0 - /var/spool
/dev/xvda2 0 0 0 - /home
/dev/xvda2 0 0 0 - /lfs
/dev/xvda2 0 0 0 - /var/cache
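
A side note for readers hitting the same symptom: `df -i` reporting 0 inodes for /dev/xvda2 usually means the filesystem allocates inodes dynamically (Btrfs behaves this way), and Btrfs can return "No space left on device" when its metadata chunks are exhausted even though df -h still shows free blocks. A minimal diagnostic sketch, assuming GNU coreutils on Linux; the btrfs and lsof follow-ups require btrfs-progs and lsof to be installed, and /opt is assumed to be the Splunk volume:

```shell
# Confirm the filesystem type of the Splunk data volume (GNU coreutils stat).
fstype=$(stat -f -c %T /opt 2>/dev/null || stat -f -c %T /)
echo "filesystem type: $fstype"

# If it is Btrfs, free blocks in 'df -h' do not guarantee writable space:
# metadata chunks can be full while data chunks still have room. The
# data-vs-metadata breakdown is shown by:
#   btrfs filesystem df /opt
# and rebalancing under-used metadata chunks can often reclaim space:
#   btrfs balance start -musage=50 /opt

# Independently of the filesystem, deleted-but-still-open files keep
# consuming space that 'du' cannot see; they appear with link count 0:
#   lsof +L1
```

If lsof +L1 lists large files held open by splunkd, restarting Splunk releases that space; if the filesystem turns out to be Btrfs with exhausted metadata, the balance is the usual remedy.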


PavelP
Motivator

Hi Robert,

Have you found the problem?
Are the paths in the crash.log always the same?
Can you disable all inputs and receiving ports and restart the machine?
