Getting Data In

Errors causing unusable indexer Linux VMs, slowdowns for entire vCenter cluster until powered off

mevangelisti
New Member

We are seeing two types of dmesg errors on the Linux VMs that act as our indexers:

1) “task blocked for more than 120 seconds, hung_task_timeout_secs, call trace” - https://del.dog/120blk.txt

2) “sd 0:0:0:0: [sda] task abort on host 0” - https://del.dog/tskabrt.txt

These issues also seem to be causing high I/O for every VM in the same vCenter cluster as the indexers. As soon as we power the indexers down, performance is restored across the cluster.
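
For reference, the checks behind the symptoms above are standard kernel interfaces (nothing Splunk-specific), roughly:

# Look for the blocked-task and task-abort messages in the kernel log
dmesg -T | grep -E "blocked for more than|task abort"

# Current hung-task timeout (the 120-second default behind error 1)
sysctl kernel.hung_task_timeout_secs

# Current dirty-page writeback thresholds
sysctl vm.dirty_background_ratio vm.dirty_ratio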

Things I've tried:
-Full yum update
-Disabling Huge Pages
-Tried booting with an older kernel (two versions back) and with the latest ones.
-Initially we were on RDM backed by an EMC VNX SAN for hot space. This was converted to VMDK (still backed by the VNX).
-Initially the hot drives were thin provisioned. They were converted to thick provisioned (eager zeroed).
-Initially the hot drives were formatted with XFS. I have migrated them to EXT4.
-I tried tuning system caching / flushes (roughly as sketched below) per this explanation: https://www.blackmoreops.com/2014/09/22/linux-kernel-panic-issue--fix-hung_task_timeout_secs-blocked...
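
For context, these kinds of changes are typically applied along the lines below; the commands are illustrative (assuming Transparent Huge Pages and runtime sysctl writes), not necessarily the exact values I used:

# Disable Transparent Huge Pages until the next reboot
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# Lower the dirty-page writeback thresholds at runtime
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10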


williaml_splunk
Splunk Employee

Does it help to resolve the blocked-task issue if you set these two parameters?

vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
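
If these values help, they can be persisted across reboots with a sysctl drop-in (the file name below is just an example):

# /etc/sysctl.d/99-dirty-writeback.conf
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10

# Load the new values without rebooting
sysctl -p /etc/sysctl.d/99-dirty-writeback.conf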

 
