
Errors causing unusable indexer Linux VMs, slowdowns for entire vCenter cluster until powered off

mevangelisti
New Member

We are seeing two types of dmesg errors on the Linux VMs acting as our indexers:

1) “task blocked for more than 120 seconds, hung_task_timeout_secs, call trace” - https://del.dog/120blk.txt

2) “sd 0:0:0:0: [sda] task abort on host 0” - https://del.dog/tskabrt.txt

These issues also seem to be causing high I/O for all VMs on the same vCenter cluster as the indexers. As soon as we power the indexers down, performance is restored across the cluster.

Things I've tried:
-Full yum update
-Disabling Huge Pages
-Booting with an older kernel (two versions back) and with the latest one
-Initially we were on an RDM backed by an EMC VNX SAN for hot storage. This was converted to a VMDK (still backed by the VNX).
-Initially the hot drives were thin provisioned. They were converted to eager zeroed thick.
-Initially the hot drives were formatted with XFS. I have migrated them to EXT4.
-Tuning system caching / flushes per this explanation: https://www.blackmoreops.com/2014/09/22/linux-kernel-panic-issue--fix-hung_task_timeout_secs-blocked...
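For anyone hitting the same symptoms, the relevant writeback and hung-task settings can be inspected read-only before changing anything. This is a minimal sketch assuming a stock Linux kernel with the usual /proc/sys layout (the hung-task file is absent if the kernel was built without CONFIG_DETECT_HUNG_TASK):

```shell
# Percentage of RAM that can be dirty before writers are forced to block
cat /proc/sys/vm/dirty_ratio

# Percentage of RAM dirty before background flushing kicks in
cat /proc/sys/vm/dirty_background_ratio

# Threshold behind the "blocked for more than 120 seconds" warning
# (may not exist on all kernels, hence the guard)
cat /proc/sys/kernel/hung_task_timeout_secs 2>/dev/null

# Recent hung-task warnings, if any (dmesg may need root on some systems)
dmesg 2>/dev/null | grep -i "blocked for more than" | tail -n 5
```

High dirty ratios plus a slow SAN path are a common recipe for these stalls: the page cache accumulates gigabytes of dirty data, then a flush storm saturates the datastore and starves every VM sharing it.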


williaml_splunk
Splunk Employee

Does setting these two parameters help resolve the blocked-task issue?

vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
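To make those values survive a reboot, they can be dropped into a sysctl configuration fragment. A sketch, assuming a systemd-based distro (the filename under /etc/sysctl.d is arbitrary):

```
# /etc/sysctl.d/90-writeback.conf  (filename is an arbitrary example)
# Start background flushing earlier and cap dirty pages lower,
# so flushes are smaller and less likely to stall the SAN path.
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
```

Apply without rebooting via `sudo sysctl --system`, and verify with `sysctl vm.dirty_ratio vm.dirty_background_ratio`.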

