Getting Data In

Errors causing unusable indexer Linux VMs, slowdowns for entire vCenter cluster until powered off

mevangelisti
New Member

We are seeing two types of dmesg errors on the Linux VMs acting as our indexers:

1) “task blocked for more than 120 seconds, hung_task_timeout_secs, call trace” - https://del.dog/120blk.txt

2) “sd 0:0:0:0: [sda] task abort on host 0” - https://del.dog/tskabrt.txt

These issues also seem to be causing high I/O for all VMs on the same vCenter cluster as the indexers. As soon as we power the indexers down, performance is restored across the cluster.
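One generic way to see whether writeback pressure is building inside the guest while the slowdown is happening (a standard /proc check, nothing Splunk-specific) is to look at the dirty-page counters:

```shell
# Sketch: show how much data is currently waiting for writeback.
# A Dirty value that keeps climbing during indexing suggests the VM
# is dirtying pages faster than the VNX-backed disks can drain them.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

Sampling this repeatedly under load (e.g. with `watch -n5`) makes it easier to correlate writeback spikes with the hung-task messages.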

Things I've tried:
-Full yum update
-Disabling huge pages
-Booting with an older kernel (two versions back) as well as the latest one
-Initially we were on RDM backed by an EMC VNX SAN for hot space. This was converted to VMDK (still backed by the VNX).
-Initially the hot drives were thin-provisioned. They were converted to eager-zeroed thick.
-Initially the hot drives were formatted with XFS. I have migrated them to EXT4.
-Tuning system caching / flushes per this explanation: https://www.blackmoreops.com/2014/09/22/linux-kernel-panic-issue--fix-hung_task_timeout_secs-blocked...
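For anyone correlating these events with load, a small helper (a sketch using standard tools; the function name is just illustrative) that pulls both error signatures out of the kernel log:

```shell
# Sketch: filter the two error signatures from kernel log output so
# their timestamps can be lined up against indexing activity.
filter_hung_tasks() {
  grep -Ei 'blocked for more than|task abort|hung_task_timeout'
}

# Usage on the affected VM:
#   dmesg -T | filter_hung_tasks
```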


williaml_splunk
Splunk Employee

Does setting these two parameters help resolve the blocked-task issue?

vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
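If you want to try these, a sketch of applying them (assumes root and a distro that reads /etc/sysctl.d; the filename 99-splunk-writeback.conf is just an illustrative choice):

```shell
# Apply immediately (does not survive a reboot):
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10

# Persist across reboots via a drop-in file:
printf '%s\n' 'vm.dirty_background_ratio = 5' 'vm.dirty_ratio = 10' \
  > /etc/sysctl.d/99-splunk-writeback.conf
sysctl --system
```

Lowering these ratios makes the kernel start background writeback sooner and throttle writers earlier, trading some throughput for smaller, more frequent flushes instead of large bursts that can stall I/O.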

 
