Monitoring Splunk

indexer peers and indexer clustering problem

KhalidAlharthi
Explorer

Hello Members,

 

I have problems between the peers and the manager node (CM). I tried to identify the issue, but I can't find a way to fix it, because I didn't notice any problems regarding connectivity.

 

See the screenshots below.

 

[screenshot attached]

[screenshot attached]

 

 

0 Karma

KhalidAlharthi
Explorer

How can I solve it? The disk volume is high, but how can I ensure the data gets realigned? Are there commands or something I can check?

0 Karma

gcusello
SplunkTrust

Hi @KhalidAlharthi ,

once you provide sufficient disk space, the indexers should realign automatically, even if some time will be required.

After a while, you can check the replication status from the Cluster Master.

The Cluster Master also gives you the option to force a data rebalance.
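
For example, from the CLI on the Cluster Master you could run something like this (run as the splunk user; the exact path and flags depend on your version):

  # show the replication/search factor status of all peers
  $SPLUNK_HOME/bin/splunk show cluster-status --verbose

  # force a data rebalance across the peers (also available in the UI
  # on the Indexer Clustering page of the manager node)
  $SPLUNK_HOME/bin/splunk rebalance cluster-data -action start
  $SPLUNK_HOME/bin/splunk rebalance cluster-data -action status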

Ciao.

Giuseppe

0 Karma

gcusello
SplunkTrust

Hi @KhalidAlharthi ,

the indexing queue is full, probably because you don't have enough disk space or there is too much data for the resources your Indexers have.
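
You can check how full the queues are from the internal metrics, for example with a search like this (field names come from metrics.log; adjust the queue name and span as needed):

  index=_internal source=*metrics.log* group=queue name=indexqueue
  | eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
  | timechart span=5m perc90(fill_pct) by host

If fill_pct stays near 100% for long periods, the indexers can't write to disk fast enough.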

Ciao.

Giuseppe

0 Karma

KhalidAlharthi
Explorer

I have checked the main system partitions and the hot/cold/frozen partitions; they have enough space, so I don't think that's the issue...

 

Thanks @gcusello 

0 Karma

gcusello
SplunkTrust

Hi @KhalidAlharthi ,

if you have enough disk space, the issue could be related to the resources of the Indexers:

do you have performant disks on your Indexers?

Splunk requires at least 800 IOPS (better 1200 or more!), and storage is the bottleneck of most Splunk installations.

If you are using a shared virtual infrastructure, are the resources of the Splunk servers dedicated to them or shared?

They must be dedicated, not shared.

Ciao.

Giuseppe

0 Karma

KhalidAlharthi
Explorer

Thanks @gcusello for responding,

 

I didn't mess with the disk storage or add any additional partitions... Last week I created a new index from the CM and pushed it out to the indexers...

 

About IOPS, I don't know how to check that using Splunk.

 

For the virtual infrastructure, Splunk has its own configuration and is not shared with other resources... (vSphere)

0 Karma

gcusello
SplunkTrust

Hi @KhalidAlharthi ,

you can check IOPS using an external tool such as Bonnie++ or similar.
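
For example (the paths below are just placeholders; point them at your hot/warm volume and run the test while the indexer is idle):

  # bonnie++: run as the splunk user against the hot/warm path
  bonnie++ -d /opt/splunk/var/lib/splunk -u splunk

  # or fio, with a random read/write pattern roughly similar to indexing load
  fio --name=splunk-iops-test --directory=/opt/splunk/var/lib/splunk \
      --rw=randrw --bs=4k --size=1g --numjobs=4 --iodepth=32 \
      --runtime=60 --time_based --group_reporting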

About resource sharing, it is a configuration in VMware: even if these machines are only used for Splunk, if they are in a VMware infrastructure where there are other VMs, Splunk requires that their resources be dedicated (https://docs.splunk.com/Documentation/Splunk/9.3.0/Capacity/Referencehardware#Virtualized_Infrastruc... ).

Anyway, the issue is probably in the performance of your virtual storage.

Also, how many logs (daily average) are you indexing?

How many Indexers are you using and how many CPUs are there in each Indexer?

Splunk requires at least 12 CPUs for each Indexer (more if there's ES or ITSI), and you can index at most around 200 GB/day with one indexer (less if you have ES or ITSI), so it matters how many logs you are indexing.
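
To see your daily indexed volume, you can run something like this on the Monitoring Console or License Manager (license_usage.log is the usual source for this):

  index=_internal source=*license_usage.log* type=Usage
  | timechart span=1d sum(b) AS bytes
  | eval GB = round(bytes / 1024 / 1024 / 1024, 2)
  | fields _time GB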

Ciao.

Giuseppe

0 Karma

KhalidAlharthi
Explorer

For today, this is the volume used:

[screenshot attached]

There are 3 indexers; each one of them has 16 CPUs.

0 Karma

dural_yyz
Builder

Load the Monitoring Console

Indexing -> Performance -> Indexing Performance: Instance

Select various Indexers in your cluster to compare

- If various Indexers have massively different queue values, then you may have a data imbalance; since UFs by default stick to one indexer for 30 seconds before switching, you should observe this over time.

- If all queues from left to right are full, then this is a disk write issue: the indexer can't write to disk fast enough.

- You can override the default indexer queue and pipeline settings via .conf settings to increase the available size (a rough sketch is shown below), but you should be very confident in your admin abilities, and I don't recommend this for novice administrators. Working with Splunk Support is recommended regardless of your experience, novice or advanced.
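
As a rough sketch only (the queue names and sizes here are illustrative, not recommendations; tune them with Splunk Support), such an override would go in server.conf on the indexers and requires a restart:

  # server.conf on an indexer - example only, do not copy blindly
  [queue=parsingQueue]
  maxSize = 20MB

  [queue=indexQueue]
  maxSize = 500MB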

0 Karma

gcusello
SplunkTrust

Hi @KhalidAlharthi ,

ok, it shouldn't be a resource issue.

The only remaining possibility is the throughput of the disks, which you can check only with an external tool like Bonnie++.

Could you check the resources of your indexers using the Monitoring Console?

Please check if the resources are fully used.
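
For example, you can check CPU and memory usage over time with a search on the introspection data (field names come from the _introspection index; adjust as needed):

  index=_introspection sourcetype=splunk_resource_usage component=Hostwide
  | eval cpu_pct = 'data.cpu_system_pct' + 'data.cpu_user_pct'
  | timechart span=5m avg(cpu_pct) AS avg_cpu_pct avg(data.mem_used) AS avg_mem_used_mb by host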

Then, you could try to configure parallel pipelines on your indexers; for more info see https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Pipelinesets

You could try to set

parallelIngestionPipelines = 2

in the [general] stanza of server.conf; in this way you make better use of your hardware resources.
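
For example (server.conf on each indexer; requires a restart, and it only helps if there is spare CPU and I/O headroom):

  [general]
  parallelIngestionPipelines = 2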

Ciao.

Giuseppe

0 Karma