Monitoring Splunk

Parallelization: Should the limits be changed on the search heads, indexers, or both?

jason0
Path Finder

Hello,

I am looking at https://docs.splunk.com/Documentation/Splunk/9.0.0/Capacity/Parallelization and was wondering which systems to make changes on.

For instance, batch parallelization: should the limits be changed on the search heads, the indexers, or both?

Same question for data models, report acceleration, and indexer parallelization.

Oh, and for what it's worth, I am running Splunk Enterprise 9 on a C1/C11 deployment.

-jason

 


isoutamo
SplunkTrust

Hi

First: what are the issues you are trying to solve by changing these? If you don't have an issue, don't create one by changing these values!

To make use of these settings, you need additional, unused capacity on your indexer layer (at least CPU, IOPS, throughput, and memory).

r. Ismo

 

jason0
Path Finder

Hello,

The problem is a combination of iowait health checks showing yellow and red and my suspicion that not all of my CPU/memory/IOPS are really being used. This really came about when I bumped from Splunk Enterprise 8.0.3 -> 8.2.7 -> 9.0.0.1 last month. I have seen in other postings that the iowait health check is an addition introduced in the transition to 8.2.7, and is more of a nuisance than anything really useful.

 

I realize I COULD increase the iowait thresholds, but it occurred to me that perhaps it's more a question of whether the CPUs that are actually being used are super busy. Thus, I learned about parallelization.
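If raising the thresholds ever does become the fallback, they live in health.conf. A minimal sketch of that approach follows, with the caveat that the exact indicator names vary by version and should be copied from the shipped defaults (or the btool output) rather than from here:

    # list the shipped iowait indicators and their current thresholds
    $SPLUNK_HOME/bin/splunk btool health list feature:iowait

    # then override the chosen indicators in
    # $SPLUNK_HOME/etc/system/local/health.conf, for example
    # (illustrative values; indicator names taken from the btool output):
    [feature:iowait]
    indicator:avg_cpu__max_perc_last_3m:yellow = 5
    indicator:avg_cpu__max_perc_last_3m:red = 10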

My indexers each have:

  • 80 CPUs
  • 256 GB of memory
  • a hot/warm SSD array (RAID 5, bleah) whose read iowaits average about 31k and write iowaits about 10k (measured with fio on the running system, so that's during indexing; see the sketch after this list)
  • a cold SAS array (RAID 5, again bleah) whose read iowaits average about 6k and write iowaits around 2k
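For context, a fio invocation roughly along these lines would produce numbers of that kind (the file path, sizes, and queue depths below are illustrative, not the exact parameters used):

    # hypothetical random-read check against the hot/warm volume
    fio --name=hotwarm-randread --filename=/opt/splunk/var/lib/splunk/fio.test \
        --rw=randread --bs=4k --size=4g --direct=1 --ioengine=libaio \
        --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
    # swap in --rw=randwrite (and a different --name/--filename) for the write side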

Thus far the only thing I have done is increase parallelIngestionPipelines to 2 on my indexers; I actually did this yesterday.
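For reference, that change amounts to something like the following in server.conf on each indexer (a minimal sketch; where the override lives, e.g. an app versus system/local, may differ on your deployment):

    # $SPLUNK_HOME/etc/system/local/server.conf on each indexer
    [general]
    parallelIngestionPipelines = 2
    # a splunkd restart is needed for this to take effect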

So while this morning I have seen a load average of 10.5, I know that means almost nothing given 80 CPUs and ZERO iowait. I see the zero wait using the "top" command as well as "vmstat 2". The sar command shows individual CPUs occasionally higher than 0.2.
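For anyone wanting to reproduce those checks, the commands were along these lines (intervals and counts are illustrative; iostat was not part of the original checks, just a common companion):

    top                 # overall load average and the %wa (iowait) column
    vmstat 2 10         # "wa" column sampled every 2 seconds
    sar -P ALL 2 5      # per-CPU breakdown, including %iowait
    iostat -x 2 5       # per-device utilisation and await times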

--jason


bowesmana
SplunkTrust

I only know about indexer parallelisation - that goes on the indexers. I've worked with use cases where 3 was used.

 

 

burwell
SplunkTrust

You enable or disable batch search mode on the search heads in limits.conf. By default it is enabled.

https://docs.splunk.com/Documentation/Splunk/9.0.0/Knowledge/Configurebatchmodesearch

And you set the number of pipelines on your indexers. 

https://docs.splunk.com/Documentation/Splunk/9.0.0/Knowledge/Configurebatchmodesearch#Configure_batc...
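Put together, a minimal sketch of what those two settings look like (values are illustrative; the defaults may already be what you want):

    # limits.conf on the search heads -- batch mode search (enabled by default)
    [search]
    allow_batch_mode = true

    # limits.conf on the indexers -- number of batch search pipelines
    [search]
    batch_search_max_pipeline = 2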

 

 

jason0
Path Finder

Thanks!

 
