Splunk Enterprise

Adding more RAM to the Search Heads - recommendations

glpadilla_sol
Path Finder

Hello everyone,

 

We found that the VMs can be allocated more RAM: currently we have 12 GB of RAM for each Search Head, but that could be increased to 48 GB of RAM.

I have been reading that increasing the capacity of the Search Heads can affect the indexer nodes: "An increase in search tier capacity corresponds to increased search load on the indexing tier, requiring scaling of the indexer nodes. Scaling either tier can be done vertically by increasing per-instance hardware resources, or horizontally by increasing the total node count."

That makes sense; currently the environment has more Search Heads than indexers, and I think increasing the capacity of the SHs could overwhelm the indexers.

Current environment: 3 SHs (cluster) and 2 indexers (cluster).

I would appreciate any recommendations on how to do this as well as possible so that we can actually make use of the allocated memory.

 

Kind Regards.


VatsalJagani
SplunkTrust

@glpadilla_sol - Always remember: before you make any resource-related changes to an existing environment, look at all the resource-related dashboards on the Monitoring Console for all instances and see which part is creating the bottleneck.

Disk (I/O and I/O wait), CPU, or memory could be creating a bottleneck on the indexers; on the Search Heads it is usually CPU (in most cases) or memory.

So look at all the charts and see which instances are having trouble and which resources (CPU, memory, or disk) are causing issues.
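
If you want a quick view outside the MC dashboards, a search along these lines against the _introspection index (it uses the standard Hostwide resource-usage fields; treat it purely as a starting point and adjust the time range as needed) shows peak and average CPU and memory per instance:

    index=_introspection sourcetype=splunk_resource_usage component=Hostwide earliest=-24h
    | eval mem_used_pct = round(100 * 'data.mem_used' / 'data.mem', 1)
    | eval cpu_pct = 'data.cpu_user_pct' + 'data.cpu_system_pct'
    | stats max(mem_used_pct) AS peak_mem_pct, avg(mem_used_pct) AS avg_mem_pct, max(cpu_pct) AS peak_cpu_pct, avg(cpu_pct) AS avg_cpu_pct BY host

This only covers CPU and memory; for disk I/O and I/O wait, the MC dashboards (or iostat on the hosts themselves) are the easier place to look.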

 

I hope this helps!!!


glpadilla_sol
Path Finder

Hello @VatsalJagani, thank you for the input.

Yes, this concern came up because a couple of weeks ago we had some alerts about memory usage on one search head (the root cause was a user running heavy searches). During the troubleshooting we noticed that we have a LOT of unused memory sitting unallocated on the servers hosting the VMs. We want to use it, but first we want to be sure it is not going to generate further issues in the environment.

OK, regarding the indexers: I have noticed some issues with IOWait and have tried to identify the root cause, but no luck there. I created this post about it a few months ago: https://community.splunk.com/t5/Splunk-Enterprise/IOWait-Resource-usage/m-p/578077

 

So for now I think there should not be any issues with adding memory to the search heads.

Kind Regards.


glpadilla_sol
Path Finder

Thank you so much @jamie00171. So, in summary, there shouldn't be any issues increasing the RAM on the Search Heads from 12 GB to 48 GB.

And just one question:

Currently the CPU capacity for the indexers is 48 CPUs/cores each and for the Search Heads it is 16 CPUs/cores each, so following what you said the number of concurrent searches can be 6 + (16 x 1) = 22 per SH, x 3 SHs = 66. So what should the CPU capacity be on the indexers?

We followed the Splunk recommendations when setting up the environment: https://docs.splunk.com/Documentation/Splunk/8.2.1/Capacity/Referencehardware

 


isoutamo
SplunkTrust

You should remember that every search reserves one (v)CPU on every indexer that participates in that search. You should also reserve a couple of CPUs on the indexers for indexing data at the same time and for other "housekeeping" processes. Based on that, you have roughly 40 CPUs per indexer available for searches.

The reality depends on what kind of searches you really have. If your searches use more transforming commands than streaming commands, then most of that work runs on the SH side instead of on the indexers. That also affects the memory requirements on the SH side.
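
As a rough illustration of where the work lands (both searches below are purely illustrative, against _internal): in the first one the indexers can pre-aggregate and only send partial results back, so the load stays mostly on the indexer side; in the second one the matching events have to be shipped back so transaction can be built on the search head, which is the kind of search that drives SH CPU and memory.

    index=_internal sourcetype=splunkd log_level=ERROR
    | stats count BY component

    index=_internal sourcetype=splunkd
    | transaction host maxpause=5m
    | stats avg(duration) AS avg_duration BY host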

I have seen some discussions/rules of thumb saying that there should be something like 1 SH per 7 indexers for optimal performance.

But as @VatsalJagani said, you should use the MC to monitor and estimate how well your environment is performing and whether there are issues, or a risk that issues will arise in the near future.

r. Ismo


jamie00171
Communicator

Hi @glpadilla_sol 

I believe "An increase in search tier capacity corresponds to increased search load on the indexing tier, requiring scaling of the indexer nodes." will come from the fact that the total number of searches a SH or cluster can execute at anyone time is determined by the number of CPUs the search head and cluster as a whole have.

Using the defaults, each search head can execute base_max_searches (6) + (number of vCPUs on the host x max_searches_per_cpu) concurrent searches. These settings are in limits.conf. There are further settings controlling the percentage of this value assigned to scheduled and real-time searches, but I'll ignore those for now because they aren't relevant here.
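
For reference, the relevant settings live in the [search] stanza of limits.conf; the values below are the defaults:

    [search]
    # max concurrent historical searches per search head =
    #   base_max_searches + (number of CPUs x max_searches_per_cpu)
    base_max_searches = 6
    max_searches_per_cpu = 1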

 

So if you had 3 search heads, each with 8 vCPUs, your cluster could execute the following number of searches at the same time:

6 + (8 x 1) = 14 searches per SH, x 3 SHs = 42 concurrent searches

 

Typically each search takes a single vCPU on each indexer (assuming the data is evenly spread and every indexer responds to every search, which we'd expect), so you'd probably want at least 40-50 vCPUs on your indexers (probably more, to account for indexing) if the limits above are being reached in your environment. If you'll never hit 42 searches executing concurrently, then you don't need as many cores on the indexers.

 

Now, if you were to increase the vCPUs on the search heads, you would increase the total number of concurrent searches that can execute in your environment (unless you changed the limits.conf values above), therefore potentially putting more load onto the indexers; if they didn't have enough vCPUs to handle this increase in search load, it could lead to other issues.

Regarding memory, I can't see any reason why increasing the memory on the search heads would increase the load on the indexers. Your search heads typically use more memory for searches, since that is where the reduce stage of a search occurs, so searches that use | dedup or similar can be memory-heavy. In that case it may make sense to add more memory to your SHs if you are regularly hitting high percentages of memory usage, which you can check via the _introspection index.
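
For example, something along these lines against _introspection (it uses the standard per-process resource-usage fields; treat it as a starting point) shows which searches, and which users, are the memory-heavy ones on a search head:

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process_type="search"
    | stats max(data.mem_used) AS peak_mem_mb BY data.search_props.sid, data.search_props.user
    | sort - peak_mem_mb
    | head 20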

 

Thanks, 

Jamie

 

 
