Deployment Architecture

How do I increase the number of pipeline sets on a heavy forwarder in a distributed environment?

Hemnaath
Motivator

Hi folks,

We would like to increase the number of pipeline sets on our heavy forwarders so that data ingestion meets the guideline of 2 pipelines for every 1 indexer. Ours is a distributed environment with 5 standalone forwarders, 5 standalone indexers, a 3-member search head cluster, and thousands of UFs configured.

Based on this requirement, can I add the following stanza to increase performance?

Path: /opt/splunk/etc/apps/HF/local/server.conf

[general]
parallelIngestionPipelines = 2

Kindly guide me on this.


adonio
Ultra Champion

hello there,

that looks good.
I would recommend controlling your HF layer from one location - the Deployment Server.
Create a small app with that server.conf and distribute it to the HF layer, for example:
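
(a rough sketch - the app name "hf_pipeline_settings", the server class name, and the hostnames below are placeholders, adjust them to your environment)

# on the deployment server:
# $SPLUNK_HOME/etc/deployment-apps/hf_pipeline_settings/local/server.conf
[general]
parallelIngestionPipelines = 2

# $SPLUNK_HOME/etc/system/local/serverclass.conf
[serverClass:hf_layer]
# match only the heavy forwarders
whitelist.0 = hf01.example.com
whitelist.1 = hf02.example.com

[serverClass:hf_layer:app:hf_pipeline_settings]
# restart splunkd on the HFs after the app lands so the new pipeline count takes effect
restartSplunkd = true

then reload the deployment server so the app gets pushed out:

$SPLUNK_HOME/bin/splunk reload deploy-server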
detailed docs here:
http://docs.splunk.com/Documentation/Splunk/7.0.3/Indexer/Pipelinesets

hope it helps


Hemnaath
Motivator

thanks adonio, let me try this! Similarly, I want to implement search parallelization ("batch mode search parallelization"), but I am not sure where I need to configure this setting. I have already posted this question on Splunk Answers - can you please guide me on this?

https://answers.splunk.com/answers/639780/need-help-on-search-parallelization-how-and-where.html?min...


adonio
Ultra Champion

sure thing,
for some reason the link doesn't work for me.
I'll look for the question.
in the meantime, if this answers your question, please accept it to close this post.
thanks!


Hemnaath
Motivator

Hi Adonio, I have successfully updated the stanza below in server.conf on all the heavy forwarders, under the path /opt/splunk/etc/apps/Test_HF_APP/default/server.conf:

[queue]
maxSize = 200MB

[general]
parallelIngestionPipelines = 2

But now I need to monitor the performance - where can I check and monitor it?
Kindly guide me on this.


adonio
Ultra Champion

@Hemnaath, when you say "performance", what exactly do you mean?
Can you be more specific about the metrics you would like to measure?
Do you leverage the Monitoring Console today?
Also, if the above answer solved your case, please mark the question as "answered" so others will know this is a valid solution.


Hemnaath
Motivator

hey, from the Splunk documentation, index parallelization is used to boost indexing throughput capacity and handle bursts of data. I have implemented the setting, but I need to know whether index parallelization is actually working. I mean: "When you implement two pipeline sets, you have two complete processing pipelines, from the point of data ingestion to the point of writing events to disk. The pipeline sets operate independently of each other, with no knowledge of each other's activities."

Where/how can I check this?

Kindly guide me on this.
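
One rough way to check this, assuming (as I understand the docs) that metrics.log on an instance running more than one pipeline tags its events with an ingest_pipe field - the host value below is a placeholder:

index=_internal source=*metrics.log* group=queue host=your_hf_host earliest=-60m
| stats count max(current_size_kb) AS peak_queue_kb by ingest_pipe, name

If events show up for both ingest_pipe=0 and ingest_pipe=1, both pipeline sets are active and processing data.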
