Getting Data In

How to index data with multiple forwarders on the same host?

nealw
New Member

Hello,

I googled around for similar questions but could not find anything, so I'm sorry if this has already been asked. If I want to index large amounts of data using multiple forwarders, is there a way to configure the various forwarders to act in a distributed fashion? I know about managing pipeline sets for index parallelization (https://docs.splunk.com/Documentation/Splunk/7.1.2/Indexer/Pipelinesets), but that still does not quite solve the issue.
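For context, by pipeline sets I mean the server.conf setting described on that page, which looks roughly like this (a sketch; the value 2 is just an example):

    # server.conf on the Splunk instance
    [general]
    parallelIngestionPipelines = 2

As I understand it, that only parallelizes ingestion pipelines inside a single instance, not across separate forwarders.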

What do people in general do to solve such a problem? Thank you!

1 Solution

adonio
Ultra Champion

Hello there,
there is a very detailed answer by @mmodestino_splunk here:
https://answers.splunk.com/answers/521464/how-to-run-multiple-universal-forwarders-on-a-sing.html
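In broad strokes, the approach described there is to run a second copy of the universal forwarder from its own directory with its own management port, something like this (a sketch only; the paths, filename, and port below are examples):

    # unpack a second copy of the UF into its own directory
    # (the first instance is assumed to live in /opt/splunkforwarder)
    mkdir /opt/splunkforwarder2
    tar xvzf splunkforwarder-<version>-Linux-x86_64.tgz -C /opt/splunkforwarder2 --strip-components=1

    # start it and give it its own management port so it does not clash with the default 8089
    /opt/splunkforwarder2/bin/splunk start --accept-license
    /opt/splunkforwarder2/bin/splunk set splunkd-port 8090

Each instance then gets its own inputs.conf and outputs.conf.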

hope it helps

nealw
New Member

Clearly I did not dig around enough. Thank you so much for your help!

ddrillic
Ultra Champion

Right @adonio, so we can install multiple forwarders on the same physical machine, which is nice. The major question was:

-- is there a way to configure the various forwarders to act in a distributed fashion?

How do we do that, in a distributed fashion?

adonio
Ultra Champion

@ddrillic, indeed, it was late at night when I answered the question.
Can you elaborate on the question? What does "distributed fashion" mean?

nealw
New Member

What I was trying to get at is something more like working in parallel. There are ways of installing multiple universal forwarders and forwarding data to an intermediate forwarder that filters out duplicate data, or of simply assigning the universal forwarders different directories to forward data from. But can we deploy universal forwarders in such a way that they coordinate and work in parallel, more like how multiple pipeline sets work for index parallelization?
Thanks!

adonio
Ultra Champion

I hope I understand what you are getting at.
The forwarders do only what they are told to do. Assume you have 2 forwarders on a single machine monitoring 2 unique directories: forwarder A monitors directory A and forwarder B monitors directory B. Now say the amount of local data being written to directory A doubles and no more data is being written to directory B. I am not aware of (and I am 99% sure there isn't) a way for forwarder B to "help" forwarder A and share the monitoring load of directory A.
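For illustration, the two instances might carry inputs.conf stanzas along these lines (the paths, index, and sourcetypes are made up):

    # forwarder A: $SPLUNK_HOME_A/etc/system/local/inputs.conf
    [monitor:///var/log/app_a]
    index = main
    sourcetype = app_a_logs

    # forwarder B: $SPLUNK_HOME_B/etc/system/local/inputs.conf
    [monitor:///var/log/app_b]
    index = main
    sourcetype = app_b_logs

Nothing in this configuration makes forwarder B pick up directory A's load if forwarder A falls behind; each instance only reads what its own monitor stanzas point at.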
@nealw, hope this answers your question

nealw
New Member

Ah, I see. Well, that does answer what I was trying to get at. Thanks a lot!
