I googled around for similar questions but could not find anything, so I'm sorry if this question has already been asked before. If I want to index large amounts of data using multiple forwarders, is there some way I can configure the various forwarders to act in a distributed fashion? I know about managing pipelines for index parallelization (https://docs.splunk.com/Documentation/Splunk/7.1.2/Indexer/Pipelinesets), but that still does not quite solve the issue.
What do people in general do to solve such a problem? Thank you!
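For context, the pipeline-set feature linked above is enabled on an indexer in `server.conf`. A minimal sketch (the pipeline count of 2 is illustrative; check your hardware capacity before raising it):

```ini
# server.conf on the indexer
[general]
# Run two independent ingestion pipelines on this instance.
# Each pipeline consumes roughly one full set of pipeline resources,
# so only increase this on machines with spare CPU cores.
parallelIngestionPipelines = 2
```

Note that this parallelizes ingestion *within* a single Splunk instance; it does not make separate forwarders coordinate with each other.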
Right @adonio, so we can install multiple forwarders on the same physical machine, which is nice. The main question remains: is there some way we can configure the various forwarders to act in a distributed fashion? How do we do that?
I think, or at least this was what I was trying to get at: more like, in parallel. There are ways of installing multiple universal forwarders and forwarding data to an intermediate forwarder that filters out duplicate data, or simply assigning the universal forwarders different directories to forward from. But can we deploy universal forwarders in such a way that they coordinate and work in parallel, more like how multiple pipelines work for index parallelization?
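The "different directories" approach mentioned above is just static partitioning via `inputs.conf` monitor stanzas, one per forwarder. A sketch, assuming hypothetical log paths (`/var/log/app_a`, `/var/log/app_b`) and index/sourcetype names:

```ini
# inputs.conf on forwarder A
[monitor:///var/log/app_a]
index = main
sourcetype = app_a_logs

# inputs.conf on forwarder B (a separate instance)
[monitor:///var/log/app_b]
index = main
sourcetype = app_b_logs
```

Each forwarder only ever reads the paths in its own stanzas; the split is fixed by the admin, not negotiated between forwarders.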
I hope I understand what you are getting at.
Forwarders are set to do what they are told to do. Assume you have 2 forwarders on a single machine, each monitoring a unique directory: forwarder A monitors directory A and forwarder B monitors directory B. Now say the amount of local data being written to directory A doubles and no more data is being written to directory B. I am not aware of (and I am 99% sure there isn't) a way for forwarder B to "help" forwarder A and share the monitoring load of directory A.
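What a forwarder *does* distribute is its output: a single forwarder can load-balance events across multiple indexers via `outputs.conf`, which is the usual way people scale indexing throughput. A sketch (hostnames and the group name are placeholders):

```ini
# outputs.conf on the forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# The forwarder rotates among these receivers automatically.
server = idx1.example.com:9997, idx2.example.com:9997
# Switch to a different indexer roughly every 30 seconds.
autoLBFrequency = 30
```

So the parallelism lives on the indexing tier (many indexers, each optionally running multiple pipeline sets), not in coordination between forwarders.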
@nealw, hope this answers your question