Getting Data In

How to improve forwarding performance when importing data from Hadoop?

eylonronen
Explorer

Hi all, we have a big problem with our forwarder.
We need to be able to index about 600GB/day. We have 10 indexers and 1 forwarder, and as of now we only index about 260GB/day. Our license covers the full volume.
We have two problems:
1. Because we import our data from Hadoop, we can't run many forwarders, since they would all monitor the same directories. However, we logically split our data into two groups and are trying to add a second forwarder, but it won't connect to the indexers. We copied and renamed the deployment app and created a serverclass on the deployment server for the new forwarder. Aside from the different inputs, the new and old forwarders now have the same configuration, yet the new one still refuses to connect.
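For the connection problem, two things are usually worth checking first: that the copied deployment app actually delivers a valid outputs.conf to the new forwarder, and that the forwarder's splunkd.log shows it attempting the connection. A minimal sketch of the outputs.conf that should land on the forwarder, assuming hypothetical indexer hostnames and the default receiving port (replace with your real hosts):

```ini
# outputs.conf in the deployment app for the new forwarder
# idx1/idx2 and port 9997 are placeholders for illustration
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

On the forwarder itself, `$SPLUNK_HOME/var/log/splunk/splunkd.log` should show either "Connected to idx=..." lines or explicit connection errors (blocked port, SSL mismatch, DNS failure); a search like `index=_internal host=<new_forwarder> source=*splunkd.log* (ERROR OR WARN)` from an indexer surfaces the same thing, provided the forwarder can at least send its internal logs.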

2. We just can't pin down the bottleneck. We tried the Monitoring Console and even searched the logs directly, and there are no full queues. Our maxKBps is set to 800MB, which is more than enough. I believe the problem lies in the forwarder, because that is where we see the slow pace; it's not that it forwards fast enough and the indexers are the problem. On the forwarder we extract the timestamp from a field, extract a field using regex, and set the index name with transforms. What we would like is to find the bottleneck so we can index as fast as we need.
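Since the timestamp extraction and index routing run in the forwarder's parsing pipeline, that per-event regex work is one plausible bottleneck even when no queues report full. As a sketch of where that work sits (the sourcetype, field names, and regex below are invented for illustration, not taken from your setup):

```ini
# props.conf on the forwarder -- hypothetical sourcetype
[hadoop:events]
TIME_PREFIX = "event_time":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
TRANSFORMS-route_index = route_by_group

# transforms.conf -- route each event to an index based on a field in _raw
[route_by_group]
REGEX = "group":"(groupA|groupB)"
DEST_KEY = _MetaData:Index
FORMAT = hadoop_$1
```

Because a single pipeline set is effectively single-threaded through parsing, one heavy forwarder doing this can cap throughput well below the network limit. If the forwarder host has spare CPU cores, setting `parallelIngestionPipelines = 2` (or higher) under the `[general]` stanza of server.conf lets it run multiple independent parsing pipelines; anchoring `TIME_PREFIX` tightly and keeping `MAX_TIMESTAMP_LOOKAHEAD` small also reduces per-event timestamp work.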