Getting Data In

How to improve forwarding performance when importing data from Hadoop?

eylonronen
Explorer

Hi all, we have a big problem with our forwarder.
We need to index about 600 GB/day. We have 10 indexers and 1 forwarder, and as of now we index about 260 GB/day. Our license covers this volume.
We have two problems:
1. Because we import our data from Hadoop, we can't run many forwarders, since they would all monitor the same directories. However, we logically split our data into two groups and are trying to add a second forwarder, but it won't connect to the indexers. We copied and renamed the deployment app and created a server class on the deployment server for the new forwarder. Aside from the different inputs, the new and old forwarders now have the same configuration, but the new one still refuses to connect.

2. We can't troubleshoot the slow ingestion. We tried the Monitoring Console, and even searched the logs directly, and there are no full queues. Our maxKBps limit works out to 800 MB/s, which is more than enough. I believe the problem lies in the forwarder, because the slow pace shows up there; it's not that it forwards fast enough and the indexers are the problem. On the forwarder we extract the timestamp from a field, extract a field using regex, and set the index name with transforms. What we would like is to find the bottleneck so we can index as fast as we need.
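For problem 1, the usual first check is whether the copied deployment app actually delivers an `outputs.conf` to the second forwarder. A minimal sketch of what that stanza could look like is below; the group name, hostnames, and ports are placeholders, not your actual values:

```ini
# Hypothetical outputs.conf for the second forwarder.
# If the renamed deployment app lost this file (or the server class
# doesn't map it to the new host), the forwarder has no indexers to
# connect to, which matches the symptom described.
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Placeholder indexer list; replace with your 10 indexers.
server = idx01.example.com:9997, idx02.example.com:9997
# Rotate across indexers for load balancing (seconds).
autoLBFrequency = 30
```

Comparing `splunk btool outputs list --debug` on both forwarders, and checking `splunkd.log` on the new one for `TcpOutputProc` errors, should show whether the config arrived and why the connection fails.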
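For problem 2, the timestamp extraction, regex field work, and index routing described all run in the forwarder's parsing/typing pipeline, so an expensive regex can cap throughput without any queue ever reporting full. A hypothetical sketch of that kind of setup (all stanza names, field names, and regexes here are invented for illustration):

```ini
# props.conf -- placeholder sourcetype and timestamp field
[hadoop_source]
TIME_PREFIX = "event_time":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TRANSFORMS-route_index = set_index_by_group

# transforms.conf -- placeholder routing rule
[set_index_by_group]
# Anchored, specific regexes are much cheaper than greedy ones
# that backtrack across the whole event.
REGEX = "group":"(groupA|groupB)"
DEST_KEY = _MetaData:Index
FORMAT = hadoop_$1
```

If the regexes are already cheap and a single pipeline is the limit, a heavy forwarder can also run `parallelIngestionPipelines = 2` in `server.conf` (at the cost of extra CPU), which lets it parse and forward two streams concurrently.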