maxkbps on 10gb network

Builder

Hi All,

So I'm in the process of fixing some large delays in my data ingestion (some data takes 2+ hours to become searchable). I've set maxKBps = 4096 in my forwarders' limits.conf.
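For reference, the throughput cap described here lives in the `[thruput]` stanza of limits.conf on the forwarder. A minimal sketch (the value 4096 is the one quoted above; the unit is kilobytes per second):

```ini
# limits.conf on the forwarder (a sketch, not a full config)
[thruput]
# Cap forwarder output at ~4 MB/s; 0 would mean unlimited
maxKBps = 4096
```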

I can see that splunkd is now transferring about 9,000,000 bytes per second (9 MB/s), which has drastically reduced the ingestion lag. splunkd is now easily the highest network-traffic-generating process on the server. That said, my internal logs show it's still hitting the 4096 limit and throttling the flow of logs to the indexers.

There are about 8 web servers that generate this amount of traffic, and 2 indexers; the forwarders' data is divided up by sourcetype via TCP routing.

With a 10 Gb network (forwarders and indexers), shouldn't I be able to double the max to 8192 without worrying about the network choking? Theoretically, if each server is maxing out, that would be a constant flow of 18 MB/s per server, and 18 * 8 = 144 MB/s.

A 10 Gbps network can handle 1250 MB/s, so do you think this is feasible? Or would the bottleneck be my indexers' ability to write the data fast enough?
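The arithmetic above checks out; a quick sanity check of the numbers in this thread (assuming 8 forwarders each sustaining the ~18 MB/s estimate on a 10 Gbps link):

```python
# Back-of-envelope check of the bandwidth math in this thread.
link_gbps = 10
link_mb_per_s = link_gbps * 1000 / 8   # 10 Gbps = 1250 MB/s
per_forwarder_mb_per_s = 18            # estimated peak per forwarder
forwarders = 8

total_mb_per_s = per_forwarder_mb_per_s * forwarders  # 144 MB/s
utilization = total_mb_per_s / link_mb_per_s          # fraction of the link used

print(f"{total_mb_per_s} MB/s = {utilization:.1%} of a 10 Gbps link")
# -> 144 MB/s = 11.5% of a 10 Gbps link
```

So even with every forwarder maxed out, the combined flow is a small fraction of the link's capacity.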

Originally the parsing queue was an issue, but I've increased it to 30 MB in my forwarders' server.conf and it doesn't seem to be a problem anymore.
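The parsing-queue change mentioned above would look roughly like this in server.conf on the forwarder (a sketch; the 30 MB value is the one quoted, and the stanza name follows Splunk's queue-settings convention):

```ini
# server.conf on the forwarder (a sketch, not a full config)
[queue=parsingQueue]
# Enlarge the parsing queue so bursts don't back up ingestion
maxSize = 30MB
```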

Someone who's ingesting a LOT of data please chime in with your setup/ numbers.

1 Solution

Splunk Employee

Unless you have a reason for throttling maxKBps, you could set it to unlimited (maxKBps = 0). If it is a really high-velocity source of data (a lot of data in a short period of time), you might also consider multiple pipelines on your forwarders.
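A sketch of both suggestions combined; the settings below are the documented knobs (`maxKBps` in limits.conf, `parallelIngestionPipelines` in server.conf), and the pipeline count of 2 is just an example value:

```ini
# limits.conf on the forwarder — remove the throughput cap
[thruput]
maxKBps = 0

# server.conf on the forwarder — run a second ingestion pipeline
[general]
parallelIngestionPipelines = 2
```

Note that each additional pipeline consumes extra CPU and memory on the forwarder, so increase the count only as far as the host can support.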

https://docs.splunk.com/Documentation/Forwarder/7.3.1/Forwarder/Configureaforwardertohandlemultiplep...


Builder

I added the 2nd pipeline to one of the servers yesterday. Making these adjustments (the ones mentioned above) has definitely reduced the delays.

I found out today that increasing maxKBps to 8192 doesn't have a negative effect on the network, since 144 MB/s of transfers can't compare to some of the traffic we see coming from SQL. I feel comfortable making these changes now.
