Getting Data In

Low throughput with high RTT Forwarder -> Indexer connection

donald_xero
Explorer

We're trying to push event data from a heavy forwarder to our central indexer over a VPN with a fairly high RTT (~180ms). Graphing the forwarder throughput log lines from splunkd shows that the connection never gets above 50 kbytes/s. Forwarders much closer to the indexer can easily push several megabytes per second.

Is there some way we can tune the TCP settings in Splunk? Running iperf between our indexer and forwarder, we can only push ~700kbps with the default Windows socket buffers, but if we push those buffers up to 1MB we get 1.5Mbps+. Is there any way to set those socket buffer sizes in Splunk?
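For reference, the arithmetic here is just the bandwidth-delay product: a single TCP connection tops out at roughly window / RTT. A rough back-of-the-envelope sketch in plain Python (nothing Splunk-specific; the 180ms RTT and buffer sizes are the figures above):

    # Bandwidth-delay-product arithmetic for a single TCP connection.
    # Sustained throughput is capped at roughly effective_window / RTT, where
    # the effective window is limited by the smaller of the sender's SO_SNDBUF
    # and the receiver's advertised window.

    RTT = 0.180  # seconds -- the ~180ms VPN round trip

    def max_throughput(window_bytes, rtt=RTT):
        """Upper bound on sustained throughput (bytes/s) for a given window."""
        return window_bytes / rtt

    def window_needed(target_bytes_per_s, rtt=RTT):
        """Effective window (bytes) needed to sustain a target rate."""
        return target_bytes_per_s * rtt

    print(window_needed(50 * 1024))        # observed ~50 kbytes/s implies only ~9 KB in flight
    print(max_throughput(64 * 1024))       # a 64 KB window caps out around 364 kbytes/s here
    print(window_needed(2 * 1024 * 1024))  # sustaining 2 Mbytes/s needs roughly a 360 KB window

So it looks like the default socket buffers, rather than the VPN itself, are the limiting factor on this link.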

0 Karma
1 Solution

GKC_DavidAnso
Path Finder

Hi Donald

Have you tried tweaking the default TCP receive buffer on your indexer? You should be able to do that at the OS level: http://support.microsoft.com/kb/224829

If this doesn't work, an alternative might be to run a Python-based TCP proxy as a scripted input on your HWF. The scripted input won't actually feed data into Splunk; it's just a way to have Splunk start the Python process.

You should be able to configure Python's TCP send buffer as required, have the HWF send to the port your TCP proxy is listening on, and have the proxy forward on to your indexer (a rough sketch follows below).

Python TCP Proxy:
http://code.activestate.com/recipes/114642/

Python TCP Send Buffer:
http://nullege.com/codes/search/socket.SO_SNDBUF

I hope that helps.
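As a rough illustration of the proxy idea (this is only a sketch, not the ActiveState recipe; the addresses, ports and 1MB buffer are placeholders to adjust for your environment), a minimal relay that enlarges SO_SNDBUF on the onward connection might look like:

    #!/usr/bin/env python
    # Sketch of a TCP relay: listen locally, connect on to the indexer with a
    # large SO_SNDBUF, and copy bytes in both directions.
    import socket
    import threading

    LISTEN_ADDR = ("127.0.0.1", 19997)            # point the HWF's output here
    INDEXER_ADDR = ("indexer.example.com", 9997)  # placeholder indexer address
    SND_BUF = 1024 * 1024                         # 1 MB send buffer

    def pump(src, dst):
        """Copy bytes from src to dst until either side closes."""
        try:
            while True:
                data = src.recv(65536)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass
        finally:
            src.close()
            dst.close()

    def handle(client):
        upstream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Enlarge the send buffer before connecting so the window can open up.
        upstream.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, SND_BUF)
        upstream.connect(INDEXER_ADDR)
        # One thread per direction so reads and writes can overlap.
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen(5)
    while True:
        client, _ = server.accept()
        handle(client)

You would then have the HWF send to 127.0.0.1:19997 and let the relay carry the traffic over the VPN.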


donald_xero
Explorer

This has now been resolved by migrating the heavy forwarder in question to RHEL.

0 Karma

donald_xero
Explorer

Tried the TCP tweaking, but it doesn't help. Vista introduced TCP auto-tuning, which makes Windows' TCP performance much better, and the old tuning parameters no longer have much effect.

Setting a large receive window on the indexer made no difference. I've used a TCP proxy on the forwarder with a large send buffer, and that will send to an unmodified, untuned indexer at >400 kbytes/s -- it'd be tidier to be able to set the TCP send buffer inside Splunk itself.
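Incidentally, a quick way to check what buffer sizes the OS is actually handing out by default is just to ask a fresh socket (plain Python, run on whichever box you're curious about):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("default SO_SNDBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    print("default SO_RCVBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    s.close()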

That Python proxy looks suspiciously slow (small buffer size, synchronous read/write calls), so I think I'll stick with my hacked-up rinetd.

0 Karma

gkanapathy
Splunk Employee

There is no way to do that within Splunk; Splunk will use whatever TCP settings the OS provides. You may be able to indirectly mitigate the latency by using a light forwarder instead, or, via a different mechanism, by compressing the data.
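For the compression route, it needs enabling on both ends of the forwarding link; roughly (the output group name and port below are just examples):

    # outputs.conf on the (heavy) forwarder -- example group name and port
    [tcpout:primary_indexers]
    server = indexer.example.com:9997
    compressed = true

    # inputs.conf on the receiving indexer -- the setting must match both ends
    [splunktcp://9997]
    compressed = true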

0 Karma