Splunk forwarders using too many sockets

New Member

I have the following configuration in outputs.conf for a Splunk forwarder installed on a Linux machine.

[tcpout]
connectionTimeout = 20
defaultGroup = default-autolb-group
dropEventsOnQueueFull = -1
indexAndForward = false
maxConnectionsPerIndexer = 2
maxFailuresPerInterval = 2
maxQueueSize = 500KB
readTimeout = 300
secsInFailureInterval = 1
useACK = false
writeTimeout = 300

[tcpout:default-autolb-group]
autoLB = true
autoLBFrequency = 30
compressed = false

The forwarder is also sending historical logs from the past few months. As soon as Splunk starts, many other processes on that machine fail for lack of available ports; I am guessing the forwarder is using a lot of sockets.
Is there any way to limit the number of sockets it uses?
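
To confirm the guess, something like the following on the forwarder should show how many TCP sockets splunkd actually holds (ss is assumed here; netstat works the same way on older boxes):

# count TCP sockets owned by splunkd (run as root on the forwarder)
ss -tanp | grep -c splunkd
# socket-state summary, to spot e.g. a pile-up of TIME-WAIT sockets
ss -s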


Re: Splunk forwarders using too many sockets

Engager

Check for any unused input ports where data was originally configured to be received but was later stopped for some reason. You can remove an unused port with splunk remove udp <port#> (or splunk remove tcp <port#>).

A bit of housekeeping might just help.
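
If the exhaustion turns out to be on the outbound side rather than unused inputs, the outputs.conf settings that bound the connection count are worth a look. This is a sketch only, assuming two indexers; the server addresses are placeholders:

[tcpout:default-autolb-group]
# placeholder indexer addresses
server = idx1.example.com:9997, idx2.example.com:9997
autoLB = true
# rotate between indexers less often, so fewer short-lived connections
autoLBFrequency = 120
# cap concurrent connections per indexer (the config above sets 2)
maxConnectionsPerIndexer = 1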


Re: Splunk forwarders using too many sockets

New Member

My indexer ports are receiving data and look fine. The problem is on the forwarder machine, which is exhausting its socket availability. No other ports were configured initially.
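
If the forwarder keeps reconnecting while it replays the backlog (the autoLBFrequency = 30 above means a rotation every 30 seconds), the sockets may be piling up in TIME-WAIT and exhausting the ephemeral port range rather than hitting any Splunk limit. A hedged sketch of checking and widening that range on Linux; the values are examples, not recommendations:

# show the current ephemeral port range
sysctl net.ipv4.ip_local_port_range
# widen it (persist in /etc/sysctl.conf if it helps)
sysctl -w net.ipv4.ip_local_port_range="15000 65000"
# allow reuse of TIME-WAIT sockets for new outbound connections
sysctl -w net.ipv4.tcp_tw_reuse=1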
