Here is the situation:

I am only able to push between 55 and 60 EPS (events per second) into an index via TCP port 5000.

During a load test, events are generated at rates above 120 events/sec and pushed in real time to a single Splunk server instance (no clusters). The Splunk server receives volumes of 55-60 EPS without any trouble, and the time to open the TCP connection, send an event, and close the connection is observed to be under 300-400 milliseconds. Unfortunately, once the rate goes above 60 EPS there is a drastic increase in the time taken to receive these events, up to 14 seconds, which effectively limits the EPS the Splunk server can handle on that TCP port to 55-60.
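For reproducibility, the per-event send pattern looks roughly like the minimal sketch below (splunk-host, port 5000, and the payload are placeholders standing in for my actual setup):

import socket
import time

EVENT = b'{"msg": "load test event"}\n'  # placeholder payload
HOST, PORT = "splunk-host", 5000         # placeholders for my setup

def send_one_event():
    # Current pattern: open a connection, send one event, close.
    # This is repeated for every single event.
    t0 = time.monotonic()
    with socket.create_connection((HOST, PORT), timeout=15) as s:
        s.sendall(EVENT)
    return time.monotonic() - t0

# Fire 120 events back-to-back and report the worst per-event time.
latencies = [send_one_event() for _ in range(120)]
print(f"max per-event time: {max(latencies) * 1000:.0f} ms")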
On the assumption that local ports/connections were being exhausted, I tried the following, without success:
1. Decreased the TCP keepalive time from 7200 to 60: sudo sysctl -w net.ipv4.tcp_keepalive_time=60
2. Widened the ephemeral port range: sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
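As far as I understand, tcp_keepalive_time only affects idle established connections, not sockets left in TIME_WAIT by the open/send/close pattern, so I also wanted to verify whether TIME_WAIT sockets actually pile up during the test. A quick count like the sketch below worked for me (assumes Linux; in /proc/net/tcp the state column value 06 means TIME_WAIT):

from collections import Counter

# Linux encodes the TCP state as hex in the 4th column of
# /proc/net/tcp: "06" is TIME_WAIT, "01" is ESTABLISHED.
with open("/proc/net/tcp") as f:
    next(f)  # skip the header line
    states = Counter(line.split()[3] for line in f)

print("TIME_WAIT sockets:  ", states.get("06", 0))
print("ESTABLISHED sockets:", states.get("01", 0))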
Configuration of the Splunk server:
Hardware: 16 cores, 64 GB RAM
Licence type: Enterprise
Utilization at 60 EPS: < 20%
Is there any configuration I can alter (and where) so that the Splunk server can scale and handle more than 60 EPS on the TCP port?
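For what it is worth, one client-side change I am also weighing is to hold a single TCP connection open and stream newline-delimited events over it (as far as I know, Splunk's TCP input splits events on newlines by default), instead of paying connection setup and teardown for every event. A minimal sketch of that pattern, with the same placeholder host/port as above:

import socket

HOST, PORT = "splunk-host", 5000  # placeholders, as above

# Alternative pattern: one long-lived connection, many events.
with socket.create_connection((HOST, PORT), timeout=15) as s:
    for i in range(120):
        s.sendall(f'{{"msg": "load test event {i}"}}\n'.encode())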
Do let me know if you need any further clarification; your help in resolving this is greatly appreciated.