I am frequently getting this socket warning:
WARN HttpListener - Socket error from 127.0.0.1 while accessing /servicesNS/nobody//data/inputs/rest/*/: Broken pipe
source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd
I have checked the answers below, but was unable to resolve my query:
https://answers.splunk.com/answers/105292/what-is-the-cause-of-these-socket-errors-reported-in-splun...
limits:
maxthread=0
maxsocket=0
INFO loader - Limiting REST HTTP server to 1365 threads
INFO loader - Limiting REST HTTP server to 1365 sockets
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 47488
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 8192
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
/etc/security/limits.conf
* hard nofile 64000
* hard nproc 8192
* hard fsize -1
Hi @sarvesh_11,
Your open files (-n) limit is still at 1024. You should increase that for the Splunk user, and should also increase the soft limit.
Try something like this in your limits.conf file:
* hard nofile 64000
* soft nofile 64000
* hard nproc 8192
* soft nproc 8192
* hard fsize -1
* soft fsize -1
Or like this if that doesn't work:
https://docs.splunk.com/Documentation/Splunk/7.2.6/Troubleshooting/ulimitErrors#Set_limits_using_the...
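For reference, on hosts where splunkd is started by systemd, /etc/security/limits.conf is ignored and the limits must go into a unit override instead. A minimal sketch (the unit name Splunkd.service and the override path are assumptions; check how your host actually starts Splunk):

```ini
# /etc/systemd/system/Splunkd.service.d/override.conf
# Created e.g. via: systemctl edit Splunkd
[Service]
LimitNOFILE=64000
LimitNPROC=8192
LimitFSIZE=infinity
```

After adding it, run systemctl daemon-reload and restart the service so the new limits are picked up.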
Cheers,
David
Why was the above answer marked as accepted when, on June 03 2019, @sarvesh_11 states the issue persists?
We're seeing this too. Was this issue resolved?
Hey @dijikul
We upgraded our Splunk Enterprise version to 7.3 and were able to solve this issue.
Hey @DavidHourani ,
Thanks for dropping by.
After posting the question I soon realized that, and made the changes in limits.conf:
@splunk hard nofile 64000
@splunk hard nproc 8192
@splunk hard fsize -1
@splunk soft nofile 64000
@splunk soft nproc 8192
@splunk soft fsize -1
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 47488
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 64000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 8192
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Still the issue persists.
Also, while checking the app ownership, I can see different users (root, splunk), so shall I change the owner to splunk for all the apps?
You can use * for the user in /etc/security/limits.conf for now. Restart Splunk after changing that setting, and check the _internal logs for ulimit to double-check that Splunk also reads the right ulimits. If you run the following search you should get the relevant lines:
index=_internal component=ulimit
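As a quick OS-side sanity check (a minimal sketch; run it in a shell belonging to the user that launches splunkd, which is an assumption about your setup):

```shell
# Print the soft and hard open-file limits for the current user.
# If the soft value still shows 1024 after editing limits.conf,
# the new limits are not being inherited (e.g. you need a fresh
# login session, or the service is started by systemd, which
# ignores limits.conf).
echo "open files (soft): $(ulimit -Sn)"
echo "open files (hard): $(ulimit -Hn)"
```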
@DavidHourani
Thanks much man!
The changes are reflected; I shall get back to you if the "Broken pipe" error is still there.
Thanks a lot for your prompt response 🙂
@DavidHourani ,
Hey David, the error is still coming. The changes to the limits are reflected, but still:
Socket error from 127.0.0.1 while accessing /servicesNS/nobody//data/inputs/rest/http_connections_*/: Broken pipe.
Or do I need to change maxSockets and maxThreads to a negative integer?
Currently they are 0.
Any more remedies you have for this?
Yeah, try setting maxSockets and maxThreads to -1 and see if it helps. Any other errors you're getting?
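For anyone following along, those two settings live in server.conf. A minimal sketch of the change being discussed (stanza name per the server.conf spec; verify against the docs for your Splunk version before applying):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[httpServer]
# 0 = let splunkd auto-size from the ulimits; a negative value removes the cap
maxThreads = -1
maxSockets = -1
```

A restart of splunkd is needed for the change to take effect.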
Yeah, a few more, related to that:
"ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/rest_ta/bin/rest.py" HTTP Request error: 500 Server Error: Internal Server Error"
"ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_box/bin/box_service.py" InsecureRequestWarning)"
Okay, try out maxSockets and maxThreads and let's see.
@DavidHourani
Well, to do this and restart services again, I have to get a maintenance window from the business.
Ideally that should not happen, right? Setting them to -1 will also impact the server's performance.
Yeah... ideally you would need a reasonable limit to avoid breaking the server. Are you using a single SH?
Yes, a standalone Search Head.
Hey @DavidHourani ,
Modifying maxSockets and maxThreads to -1 has not fixed my issue. I can still see the Broken Pipe messages.
Also, while browsing for this issue, I found a link where people have been struggling with such errors for a long time; strangely, no Splunk employee has tried to resolve it.
FYR.. https://answers.splunk.com/answers/105292/what-is-the-cause-of-these-socket-errors-reported-in-splun...
It seems this is a loophole in older Splunk versions, as known issue SPL-82389 has not been closed and addressed since version 6.2.9.
Which Splunk version are you running? Have you tried reaching out to Splunk support? You MIGHT be hitting a bug then...
We are currently on 6.6.3
Not yet; we plan to upgrade in the near future.