Monitoring Splunk

HttpListener - Socket error from 127.0.0.1 while accessing : Broken Pipe

Path Finder

I am frequently getting this warning about a socket error:

WARN HttpListener - Socket error from 127.0.0.1 while accessing /servicesNS/nobody//data/inputs/rest/*/: Broken pipe
source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd

I have checked the answer below, but was unable to resolve my issue:
https://answers.splunk.com/answers/105292/what-is-the-cause-of-these-socket-errors-reported-in-splun...

server.conf limits:
maxThreads = 0
maxSockets = 0

INFO loader - Limiting REST HTTP server to 1365 threads
INFO loader - Limiting REST HTTP server to 1365 sockets

ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 47488
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 8192
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

/etc/security/limits.conf
* hard nofile 64000
* hard nproc 8192
* hard fsize -1
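One thing worth noting: `ulimit -a` reports the limits of the current shell, not those of the running splunkd process. A process's effective limits can be read from `/proc/<pid>/limits`. A minimal sketch, demonstrated here on the current shell (`$$`); substitute `$(pgrep -o splunkd)` to inspect splunkd itself:

```shell
# /proc/<pid>/limits shows the limits a process actually inherited.
# Shown for the current shell; for splunkd use: grep "Max open files" /proc/$(pgrep -o splunkd)/limits
grep "Max open files" "/proc/$$/limits"
```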


SplunkTrust

Hi @sarvesh_11,

Your open files (-n) limit is still at 1024. You should increase it for the Splunk user, and you should also increase the soft limit.

Try something like this in your limits.conf file:

*    hard    nofile     64000
*    soft    nofile     64000
*    hard    nproc     8192
*    soft    nproc     8192
*    hard    fsize      -1  
*    soft    fsize      -1  
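Keep in mind that limits from /etc/security/limits.conf only apply to new login sessions, so after editing the file, verify from a fresh shell for the Splunk account (e.g. `su - splunk -c 'ulimit -Sn'` for the soft limit, `-Hn` for the hard limit). A hedged sketch: a soft limit can also be adjusted within a session (up to the hard limit), demonstrated here by lowering it in a throwaway subshell:

```shell
# New login sessions pick up limits.conf; verify e.g. with:
#   su - splunk -c 'ulimit -Sn'   # soft open-files limit
#   su - splunk -c 'ulimit -Hn'   # hard open-files limit
# A soft limit can be changed inside a session too (never above the hard limit):
bash -c 'ulimit -S -n 512; ulimit -Sn'   # prints 512
```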

Or like this if that doesn't work:
https://docs.splunk.com/Documentation/Splunk/7.2.6/Troubleshooting/ulimitErrors#Set_limits_using_the...

Cheers,
David


Communicator

Why was the above answer marked as accepted when, on June 03 2019, @sarvesh_11 stated that the issue persists?

We're seeing this too. Was this issue resolved?


Path Finder

Hey @dijikul,
We upgraded Splunk Enterprise to version 7.3, and that resolved the issue.



Path Finder

Hey @DavidHourani ,
Thanks for dropping by.
Soon after posting the question I realized that, and made the changes in limits.conf:

@splunk hard nofile 64000
@splunk hard nproc 8192
@splunk hard fsize -1
@splunk soft nofile 64000
@splunk soft nproc 8192
@splunk soft fsize -1

ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 47488
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 64000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 8192
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

The issue still persists.
Also, while checking the app ownership I can see different users (root, splunk), so should I change the owner of all the apps to splunk?


SplunkTrust
SplunkTrust

You can use * for the user in /etc/security/limits.conf for now. Restart Splunk after changing that setting, and check the _internal logs for the ulimit component to double-check that Splunk also picks up the right ulimits. If you run the following search you should get the relevant lines:

index=_internal  component=ulimit
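If the SPL search is unavailable, the same startup lines can be grepped straight out of splunkd.log on the host (the path below assumes a default /opt/splunk install). A sketch; the sample log lines here are fabricated for illustration, the real ones are emitted by the ulimit component at splunkd startup:

```shell
# Build a small sample log to illustrate (the two lines below are fabricated):
log=$(mktemp)
cat > "$log" <<'EOF'
05-28-2019 10:00:01.000 +0000 INFO  ulimit - Limit: open files: 64000 files
05-28-2019 10:00:01.000 +0000 INFO  ulimit - Limit: user processes: 8192 processes
EOF
grep 'ulimit' "$log"   # on a real host: grep ulimit /opt/splunk/var/log/splunk/splunkd.log
rm -f "$log"
```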

Path Finder

@DavidHourani
Thanks so much!
The changes are now reflected; I shall get back to you if the "Broken pipe" error is still there.

Thanks a lot for your prompt response 🙂


Path Finder

@DavidHourani ,
Hey David, the error is still coming. The changes to the limits are reflected, but I still see:

Socket error from 127.0.0.1 while accessing /servicesNS/nobody//data/inputs/rest/httpconnections*/: Broken pipe.

Or do I need to change maxSockets and maxThreads to a negative integer? Currently they are 0.
Do you have any more remedies?


SplunkTrust
SplunkTrust

Yeah, try setting maxSockets and maxThreads to -1 and see if it helps. Any other errors you're getting?
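For reference, a hedged sketch of where these settings live: in server.conf under the [httpServer] stanza (e.g. in $SPLUNK_HOME/etc/system/local/). A value of 0 lets Splunk size the limits automatically from the system ulimits (consistent with the "Limiting REST HTTP server to 1365 threads" lines logged above), and a negative value removes the cap. A Splunk restart is required for the change to take effect.

```
[httpServer]
maxSockets = -1
maxThreads = -1
```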


Path Finder

Yeah, a few more, related to the same thing:

"ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/rest_ta/bin/rest.py" HTTP Request error: 500 Server Error: Internal Server Error"

"ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/SplunkTAbox/bin/box_service.py" InsecureRequestWarning)"


SplunkTrust
SplunkTrust

Okay, try out maxSockets and maxThreads = -1 and let's see.


Path Finder

@DavidHourani
Well, to do this and restart services again, I have to request a maintenance window from the business.
Ideally that should not be necessary, right? And won't setting them to -1 also impact the server's performance?


SplunkTrust
SplunkTrust

Yeah... ideally you would need a reasonable limit to avoid breaking the server. Are you using a single SH?


Path Finder

Yes, a standalone Search Head.


Path Finder

Hey @DavidHourani ,
Modifying maxSockets and maxThreads to -1 has not fixed my issue. I can still see the Broken pipe messages.

Also, while researching this issue, I found a link where people have been struggling with these errors for a long time; strangely, none of the Splunk employees tried to resolve it.
FYR: https://answers.splunk.com/answers/105292/what-is-the-cause-of-these-socket-errors-reported-in-splun...

It seems this is a loophole in older Splunk versions, as known issue SPL-82389 has not been closed or addressed since version 6.2.9.


SplunkTrust
SplunkTrust

Which Splunk version are you running? Have you tried reaching out to Splunk Support? You MIGHT be hitting a bug then...


Path Finder

We are currently on 6.6.3.
Not yet; we plan to upgrade in the near future.
