Nessus vulnerability scan causes splunkweb to shut down with "too many open files" error

jeff
Contributor

Had myself a little denial of service today. Ran a Nessus scan for the first time on our main Splunk indexer/web interface. The scan caused Splunkweb to shut down...

2010-05-13 15:52:16,301 ERROR  [4be85e4ab8125c290] root:120 - ENGINE: Error in HTTP server: shutting down
Traceback (most recent call last):
  File "/splunk/app/splunk/lib/python2.6/site-packages/cherrypy/process/servers.py", line 73, in _start_http_thread
  File "/splunk/app/splunk/lib/python2.6/site-packages/cherrypy/wsgiserver/__init__.py", line 1662, in start self.tick()
  File "/splunk/app/splunk/lib/python2.6/site-packages/cherrypy/wsgiserver/__init__.py", line 1717, in ticks, addr = self.socket.accept()
  File "/splunk/app/splunk/lib/python2.6/ssl.py", line 317, in accept newsock, addr = socket.accept(self)
  File "/splunk/app/splunk/lib/python2.6/socket.py", line 195, in accept error: [Errno 24] Too many open files

Is there something I should tune in Nessus to scale back the requests?
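
In case it helps anyone else hitting this: "[Errno 24] Too many open files" means the splunkweb process exhausted its file-descriptor limit. Here's a minimal diagnostic sketch to see what limit the process is actually running under (assuming a Unix host; the /proc path exists on Linux and Solaris, and nothing here is Splunk-specific):

    import os
    import resource

    # The soft limit is what the process actually hits; the hard limit is
    # the ceiling it is allowed to raise the soft limit to.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("nofiles soft limit: %s" % soft)   # e.g. 256 on a default Solaris install
    print("nofiles hard limit: %s" % hard)

    # On systems exposing /proc/<pid>/fd, count descriptors currently open.
    fd_dir = "/proc/%d/fd" % os.getpid()
    if os.path.isdir(fd_dir):
        print("descriptors in use: %d" % len(os.listdir(fd_dir)))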

2 Solutions

jrodman
Splunk Employee

Solaris has a nofiles ulimit of 256 by default; other Unixes have larger defaults.

There are a variety of possible responses we could take to this sort of thing:

  • throttle very active users
  • refuse to allow more than N concurrent accesses
  • scale either of the above in proportion to the nofiles ulimit

Does this cure seem better than the poison? Each of these introduces new problems of its own; a sketch of the third idea follows.
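
As a rough illustration of that third option, here is a toy threaded server that caps concurrent connections in proportion to the soft nofiles limit. The 0.5 headroom factor, the port, and the handler are assumptions for the sketch, not how splunkweb actually behaves:

    import resource
    import socket
    import threading

    # Derive a connection cap from the soft nofiles limit, leaving headroom
    # for log files, index files, and other descriptors the process needs.
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    max_conns = max(1, int(soft * 0.5))
    slots = threading.BoundedSemaphore(max_conns)

    def handle(conn):
        try:
            conn.recv(4096)                          # placeholder for real request handling
            conn.sendall(b"HTTP/1.0 200 OK\r\n\r\n")
        finally:
            conn.close()
            slots.release()                          # free the slot once the fd is closed

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8000))
    server.listen(128)
    while True:
        slots.acquire()                              # block rather than exceed the cap
        conn, _addr = server.accept()
        threading.Thread(target=handle, args=(conn,)).start()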

bfaber
Communicator

It looks like this may have been fixed in 4.1.4. Have you tried it?

jeff
Contributor

Confirmed that it is fixed in 4.1.5 (we skipped 4.1.4).

bfaber
Communicator

I run Nessus daily on the Splunk server without any issues. Perhaps you have a very aggressive scan profile? Can you share it?

jeff
Contributor

Yeah, it looks like it may be a Solaris x64 issue. I have a Splunk engineer researching it for me (my Solaris-fu is not strong).
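
For what it's worth, if the Solaris default of 256 turns out to be the culprit, the usual workaround is to raise the soft nofiles limit before splunkweb starts, normally with ulimit -n in the init script. The Python equivalent below, including the arbitrary 65536 fallback, is just an illustration:

    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # An unprivileged process may raise its soft limit up to the hard limit.
    target = hard if hard != resource.RLIM_INFINITY else 65536
    if soft < target:
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
        print("raised nofiles soft limit from %d to %d" % (soft, target))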

bfaber
Communicator

Could this be OS-specific?

jeff
Contributor

My scan was really nothing special: one that I've run on dozens of other servers. Safe checks, moderate simultaneous threads, etc. I'm going to give it another go later today to see what happens.
