Nessus vulnerability scan causes splunkweb to shut down with "too many open files" error

jeff
Contributor

Had myself a little denial of service today. Ran a Nessus scan for the first time on our main Splunk indexer/web interface. The scan caused Splunkweb to shut down...

2010-05-13 15:52:16,301 ERROR  [4be85e4ab8125c290] root:120 - ENGINE: Error in HTTP server: shutting down
Traceback (most recent call last):
  File "/splunk/app/splunk/lib/python2.6/site-packages/cherrypy/process/servers.py", line 73, in _start_http_thread
  File "/splunk/app/splunk/lib/python2.6/site-packages/cherrypy/wsgiserver/__init__.py", line 1662, in start self.tick()
  File "/splunk/app/splunk/lib/python2.6/site-packages/cherrypy/wsgiserver/__init__.py", line 1717, in ticks, addr = self.socket.accept()
  File "/splunk/app/splunk/lib/python2.6/ssl.py", line 317, in accept newsock, addr = socket.accept(self)
  File "/splunk/app/splunk/lib/python2.6/socket.py", line 195, in accept error: [Errno 24] Too many open files

Something I should tune with Nessus to scale back the requests?
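
For reference, [Errno 24] means the splunkweb process ran out of file descriptors: every connection the scanner opens holds one until it is closed. A quick way to see what limit the process inherited, and to raise the soft limit, is sketched below in Python; the 4096 target is just an illustrative value, not a Splunk recommendation.

# Sketch: inspect and raise the file-descriptor (nofiles) limit of the
# current process. The 4096 target is illustrative, not a recommendation.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("nofiles soft limit: %s, hard limit: %s" % (soft, hard))

# Raise the soft limit as far as the hard limit allows; raising the
# hard limit itself requires root.
target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))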

2 Solutions

jrodman
Splunk Employee

Solaris has a nofiles ulimit of 256 by default. Other unixes have larger defaults.

There's a variety of possible responses we could have to this sort of thing:

  • throttle very active users
  • refuse to allow more than N concurrent accesses
  • scale either of the above in proportion to the nofiles ulimit (see the sketch below)

Does this cure seem better than the poison? It introduces new problems.
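
As a back-of-the-envelope illustration of the third option, a cap could be derived from whatever nofiles ulimit the process starts with. Everything here is hypothetical: the connection_budget helper, the reserve of 64 descriptors, and the 50% fraction are made up for the sketch, not anything Splunk ships.

# Hypothetical sketch: size a concurrent-connection cap from the
# process's nofiles ulimit. All numbers are made up for illustration.
import resource

def connection_budget(reserve=64, fraction=0.5):
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # Keep `reserve` descriptors for log files, config files, and the
    # socket back to splunkd; let HTTP connections use a fraction of
    # the remainder.
    return max(1, int((soft - reserve) * fraction))

print("would accept at most %d concurrent connections" % connection_budget())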

bfaber
Communicator

It looks like this may have been fixed in 4.1.4. Have you tried it?

jeff
Contributor

Confirmed that it is fixed in 4.1.5 (skipped 4.1.4).

bfaber
Communicator

I run Nessus daily on the Splunk server without any issues. Perhaps you have a very aggressive scan profile? Can you share it?

jeff
Contributor

Yeah, it looks like it may be a Solaris x64 issue. I have a Splunk engineer researching it for me (my Solaris-fu is not strong).

bfaber
Communicator

Could this be OS specific?

jeff
Contributor

My scan was really nothing special: one that I've run on dozens of other servers. Safe checks, moderate simultaneous threads, etc. I'm going to give it another go later today to see what happens.
