Monitoring Splunk

Can slow search head pool disk I/O lead to "Splunkd daemon is not responding" errors?

tsunamii
Path Finder

I have two search heads with search head pooling set up. The search head pool is on an NFS filer, and that had been working well for quite some time. However, over the weekend the NFS filer failed. As a temporary measure, I decided to have one of the search head Linux hosts also act as the NFS server for the search head pool, knowing that it is probably not as fast (disk I/O wise) as the actual filer.

However, since that change was made, my users have not been able to log in to Splunk via the Splunk Web UI on either search head. They keep getting the "Splunkd daemon is not responding" message in the UI. After digging through the logs, I found the following messages in web_service.log:

2014-08-22 23:54:14,879 ERROR   [53f83a78bb1c5284d0] __init__:468 - Socket error communicating with splunkd (error=The read operation timed out), path = /servicesNS/galens/search/saved/searches
2014-08-22 23:54:14,879 ERROR   [53f83a78bb1c5284d0] decorators:379 - Splunkd daemon is not responding: ('Error connecting to /servicesNS/admin/search/saved/searches: The read operation timed out',)
Traceback (most recent call last):
  File "/opt/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 365, in handle_exceptions
    return fn(self, *a, **kw)
  File "", line 1, in 
  File "/opt/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 420, in apply_cache_headers
    response = fn(self, *a, **kw)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/controllers/view.py", line 1007, in render
    can_alert, searches = self.get_saved_searches(app)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/controllers/view.py", line 870, in get_saved_searches
    searches = en.getEntities('saved/searches', namespace=app, search='is_visible=1 AND disabled=0', count=500, _with_new='1')
  File "/opt/splunk/lib/python2.7/site-packages/splunk/entity.py", line 129, in getEntities
    atomFeed = _getEntitiesAtomFeed(entityPath, namespace, owner, search, count, offset, sort_key, sort_dir, sessionKey, uri, hostPath, **kwargs)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/entity.py", line 222, in _getEntitiesAtomFeed
    serverResponse, serverContent = rest.simpleRequest(uri, getargs=kwargs, sessionKey=sessionKey, raiseAllErrors=True)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/rest/__init__.py", line 469, in simpleRequest
    raise splunk.SplunkdConnectionException, 'Error connecting to %s: %s' % (path, str(e))
SplunkdConnectionException: Splunkd daemon is not responding: ('Error connecting to /servicesNS/admin/search/saved/searches: The read operation timed out',)

Could the slow search head pool be the root cause? How can I temporarily work around this issue?

1 Solution

splunkIT
Splunk Employee

Slow search head pool I/O can potentially cause this problem, because the apps reside on your NFS mount (the SH pool).

Until your NFS filer is repaired, you can temporarily work around this by setting the web.conf parameter splunkdConnectionTimeout to a higher value (see the example stanza after the note below):

splunkdConnectionTimeout =
* Number of seconds to wait before timing out when communicating with splunkd
* Must be at least 30
* Values smaller than 30 will be ignored, resulting in the use of the default value
* Defaults to 30

http://docs.splunk.com/Documentation/Splunk/latest/Admin/Webconf

Note: The splunkdConnectionTimeout parameter is only available in Splunk 6.x and above.
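For example, a minimal sketch of the override on each search head. The 300-second value is only an assumption for illustration; pick something comfortably longer than the splunkd response times you are seeing. Splunk Web needs to be restarted for web.conf changes to take effect.

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# Raise the UI-to-splunkd timeout while pool storage is slow (default is 30 seconds)
splunkdConnectionTimeout = 300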


tsunamii
Path Finder

Interestingly, when I hit the REST endpoint in question directly, it took almost a minute before I got a response from splunkd:

curl -k -u admin:changeme https://localhost:8089/servicesNS/admin/search/saved/searches
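To confirm that pool I/O is the bottleneck, a rough sketch like the following can help (the pool path /opt/splunk_shp is a placeholder for your actual mount point, and admin:changeme stands in for real credentials):

# Time the same REST call that Splunk Web makes
time curl -k -u admin:changeme -o /dev/null https://localhost:8089/servicesNS/admin/search/saved/searches

# Rough synchronous-write latency check against the shared pool storage
time dd if=/dev/zero of=/opt/splunk_shp/iotest.tmp bs=4k count=1000 oflag=dsync
rm -f /opt/splunk_shp/iotest.tmp

If the dd test shows very low throughput, the temporary NFS host is likely why splunkd is slow enumerating saved searches.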

