I am currently running into problems with my search heads. Users are experiencing intermittent splunkd timeouts, which Splunk Web reports during searches, logins, and other operations. When I start Splunk from the CLI I receive the message "timed out waiting for splunkd", yet Splunk Web starts immediately. The search head configuration has not changed in months, and the splunkd log reports no errors or warnings at the times Splunk Web presents these messages.
During my investigation I noticed that Splunk was taking an exorbitant amount of time to read its configuration files. This came to light when I started running Process Monitor (Procmon). My initial thought was an ACL or network issue, but after watching the read/write activity in Procmon it appeared that I had intermittent disk transfer (IOPS) issues with my CIFS share.
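To put a rough number on the config-read latency without Procmon, a quick sketch like the following can time a full pass over the .conf files on each storage backend (the paths are hypothetical; point it at your pool's etc directory and at a local copy for comparison):

```python
import os
import time

def time_config_reads(root):
    """Read every .conf file under root and return (count, seconds).

    root is a hypothetical path to the search head pool's config
    directory (e.g. the CIFS share); adjust for your environment.
    """
    start = time.perf_counter()
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".conf"):
                # Read the whole file so the share actually serves the bytes.
                with open(os.path.join(dirpath, name), "rb") as f:
                    f.read()
                count += 1
    return count, time.perf_counter() - start

# Example (hypothetical UNC path to the pooled config share):
# files, secs = time_config_reads(r"\\filer\splunk_pool\etc")
# print(f"read {files} .conf files in {secs:.2f}s")
```

Running the same pass against local disk and the share gives a simple before/after ratio to hand to the infrastructure team alongside the IOzone numbers.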
To confirm this, I disabled search head pooling. Once it was disabled, Splunk Web functioned as expected. My next step was to create an NFS share on a remote system and copy all of the pool configurations to it; Splunk Web worked like a charm. I switched back and forth between local disk, a Windows NFS share, a NetApp share, and the CIFS share. Not very scientific, but it worked. To get my infrastructure teams to even look at the problem I needed concrete numbers, so I ran IOzone tests against all four setups.
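For reference, the toggling between backends can be done with Splunk's pooling CLI (the share path below is a placeholder; substitute your own mount, and note that each pooling change requires a splunkd restart):

```shell
# Disable search head pooling to test against local config only
splunk pooling disable
splunk restart

# Re-enable pooling against a given shared-storage path
splunk pooling enable \\filer\splunk_pool
splunk restart

# Sanity-check that the pool configuration is visible
splunk pooling validate
```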
Results (spreadsheet too large to post):
Local disk: fast (as expected)
Windows NFS: slower, but acceptable
NetApp share: on par with Windows NFS
CIFS share: slower by a factor of 15 (unusable)
IOzone command for the full test (single thread):
iozone -a -b <drive_for_output>:\output.xls -f <drive_to_test>:\iozone.tmp -Q
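Since splunkd issues many concurrent reads against the pool, a multi-threaded IOzone run may model the workload better than the single-threaded auto mode above. A sketch using IOzone's throughput mode (thread count, file size, and record size are assumptions to tune for your environment; -F takes one temp file per thread):

```shell
# Throughput mode: 4 threads, 128 MB per thread, 64 KB records,
# temp files on the share under test (placeholder drive letter kept as-is)
iozone -t 4 -s 128m -r 64k -F <drive_to_test>:\io1.tmp <drive_to_test>:\io2.tmp <drive_to_test>:\io3.tmp <drive_to_test>:\io4.tmp
```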