
splunkweb reporting splunkd timeout

bmacias84
Champion

Hello,

I am currently running into problems with my search heads. Users are experiencing intermittent splunkd timeouts, which Splunkweb reports during searches, logins, and so on. When I start Splunk from the CLI I receive the message “timed out waiting for splunkd”, yet splunkweb starts up immediately. The search head configuration has not changed in months, and splunkd.log reports no errors or warnings at the times splunkweb displays these messages.
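For what it's worth, the timeout splunkweb applies while waiting on splunkd is configurable in web.conf on the search head; a minimal illustration (30 seconds is the usual default, not a recommendation):

    [settings]
    splunkdConnectionTimeout = 30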

Configurations:

  • Search Head pooling
  • Splunk 5.0.5
  • Windows Server Standard 2008R2
  • Virtualized

Suggestions?

1 Solution

bmacias84
Champion

During my investigation I noticed that Splunk was taking an exorbitant amount of time to read its config files. This came to light when I started running Process Monitor (procmon). My initial thought was an ACL or network issue, but after watching read/write activity in procmon it appeared I had intermittent disk transfer (IOPS) issues with my CIFS share.
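As a rough way to quantify what procmon was showing, a short script that times reads of the pooled .conf files against a local copy makes the gap obvious. This is only a sketch; the two paths are placeholders, not my actual pool location:

    import os
    import time

    # Placeholder paths -- point these at the pooled config share and a local Splunk etc.
    POOL_PATH = r"\\fileserver\splunk_pool\etc"
    LOCAL_PATH = r"C:\Program Files\Splunk\etc"

    def time_conf_reads(root):
        """Read every .conf file under root; return (file_count, elapsed_seconds)."""
        count = 0
        start = time.perf_counter()
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.endswith(".conf"):
                    with open(os.path.join(dirpath, name), "rb") as fh:
                        fh.read()
                    count += 1
        return count, time.perf_counter() - start

    for label, path in (("pool share", POOL_PATH), ("local disk", LOCAL_PATH)):
        files_read, seconds = time_conf_reads(path)
        print(f"{label}: {files_read} .conf files read in {seconds:.2f}s")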

To confirm this I disabled search head pooling; once it was disabled, Splunkweb functioned as expected. My next step was to create an NFS share on a remote system and copy all of the pool configurations to it. Splunkweb worked like a charm. I then switched back and forth between local disk, the Windows NFS share, a NetApp share, and the CIFS share. Not very scientific, but it worked. To get my infrastructure team to even look at the problem I needed concrete numbers, so I ran IOzone tests against all four setups.
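If you want to repeat that test, pooling can be toggled per search head from the CLI with Splunk stopped (the share path below is just a placeholder):

  • splunk pooling disable
  • splunk pooling enable \\<fileserver>\<pool_share>
  • splunk pooling display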

Results (full spreadsheet too large to post):

  1. Local disk: fast (as expected)
  2. Windows NFS share: slower, but acceptable
  3. NetApp share: on par with the Windows NFS share
  4. CIFS share: slower by a factor of 15 (unusable)

IOzone command for the full test (one thread):

  • iozone -a -b <drive_for_output>:\output.xls -f <drive_to_test>:\iozone.tmp -Q
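(In this command, -a runs the full automatic test suite, -b writes the results to an Excel-compatible spreadsheet, -f points at the temporary test file, and -Q adds per-offset latency output.)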

IOzone command for the quick test (four threads):

  • iozone -i 0 -t 4 -F <drive_to_test>:\iozone.tmp <drive_to_test>:\1.tmp <drive_to_test>:\2.tmp <drive_to_test>:\3.tmp -Q
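(Here -i 0 limits the run to the write/rewrite test, -t 4 runs the throughput test with four threads, and -F supplies one temporary file per thread.)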

Another helpful doc was the Splunk documentation on search head pooling configuration issues. Thanks to everyone who looked at this.


