Deployment Architecture

queue blocks on light forwarder

emiller42
Motivator

Ran into something today that I can't seem to find much information on. I've got a single light forwarder instance that has stopped forwarding data. Checking metrics.log shows that the following queues are blocked:

07-06-2012 09:24:23.412 -0500 INFO  Metrics - group=queue, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=2635, largest_size=2635, smallest_size=2635

07-06-2012 09:24:23.412 -0500 INFO  Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1283, largest_size=1283, smallest_size=1283

07-06-2012 09:24:23.412 -0500 INFO  Metrics - group=queue, name=typingqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1235, largest_size=1235, smallest_size=1235
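
For reference, I'm pulling these straight out of the forwarder's local metrics.log with a quick grep (the path assumes a default *nix install, so adjust $SPLUNK_HOME as needed):

grep "blocked=true" $SPLUNK_HOME/var/log/splunk/metrics.log | tail -5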

I have several other identical forwarders that are not seeing these issues, and the indexer is showing no signs of strain. There aren't any connectivity issues between the forwarder and the indexer, nor any recent config changes. All the info I've found here concerns queue blockages at the indexer, so I get the impression this isn't a common issue.

Any ideas on what to look for here? I'm at a loss as to why this particular instance is failing.

Thanks!

1 Solution

emiller42
Motivator

I solved this by uninstalling and reinstalling the light forwarder. This didn't really shed any light on what the issue was, but it is functional now.

dwaddle
SplunkTrust

I might start by checking the thruput settings for that forwarder. Light and Universal forwarders have a bandwidth throttle between them and the indexers. The relevant setting is in limits.conf.

http://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf
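
For example, something along these lines in $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder raises the cap (512 is just an illustrative number; 0 removes the limit entirely, and the forwarder needs a restart for the change to take effect):

[thruput]
maxKBps = 512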

emiller42
Motivator

Doesn't look like that's the issue. That setting appears to limit the rate of event processing on the indexer side of things. Even if it could be a cause, it defaults to no limit.
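
If anyone else wants to double-check their own instance, btool will show the effective value (the syntax below assumes a reasonably recent version where btool is available):

$SPLUNK_HOME/bin/splunk btool limits list thruput --debug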
