WARN TailReader - Could not send data to output queue (parsingQueue), retrying...

meghasinghal
Engager

We have filtered out a large amount of firewall logs on the heavy forwarder, and since then we have been receiving the warning "WARN TailReader - Could not send data to output queue (parsingQueue), retrying...". Could anyone please advise how we can whitelist the data instead of blacklisting it, so that this issue can be resolved?


harsmarvania57
Ultra Champion

Hi,

You need to provide more information: what have you blacklisted, and how? What resources do you have on the Heavy Forwarder and Indexer? How much data is the firewall generating?


meghasinghal
Engager

We have blacklisted some DNS logs that were consuming around 30 GB of license, and actions such as timeout, accept, and close that were consuming 70 GB of license. Firewall logs previously consumed 130 GB of license, which has now been reduced to 30 GB. We are only taking two actions into consideration.

[transforms]
REGEX = 
DEST_KEY = queue
FORMAT = nullQueue

We have filtered out the data by sending it to the null queue. The HF has an 8-core CPU and 16 GB RAM.
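
If the goal is to whitelist rather than blacklist, one common pattern (a sketch only; the stanza names, sourcetype, and keep regex below are placeholders, not taken from your configuration) is to route everything for the firewall sourcetype to nullQueue by default and then send only the events you want to keep back to indexQueue. When several transforms write to the same DEST_KEY, the last matching one wins.

props.conf on the HF:

[your_firewall_sourcetype]
TRANSFORMS-routing = fw_drop_all, fw_keep_wanted

transforms.conf:

# Drop every event for this sourcetype by default
[fw_drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# Route only the events you want to keep back to the index queue
# (illustrative pattern only - replace with whatever identifies the events to keep)
[fw_keep_wanted]
REGEX = action="?(?:deny|block)"?
DEST_KEY = queue
FORMAT = indexQueue

With this layout the whitelist regex can stay short, which also reduces the per-event regex cost on the parsing queue.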


harsmarvania57
Ultra Champion

Can you please search for blocked=true in $SPLUNK_HOME/var/log/splunk/metrics.log on the HF? Also, can you please provide the REGEX you are using (mask any sensitive data)?
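
For reference, one way to check (a sketch, assuming the HF forwards its _internal logs so metrics.log is searchable; otherwise run the search locally on the HF):

index=_internal source=*metrics.log* group=queue blocked=true
| stats count BY name

A non-zero count for a queue name means that queue has been blocking.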

 


meghasinghal
Engager

I checked for blocked=true and found the following:

Metrics - group=queue, name=udp_queue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=554, largest_size=588, smallest_size=0
Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=25600, current_size_kb=25599, current_size=28678, largest_size=28678, smallest_size=23788
Metrics - group=queue, name=aggqueue, blocked=true, max_size_kb=25600, current_size_kb=25599, current_size=32915, largest_size=35086, smallest_size=28916
Metrics - group=queue, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=710, largest_size=751, smallest_size=0

 

Regex=(?m).*(server).*device(p|q|r|s|t|u|v|w|x|.......).*vdom.*(a|b|c|d|e|f|g|h|.......).*type.*(traffic).*action.*(accept|client-rst|close|dns|ip-conn|server-rst|timeout).*


harsmarvania57
Ultra Champion

So it looks like the parsingqueue and aggqueue blocking is creating back pressure on the udp and splunktcpin queues. I would suggest configuring the LINE_BREAKER, SHOULD_LINEMERGE, TIME_FORMAT, TIME_PREFIX, and MAX_TIMESTAMP_LOOKAHEAD parameters for all sources that are ingesting a massive amount of data on the HF (see the sketch below).
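
A minimal props.conf sketch on the HF, assuming single-line firewall events with the timestamp at the start of the event (the sourcetype name and TIME_FORMAT are placeholders and must match your actual log format):

[your_firewall_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25

Explicit line-breaking and timestamp settings stop Splunk from running line merging and timestamp auto-detection on every event, which is often a major contributor to blocked parsing and aggregation queues.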

In addition, I would suggest rewriting the REGEX so it matches in as few steps as possible; the regex engine will then evaluate it much faster (see below).
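
For example (purely illustrative; the field names and quoting below are assumptions about a FortiGate-style log format, not your exact events), anchoring on literal field names and dropping the leading and trailing .* usually reduces backtracking considerably:

REGEX = type="?traffic"?.*?action="?(?:accept|client-rst|close|dns|ip-conn|server-rst|timeout)"?
DEST_KEY = queue
FORMAT = nullQueue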
