We have filtered out a large amount of firewall logs on the heavy forwarder, and we are now receiving the warning "WARN TailReader - Could not send data to output queue (parsingQueue), retrying...". Could anyone please help with whitelisting the data instead of blacklisting it, so that this issue can be resolved?
You need to provide more information: what have you blacklisted, and how? What resources do you have on the heavy forwarder and the indexer? How much data is the firewall generating?
We have blacklisted some DNS logs that were consuming around 30 GB of license, and events with actions such as timeout, accept, and close that were consuming 70 GB. Firewall logs previously consumed 130 GB of license, which is now reduced to 30 GB. We are only taking two actions into consideration.
DEST_KEY = queue
FORMAT = nullQueue
We have filtered out the data by sending it to the null queue. The heavy forwarder has 8 CPU cores and 16 GB of RAM.
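To whitelist instead of blacklist, the usual Splunk pattern is to route everything for the sourcetype to nullQueue first, then route only the events you want to keep back to indexQueue (the last matching transform wins). A sketch, assuming a hypothetical sourcetype name [firewall] and action values; adjust the stanza name and REGEX to your actual data:

```
# props.conf -- transforms run in the order listed
[firewall]
TRANSFORMS-routing = fw_drop_all, fw_keep_actions

# transforms.conf
# First rule discards every event for this sourcetype...
[fw_drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then events matching the whitelist are routed back to the index queue.
# The action values here are assumptions; match them to your real log format.
[fw_keep_actions]
REGEX = action=(accept|timeout)
DEST_KEY = queue
FORMAT = indexQueue
```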
I checked for blocked=true
Metrics - group=queue, name=udp_queue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=554, largest_size=588, smallest_size=0
Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=25600, current_size_kb=25599, current_size=28678, largest_size=28678, smallest_size=23788
Metrics - group=queue, name=aggqueue, blocked=true, max_size_kb=25600, current_size_kb=25599, current_size=32915, largest_size=35086, smallest_size=28916
Metrics - group=queue, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=710, largest_size=751, smallest_size=0
So it looks like the blocked parsingqueue and aggqueue are creating back pressure on the udp_queue and splunktcpin queues. I would suggest configuring the LINE_BREAKER, SHOULD_LINEMERGE, TIME_FORMAT, TIME_PREFIX, and MAX_TIMESTAMP_LOOKAHEAD parameters for all sources that ingest a large amount of data on the HF.
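For example, a props.conf stanza along these lines avoids expensive line merging and timestamp scanning; the sourcetype name, timestamp prefix, and format below are illustrative only and must be adapted to your actual events:

```
# props.conf on the HF -- values for a hypothetical syslog-style firewall feed
[firewall]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 15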
In addition, I'd suggest writing each REGEX so it matches in as few steps as possible; the regex engine will then evaluate it faster.
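As an illustration of "fewer steps" (shown here with Python's re module, though the same principle applies to Splunk's PCRE engine; the log line and field name are made up):

```python
import re

# Hypothetical firewall event for illustration only.
line = "ts=2023-01-01T00:00:00 src=10.0.0.1 dst=10.0.0.2 action=accept bytes=1024"

# Wrapping the pattern in .* adds needless backtracking work on every event.
slow = re.compile(r".*action=(accept|timeout|close).*")

# The bare pattern captures the same field in far fewer steps,
# and fails fast on events that have no action field at all.
fast = re.compile(r"action=(accept|timeout|close)")

print(fast.search(line).group(1))  # accept
```

Both patterns capture the same value; the difference shows up as CPU time on the HF when millions of events per day go through the transform.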