I just found some interesting entries in the splunkd.log and metrics.log of several UFs, for example:
metrics.log
03-11-2020 09:22:28.044 +0000 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=1336, largest_size=1336, smallest_size=0
splunkd.log
03-11-2020 09:22:32.247 +0000 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
03-11-2020 09:30:30.390 +0000 WARN TailReader - Could not send data to output queue (structuredParsingQueue), retrying...
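To get a sense of how often this happens across the forwarders, I check the internal logs with roughly this search (just a sketch; it assumes the UFs forward their _internal logs to the indexers, and the time span is arbitrary):
index=_internal source=*metrics.log* group=queue name=parsingqueue blocked=true
| timechart span=30m count by host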
These events are recorded quite often on several UFs, but I cannot say that they correlate well with the previously described Exchange Server Message Tracking log lines not being indexed. After some reading I increased the parsingQueue size to 10 MB on my UFs, which significantly reduced the parsingQueue fill ratio:
server.conf
[queue=parsingQueue]
maxSize = 10MB
I did not make any changes to the structuredParsingQueue maxSize yet. It seems I should change one setting at a time and wait 🙂 I am watching the incoming Message Tracking events, and there are no problems for now.
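If structuredParsingQueue turns out to need the same treatment, I assume the change would look just like the one above, again in server.conf on the UFs (a sketch only, not applied yet):
server.conf
[queue=structuredParsingQueue]
maxSize = 10MB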
Here is what bothers me: if blocked queues on the UFs really are the initial problem, is it possible that some of the source events do not go through the parsingQueue and are passed to the Heavy Forwarder server without INDEXED_EXTRACTIONS being applied (raw, not cooked)? Since I do not have an INDEXED_EXTRACTIONS configuration for the my_exchange_logs_message_tracking sourcetype anywhere except on the UFs, in that case these events would definitely reach the Indexer servers without fields extracted. If so, then with growing mail flow we could face the problem again. Should I maybe place my INDEXED_EXTRACTIONS configuration on the Heavy Forwarder and Indexer servers as well?
Or, if the queue is blocked, do the original events wait in a queue (which queue?) and simply get delayed on the UF, but still pass through the parsingQueue anyway?
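For reference, if I do copy the configuration over, it would essentially be the stanza that already sits in props.conf on my UFs; a rough sketch (INDEXED_EXTRACTIONS = csv and the delimiter here are only placeholders for whatever the real UF settings are):
props.conf
[my_exchange_logs_message_tracking]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,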