Getting Data In

Lost file monitor on Windows directory

ggfloresca
Explorer

We lost the file monitor on multiple paths on two specific Windows servers. We're monitoring very chatty, high-volume logs. It was working for a little while, but it has now stopped ingesting the logs and we are getting the following ERROR, WARN, and INFO entries in splunkd.log. We need some help troubleshooting this and isolating the bottleneck. Thanks.

splunkd log:

01-17-2021 12:54:31.379 -0800 ERROR TailReader - Was unable to open file: D:\####\####\####\#####.log.
01-17-2021 18:10:07.686 -0800 WARN TailReader - Insufficient permissions to read file='D:\####\####\####\####.log' (hint: The system cannot find the file specified.)
01-18-2021 04:38:42.109 -0800 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
01-18-2021 04:38:42.109 -0800 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
01-19-2021 23:29:55.771 -0800 INFO PeriodicHealthReporter - feature="TailReader-0" color=red indicator="data_out_rate" due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data." node_type=indicator node_path=splunkd.file_monitor_input.tailreader-0.data_out_rate
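
I've also been checking the tailing processor on the affected servers with the command below (this assumes the default Universal Forwarder install path on Windows, which may differ from ours); it lists the files the monitor input is tracking and any open/read errors:

REM List the status of all monitored files on this forwarder
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list inputstatus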


scelikok
SplunkTrust

Hi @ggfloresca,

According to the tags, it seems you are running a Heavy Forwarder as an intermediate forwarder.

The WARN TailReader - Could not send data to output queue (parsingQueue) log shows that the forwarder cannot send data to the indexers. The reason may be that the indexers cannot handle the traffic, which is why the forwarder's parsing queues are full. Or, if the indexers are in a remote location, the bandwidth may not be enough.
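
You can confirm which queues are blocked, and on which hosts, with a search like this from your monitoring console or search head (adjust the time range to cover the outage; this is just a starting point):

index=_internal source=*metrics.log* sourcetype=splunkd group=queue blocked=true
| timechart span=10m count by name

If the indexers' indexqueue is the one blocking, the bottleneck is on the indexing side; if only the forwarder's parsing/output queues are blocked, look at the network path or the indexers' receiving ports.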

The WARN TailReader - Insufficient permissions to read file='D:\####\####\####\####.log' (hint: The system cannot find the file specified.) log shows that the file does not exist. This seems related to the problem above: while the forwarder was still trying to send older logs to the indexers, the file could have been rotated or deleted by the logging application.

You should see why indexers canno

If this reply helps you, an upvote and "Accept as Solution" is appreciated.

ggfloresca
Explorer

Thank you both for replying.

@scelikok, I think a portion of your reply got truncated. Anyway, after investigating further I found a big burst of data, and I mean HUGE, on one of our servers. Now I'm trying to figure out how and where to get the queues cleared up; please advise. Our topology is UF --> IF --> idx.
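
For context, these are the kinds of settings I've been looking at on the forwarders while waiting for advice; the file/stanza names are real, but the values are only examples from the docs, not what we are running today:

# limits.conf on the UF: the default forwarder thruput cap is 256 KBps,
# which can back everything up during a burst (0 = unlimited)
[thruput]
maxKBps = 0

# server.conf on the intermediate (heavy) forwarder: example of enlarging the parsing queue
[queue=parsingQueue]
maxSize = 10MB

# outputs.conf on the intermediate forwarder: example of enlarging the output queue toward the indexers
[tcpout]
maxQueueSize = 10MB

I realize bigger queues only buy time during a burst; if the indexers or the network link can't keep up, the queues will just fill again.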


gcusello
SplunkTrust

Hi @ggfloresca,

did you check the read permissions on those files?

Since there's the message "Insufficient permissions to read" in the logs, it seems that the user running Splunk doesn't have the permissions to read those files.
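
You can verify it directly on the Windows server with something like the commands below; the directory is a placeholder for your real path, and the service name depends on whether you installed Splunk Enterprise or a Universal Forwarder:

REM Which account runs the Splunk service? (Splunkd = Splunk Enterprise/HF, SplunkForwarder = Universal Forwarder)
sc qc SplunkForwarder

REM Show the ACL on the monitored directory and check that the account above has read access
icacls "D:\path\to\logs"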

Ciao.

Giuseppe


ggfloresca
Explorer

Still figuring out how to clear the blocking in the queue. Here's what I found in splunkd.log so far. I'm now trying to figure out how to get the blocking cleared safely; any advice or recommendation is highly appreciated:

01-20-2021 19:00:56.435 +0000 INFO Metrics - group=pipeline, ingest_pipe=1, name=parsing, processor=sendout, cpu_seconds=1011.880, executes=3061051, cumulative_hits=2768837847
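
In parallel, I'm watching how full each queue gets on the intermediate forwarder with a search along these lines (the host filter is a placeholder for our IF hostname):

index=_internal host=<your_intermediate_forwarder> source=*metrics.log* sourcetype=splunkd group=queue
| eval fill_perc=round(current_size_kb / max_size_kb * 100, 2)
| timechart span=10m max(fill_perc) by name

My assumption is that whichever queue sits near 100% first is the real bottleneck, and the queues upstream of it just back up behind it.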
