Getting Data In

How do we configure continuous collection and indexing of IIS logs from a UNC path?

ehudb
Contributor

I have an issue with IIS logs being monitored by a Windows heavy forwarder over a UNC path. When the forwarder service starts, the IIS logs begin to be collected and are indexed correctly. After a while, collection stops, and it only resumes a few hours later when a new log file is created.

We suspected the cause was that the IIS log files do not change their modtime, so we tried the "alwaysOpenFile" property in inputs.conf, but it made things worse: the logs weren't indexed even after a restart of the service.

The parsing queue on the forwarder is normally low. With "alwaysOpenFile" enabled, it goes very high, above 90%. All the other queues on the indexer (there is only one indexer) look fine.
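For reference, the input is configured roughly like this on the heavy forwarder (the server name, share, index, and sourcetype below are illustrative placeholders, not the actual configuration):

```ini
# inputs.conf on the Windows heavy forwarder (hypothetical paths)
[monitor://\\iis-server\logs$\W3SVC1]
disabled = false
index = iis
sourcetype = iis

# Tried as a workaround for the unchanged modtime, but it drove the
# parsing queue above 90%:
# alwaysOpenFile = 1
```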

Any ideas?

Thanks, Ehud.

0 Karma
1 Solution

ehudb
Contributor

Thanks. It was eventually caused by large zip files that shouldn't be indexed anyway, which filled up the queue. After removing these files, the logs were indexed correctly and on time.

View solution in original post

0 Karma


jkat54
SplunkTrust

Hey, how did you find the problem? Others might face this issue, and they'll want to know how you diagnosed it.

Thanks!

0 Karma

ehudb
Contributor

I found it by seeing in the _internal logs that there were unindexed zip files being ignored.
I fixed the issue by adding a blacklist to the input to exclude the zip files.
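For anyone hitting the same thing, the fix looks roughly like this in inputs.conf (the monitor path is illustrative; `blacklist` takes a regex matched against file paths):

```ini
# inputs.conf on the heavy forwarder (hypothetical path)
[monitor://\\iis-server\logs$\W3SVC1]
index = iis
sourcetype = iis
# Exclude archived zip files from the monitor input
blacklist = \.zip$
```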

0 Karma

jkat54
SplunkTrust

Also, what was the solution? Blacklisting options on the input?

0 Karma

bmacias84
Champion

Using a file share mount with a file monitor is inherently problematic. Problems will arise from connectivity issues, a slow or unresponsive file share, or a busy host server.

jkat54
SplunkTrust

I suspect you can find an error message by searching index=_internal source=splunkd. It could be a socket timeout, some limits.conf setting, etc.
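For example, a search along these lines surfaces warnings and errors from the file-monitoring components (component names can vary between Splunk versions, so treat this as a starting point):

```
index=_internal sourcetype=splunkd
    (component=TailingProcessor OR component=TailReader OR component=WatchedFile)
    (log_level=WARN OR log_level=ERROR)
```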
