We recently completed a SOW with Splunk Professional Services. As part of the SOW we cleaned up apps, scripts, etc., and brought Splunk up to date. Previously we had scripts that filtered the IIS log data down to just what we wanted. We collect IIS logs specifically from our CAS servers so we know which devices users are getting locked out from.
To replace the scripts, we installed the IIS app from Splunkbase. Everything is great except for one specific CAS server. The logs at this path: C:\inetpub\logs\LogFiles\W3SVC1 (we are an all-Windows environment here) are significantly larger than on the other CAS servers. For example, one server generates a log file of about 47,000 KB every 24 hours, while the problem server is generating 150,000+ KB. The problem server was also forwarding at 60+ KBps, which is far higher than any other server reporting to Splunk. At this point, the Splunk service has throttled the queue because the backlog has grown so large.
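For what it's worth, this is roughly the search I've been using on the indexer to confirm which queues are blocking (a sketch against the standard metrics.log events; the queue names you see will depend on your pipeline):

    index=_internal source=*metrics.log* group=queue blocked=true
    | stats count by host, name

It shows a count of blocked-queue events per host and per queue name (parsingqueue, indexqueue, etc.), which is how I can tell the queue is getting throttled.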
One thing I did notice is that a single user is generating fully a third (33%) of all events reported on the problematic CAS server. The events are simply allowed and blocked EWS syncs and ActiveSync connections. He is running the latest macOS 10.13.4 and iOS 11.3.1, and no other user is on those OS versions.
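I spotted the heavy user with a search along these lines (a sketch: index=iis and the PROBLEM_CAS host value are placeholders for our setup, and cs_username assumes the standard W3C cs-username field extraction, which may differ depending on how the IIS app parses your logs):

    index=iis host=PROBLEM_CAS
    | stats count by cs_username
    | eventstats sum(count) as total
    | eval pct=round((count/total)*100, 1)
    | sort - count

That's where the 33% figure comes from: the pct column for his username.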
We do not have any other monitoring going on besides that log path above. No other servers are generating log files this large, but we do have two other CAS servers sending at a heavier index rate to the indexer (those servers are NOT generating large log files, but their bandwidth is around 25 to 30 KBps while the others hover around 10 to 15 KBps).
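For the bandwidth numbers, I've been comparing forwarder throughput per host on the indexer with something like this (again a sketch; I believe group=tcpin_connections and the hostname/tcp_KBps fields are standard in metrics.log, but double-check the field names on your version):

    index=_internal source=*metrics.log* group=tcpin_connections
    | timechart span=1h avg(tcp_KBps) by hostname

That's how I'm seeing the 60+ KBps from the problem server versus 10 to 15 KBps from the quiet ones.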
Has anyone run into a similar issue, or have an idea of what may be going on? Has anyone else been noticing larger log files since the latest Apple OS updates, and could those updates be causing license usage problems in Splunk?
Any help will be appreciated since the tech and I are kind of stumped on this one. Thank you!