OS: Windows Server 2008 R2 Enterprise
Splunk Universal Forwarder version: 6.2.6 (build 274160)
Hi,
Good day. I would like to ask for assistance in resolving an issue. Here's the case:
I have 5 universal forwarders and an app configured in a server class, with this stanza in my inputs.conf:
[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:\s+(?!groupPolicyContainer)"
index = dhcp_winevt
renderXml = false
###### DHCP ######
[monitor://C:\Windows\System32\dhcp\DhcpSrv*]
disabled = 0
sourcetype = dhcp_server_logs
index = dhcp_index
## connection_host = none
Indexing of the logs was fine for the first and second months, but eventually 2 of the 5 universal forwarders stopped forwarding the DHCP logs defined in the inputs.conf stanza above, while still forwarding the Security logs. We then checked on the server side, and the DHCP log files are still being written to actively. What could be the problem here? Thanks in advance.
Check the _internal splunkd logs for TailingProcessor errors:
index=_internal sourcetype=splunkd component=TailingProcessor
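If that search returns a lot of events, a narrower variant such as the following may help; the `log_level` filter and the `DhcpSrv` filename term are assumptions based on your monitor stanza:

index=_internal sourcetype=splunkd component=TailingProcessor log_level=ERROR "DhcpSrv"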
I see problems like this most often when log files have a header on them. Splunk identifies files with a CRC of the first 256 bytes of the file. If that checksum is the same for every new roll of the file, Splunk decides "hey, I've already seen this file" and will permanently skip it. You can set initCrcLength in the input stanza to a larger value to ensure the checksum covers unique data.
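The effect can be illustrated with a short Python sketch. This is not Splunk's actual implementation (and the 300-byte header size is a made-up example), but the collision mechanics are the same: two rolled log files that share an identical header look like the same file over a 256-byte prefix, while a longer prefix tells them apart.

```python
import zlib

def file_id(data: bytes, crc_length: int = 256) -> int:
    """Checksum of the first crc_length bytes -- a stand-in for
    how a monitor input fingerprints a file by its leading bytes."""
    return zlib.crc32(data[:crc_length])

# Two "rolled" DHCP logs with the same 300-byte header but different bodies.
header = b"#Microsoft DHCP Service Activity Log\r\n".ljust(300, b"-")
log_a = header + b"10,01/05/15,00:01:02,Assign,10.0.0.5\r\n"
log_b = header + b"10,02/05/15,09:30:00,Renew,10.0.0.9\r\n"

# With the default 256-byte prefix, both files get the same fingerprint,
# so the second one would be skipped as "already seen".
assert file_id(log_a) == file_id(log_b)

# With a longer prefix (e.g. initCrcLength = 1024) the fingerprints differ.
assert file_id(log_a, 1024) != file_id(log_b, 1024)
```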
[monitor://C:\Windows\System32\dhcp\DhcpSrv*]
disabled = 0
sourcetype = dhcp_server_logs
index = dhcp_index
initCrcLength = 1024
NOTE! If you change this value, all files that match the stanza will get a new ID and Splunk will re-index them.
Is the forwarder connected to the indexer via outputs.conf?
If yes, check the internal logs for this forwarder at the "error" log level.
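For example, something along these lines (the host value is a placeholder for the affected forwarder):

index=_internal host=&lt;forwarder_host&gt; log_level=ERROR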
Is the folder changed?
The folder has not changed. I've checked the internal logs and found cooked-connection and raw-connection errors towards our heavy forwarders, even though the connection is allowed through the firewall. What could be the problem here?
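Cooked/raw connection errors usually point at the network path to the receiving port. As a quick sanity check from the forwarder host, a minimal TCP probe like the sketch below can confirm whether the heavy forwarder's receiving port is reachable at all (9997 is the conventional default receiving port; your actual host name and port are assumptions to fill in):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a plain TCP connection; True means the port accepted it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Calling `port_reachable("hf01.example.local", 9997)` (hypothetical heavy forwarder address) from the affected forwarder tells you whether the failure is network-level or something inside Splunk's data pipeline.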
Are there any errors in the client universal forwarder logs?
We had this issue when the client's administrator blocked access (i.e., permission issues).
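If you suspect the same, the forwarder's own splunkd.log usually records it; a search along these lines (the exact message text may vary by version, so treat it as a starting point) can surface permission failures:

index=_internal sourcetype=splunkd log_level=ERROR "Insufficient permissions"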