Hi,
My problem is duplicated Windows Security logs: two or more identical copies of the same event are being indexed.
03/18/2019 10:53:50 AM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4799
EventType=0
Type=Information
ComputerName=TestClient.kvp
TaskCategory=Security Group Management
OpCode=Info
RecordNumber=21040
Keywords=Audit Success
Message=A security-enabled local group membership was enumerated.
Subject:
Security ID: NT AUTHORITY\SYSTEM
Account Name: TestClient$
Account Domain: KVP
Logon ID: 0x3E7
Group:
Security ID: BUILTIN\Administrators
Group Name: Administrators
Group Domain: Builtin
Process Information:
Process ID: 0x46c
Process Name: C:\Windows\System32\VSSVC.exe
03/18/2019 10:53:50 AM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4799
EventType=0
Type=Information
ComputerName=TestClient.kvp
TaskCategory=Security Group Management
OpCode=Info
RecordNumber=21040
Keywords=Audit Success
Message=A security-enabled local group membership was enumerated.
Subject:
Security ID: NT AUTHORITY\SYSTEM
Account Name: TestClient$
Account Domain: KVP
Logon ID: 0x3E7
Group:
Security ID: BUILTIN\Administrators
Group Name: Administrators
Group Domain: Builtin
Process Information:
Process ID: 0x46c
Process Name: C:\Windows\System32\VSSVC.exe
Forwarder's Operation:
- After the forwarder sends a data block, it maintains a copy of the data in its wait queue until it receives an acknowledgment.
- Meanwhile, it continues to send additional blocks.
- If the forwarder doesn't get acknowledgment for a block within **300 seconds** (by default), it closes the connection.
Possibility of Data Duplication:
- It is possible for the indexer to index the same data block twice.
- This can happen if there is a network problem that prevents an acknowledgment from reaching the forwarder.
Network Issues:
- When the network goes down, the forwarder never receives the acknowledgment.
- When the network comes back up, the forwarder then resends the data block, which the indexer parses and writes as if it were new data.
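As a quick check for this, duplicates of Windows events can be surfaced with a search along these lines (the index and sourcetype names here are assumptions; adjust them to your environment). Each Windows event carries a RecordNumber that should be unique per host, so seeing more than one copy suggests a resend:

```
index=wineventlog sourcetype="WinEventLog:Security"
| stats count values(splunk_server) AS indexers BY host RecordNumber
| where count > 1
```

If the duplicates show up with different values in `indexers`, that points at a resend to a second indexer rather than a duplicated source event.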
Duplicate Warning Example:
10-18-2010 17:32:36.941 WARN TcpOutputProc - Possible duplication of events with
channel=source::/home/jkerai/splunk/current-install/etc/apps/sample_app
/logs/maillog.1|host::MrT|sendmail|, streamId=5941229245963076846, offset=131072
subOffset=219 on host=10.1.42.2:9992
Note on useACK:
- When `useACK` is enabled in `outputs.conf` on the forwarders, and there is a network issue, indexer saturation (for example, pipeline blocks), or a replication problem, the indexers in your Splunk platform deployment cannot send acknowledgments back to the forwarders.
- Depending on your deployment environment, data duplication can occur.
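For reference, `useACK` is set per output group in `outputs.conf` on the forwarder. A minimal sketch (the group name and indexer addresses are placeholders):

```
[tcpout:primary_indexers]
server = 10.1.42.2:9997, 10.1.42.3:9997
useACK = true
```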
https://docs.splunk.com/Documentation/Forwarder/8.2.5/Forwarder/Protectagainstthelossofin-flightdata
useACK is possibly the cause then. (With reference to the comments above)
useACK ensures that you never 'lose' a message, but it can result in data duplication.
https://docs.splunk.com/Documentation/Forwarder/7.2.4/Forwarder/Protectagainstthelossofin-flightdata...
Is this forwarded with useACK = true set in the forwarder's outputs.conf?
No, this option is
useACK = false
Should I change it?
If I change this option, will I lose any logs?
If the option is true, it waits for 7 MB of logs; I think that is very long?
No.
Have you checked on the source windows server to see if the actual event is duplicated in event viewer?
Can I find out what caused them by looking at the logs?
Yes, I checked the Windows Event Viewer, and the event appears there only once.
try this:
[your search which finds duplicate events]|eval it=strftime(_indextime, "%Y-%m-%d %H:%M:%S.%3N")|table _time it host RecordNumber splunk_server Message
Look at the two rows for your duplicated events - is the index time the same, are they from the same splunk_server?
Sorry, my earlier answer was wrong.
I checked today and useACK = true,
and this is the query result:
"_time",it,host,RecordNumber,"splunk_server"
"2019-03-19T07:56:29.000+0300","2019-03-19 07:56:30.0000000003","TestClient",3778048,a4idx07p SameMessage
"2019-03-19T07:56:29.000+0300","2019-03-19 07:56:55.0000000003","TestClient",3778048,a4idx06p SameMessage
This suggests that useACK is the problem: you can see both events were generated at exactly the same time, but they were indexed 25 seconds apart, by different indexers.
What likely happened is that the first message was received, but the indexer either did not acknowledge the event in time (or the ack got lost on the network), so the forwarder resent it, resulting in the duplication.
This is by design: useACK means no messages should ever get 'lost', but it can result in duplication; that is the tradeoff.
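If the occasional index-time duplicate is acceptable, a common workaround is to drop duplicates at search time instead, for example by deduplicating on fields that together identify a Windows event (the index and sourcetype names here are assumptions):

```
index=wineventlog sourcetype="WinEventLog:Security"
| dedup host RecordNumber
| table _time host EventCode RecordNumber Message
```

This keeps the first indexed copy of each event and discards any resent duplicates from the results.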