Getting Data In

Events being indexed multiple times to different servers

chodgens
Engager

I'm looking for possible reasons a single event would be indexed numerous times on our main indexers from our heavy forwarders. We have ignoreOlderThan set to 1d, but from the looks of it the file watcher is indexing events multiple times within a 10-20 minute window, roughly an hour after the event occurs, both before and after a reboot. I've verified that the source file contains only one copy of each event, yet our forwarders seem to be pushing multiple copies to the indexers, and they show up as duplicates in our counts. The copies also land on different indexers, each with a different index time.

Example of issue:
_time value = 2019-12-08 11:31:17.116
index timestamps / servers indexed:
indexer 3  - 12/08/2019 04:27:51, 12/08/2019 05:36:19
indexer 8  - 12/08/2019 06:39:04, 12/09/2019 05:40:10, 12/09/2019 06:47:59
indexer 9  - 12/08/2019 07:45:06, 12/09/2019 07:34:19
indexer 10 - 12/08/2019 08:24:10, 12/09/2019 03:55:23
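
(For reference, a search roughly like the following will surface this kind of duplicate and show which indexer each copy landed on. It's only a sketch; the index and sourcetype values are the same placeholders used in the config segments below.)

index=indexers sourcetype=sourcetype
| eval index_time=strftime(_indextime, "%m/%d/%Y %H:%M:%S")
| stats count values(index_time) AS index_times values(splunk_server) AS indexers BY _raw
| where count > 1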

inputs.conf segment -
[monitor:///var/log/]

index = indexers
sourcetype = sourcetype
ignoreOlderThan = 1d

outputs.conf segment -
[tcpout]
# Turn off indexing on the local machine; we want the items indexed at the main indexers.
indexAndForward = false

# Define which group of indexers we are sending to. We currently only have one.
defaultGroup = primary_indexers
maxQueueSize = 250MB

[tcpout:primary_indexers]
server = servers
autoLB = true
forceTimebasedAutoLB = true

Richfez
SplunkTrust

Are those indexed times the times of that particular event being indexed? Why the different times? (Or are they actually just "reindexed examples" and not examples of that particular event being reindexed?)

Also, can you confirm that the file absolutely doesn't change in the first few hundred characters? My first real guess is that something rewrites a header line in that file, and thus Splunk thinks it's a new file.
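
A quick way to sanity-check that theory: if the whole file were being re-read, you'd see a burst of roughly the whole file's worth of events at every re-index. A search along these lines, bucketed by index time instead of event time, should make those bursts visible (the source path is just a placeholder for the file in question):

index=indexers sourcetype=sourcetype source="/var/log/<file_with_duplicates>"
| eval _time=_indextime
| timechart span=10m count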

Is it just a typo that inputs.conf says index=indexers, but the table of values has index=indexer?

Lastly, these appear to be heavy forwarders - why not UF? (And if I'm wrong there, no worries).

Your theory is probably right, but why? That's the question. I've seen this happen on high-load boxes, too, when there are too many files to monitor properly in one stanza. Have you thought about breaking the one big stanza up into a lot of smaller ones, along the lines of the sketch below? How many files are being tailed at any one time?
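
Just to illustrate the shape of it, something like this instead of one catch-all monitor (the paths here are made up; substitute the directories or files you actually care about):

[monitor:///var/log/app1]
index = indexers
sourcetype = sourcetype
ignoreOlderThan = 1d

[monitor:///var/log/app2]
index = indexers
sourcetype = sourcetype
ignoreOlderThan = 1d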

-Rich

chodgens
Engager

ADDITIONAL INFORMATION:

We're seeing a time-frame correlation between the "WatchedFile - Will begin reading at offset=..." log messages and the windows in which the duplicate indexing occurs.

My running theory is that the WatchedFile component is starting from an incorrect file offset, which re-indexes every event between the time that offset corresponds to and the time the WatchedFile component actually starts reading. The ignoreOlderThan setting in inputs.conf may also be playing a part: if the tracking buckets are cleared on shutdown, the saved offset wouldn't matter and the entire file would be re-indexed daily.
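
(For reference, the WatchedFile messages mentioned above can be pulled with a search roughly like this, assuming the default field extractions on the splunkd sourcetype in _internal; it's a sketch, not a polished search.)

index=_internal sourcetype=splunkd component=WatchedFile "Will begin reading at offset"
| table _time host _raw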
