Getting Data In

Duplicate records

jaterlwj
Explorer

I have been testing file monitoring with the option "Continuously index data from a file or directory this Splunk instance can access" on a file with, say, 24 rows.
I noticed that when I add a new row and refresh, there are now 49 rows: the new row plus the older 24 records duplicated. Is there any option to stop duplicate rows?

Here are some specifics.
File format: .log
Specify the source: "Continuously index data from a file or directory this Splunk instance can access."

Set host: constant value
Set source type: manual
Destination index: default
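
For reference, a monitor input set up this way would correspond to an inputs.conf stanza roughly like the sketch below (the file path, host value, and sourcetype name are placeholders, not my actual settings):

    # Minimal sketch of the equivalent inputs.conf monitor stanza;
    # path, host, and sourcetype are placeholder values.
    [monitor:///var/log/myapp/sample.log]
    host = myhost01
    sourcetype = my_custom_log
    # no explicit "index" setting, so events go to the default index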


jaterlwj
Explorer

Any help would be good!


jaterlwj
Explorer

Hi, thanks for your reply.

Pardon me for my ignorance, but what should I look for under the _internal index? There are roughly 1.7 million events in there. 😮


ak
Path Finder

Check the _internal index. It appears the whole file is being reread, hence the 24 + 25 = 49 rows.
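
For example, a search along these lines (substitute your own file name) should show what the file-tailing processor is doing with that file:

    index=_internal sourcetype=splunkd component=WatchedFile "sample.log"

Look for messages indicating that the file is being read again from the beginning each time it changes.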


ak
Path Finder

Is your log file terminated with an end-of-file message, something like [END OF LOG FILE]?

If so, this will confuse Splunk. Splunk uses the last 256 bytes for its CRC check. If a termination message is constantly appended to your file, the CRC check will fail, and when that happens Splunk rereads the whole file, duplicating records.

See: Splunk log rotation
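
The CRC behaviour can also be tuned per monitor stanza in inputs.conf; here is a rough sketch of the relevant settings (the path is a placeholder, and whether changing them helps depends on how your file is rewritten):

    # Hypothetical stanza showing the CRC-related settings; the path is a placeholder.
    [monitor:///var/log/myapp/sample.log]
    # how many bytes of the file are hashed to decide whether it has been seen before (default 256)
    initCrcLength = 256
    # mix the full file path into the CRC so files with identical beginnings
    # are not mistaken for files that were already indexed
    crcSalt = <SOURCE>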


jaterlwj
Explorer

Hi, thank you for your reply! But the log file that I used does not contain any end-of-file message.
