There was an issue with our Splunk forwarders and it appears our application sent duplicate logs.
I am seeing a sudden spike in log count around a certain time.
Is there a way to confirm that there were, in fact, duplicate logs?
What can I add to my search to find that?
index="docker_index"
<== This is the search that I am using.
Hi balash1979, you can run the query below to check your data. Look for offset messages from the WatchedFile component:
index=_internal sourcetype=splunkd component=WatchedFile
- "Checksum for seekptr didn't match, will re-read entire file"
- "File too small to check seekcrc"
- "Will begin reading at offset=0" means a file is new (or has rolled)
- seeing the offset=0 message repeatedly for the same file under other conditions is a bad sign: the forwarder re-read the file from the start, which produces duplicates (see the sketch below)
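For example, to see when those offset resets happened and which forwarders they came from, something like this works (a minimal sketch; narrow the time range to your spike window, and span=5m is just an example bucket size I picked):
index=_internal sourcetype=splunkd component=WatchedFile "will begin reading at offset=0"
| timechart span=5m count BY host ``` span=5m is an arbitrary example value ```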
index"docker_index" sourcetype=xyz | convert ctime(_indextime) AS idxtime
| stats count dc(idxtime) as numIndexed, values(source), values(idxtime) by _raw
| where count > 1
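If you would rather get a single summary number than the per-event breakdown, a variant like this rolls the duplicates up (a sketch on the same data; duplicatedEvents and totalCopies are just alias names I picked):
index="docker_index" sourcetype=xyz
| stats count BY _raw
| where count > 1
| stats count AS duplicatedEvents, sum(count) AS totalCopies ``` alias names are arbitrary ```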
Great, this works. Is there a way I can use timechart to show, between the two times I use for the search, when the count > 1 result was higher or lower?
try this: timechart won't take an inline where clause, so flag the duplicates first with eventstats and then chart them over time:
index="docker_index" sourcetype=xyz | eventstats count AS dupCount BY _raw | where dupCount > 1 | timechart count AS duplicateEvents
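If the buckets timechart picks by default are too coarse for the window between your two times, you can set the span explicitly (again, span=5m is just an example value):
index="docker_index" sourcetype=xyz | eventstats count AS dupCount BY _raw | where dupCount > 1 | timechart span=5m count AS duplicateEvents ``` span=5m is an arbitrary example value ```
Peaks in duplicateEvents show when the duplication was heaviest.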