Getting Data In

Ignore duplicate events while indexing for circular log

sanjibdhar
Engager

Hi, I have installed a Splunk server on one system and a universal forwarder on another. With the universal forwarder I am monitoring a circular log (when the log reaches a certain size, entries at the bottom of the file are deleted and new entries are written at the top).

The problem I am facing is that whenever a new entry appears at the top of the file, Splunk parses the whole log file again and creates duplicate events in the index.

Is there any way to tell Splunk to ignore an event that is already present in the index, or to overwrite an event that is already there, so that only the new entries get stored?

I have been working on this for the last two days with no solution so far; any help is appreciated.
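(For reference, the duplicates can at least be suppressed at search time, though this does not stop them from being indexed. A sketch, where the index and source values are placeholders for my setup:)

```
index=main source="/var/log/myapp/circular.log"
| dedup _raw
```

`dedup _raw` drops events whose raw text is identical to one already returned, so repeated copies of the same log line collapse to a single result.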

Thanks,

SD


gfuente
Motivator

Hello

I think the only solution is to modify the way the log is written. The UF keeps a hash (a CRC of the first bytes) of the file's header to mark that file as "known", so if the header of the file is modified by any means, the UF will re-index the whole file from the beginning.
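The CRC behavior can be tuned in inputs.conf on the forwarder; a sketch, with a placeholder path (note that this only helps when the first bytes of the file are stable, which is exactly what a top-growing circular log breaks):

```ini
# inputs.conf on the universal forwarder (monitored path is a placeholder)
[monitor:///var/log/myapp/circular.log]
# Number of bytes used to compute the file's CRC fingerprint (default 256).
# A larger value reduces false matches between similar files, but does not
# help if the head of the file itself keeps changing.
initCrcLength = 1024
# crcSalt = <SOURCE> mixes the full path into the CRC; it makes re-indexing
# MORE likely after the file changes, so it is the wrong tool for this case.
```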

Regards


gfuente
Motivator

Hello, as far as I know it works that way.
So, if you are an Enterprise customer, you should file a P4 case with your feature request.


sanjibdhar
Engager

We cannot change the way the log is written. I am surprised that Splunk is unable to provide a solution for this. If Splunk does not handle circular logs, there should be a feature request for this to be implemented.
