I'm looking for a configuration that filters out existing orders from files that I currently copy manually into a local directory, where Splunk then picks them up and indexes that hour's order info.
Each file that comes from the AS/400 contains a cumulative total of all orders up to that particular hour, with various information on each order (CustomerNumber, PONumber, Date, Time, etc.).
Basically, every time Splunk picks up one of those files, I want Splunk to index only the NEW orders rather than re-indexing the same order data from previous hours. Otherwise, a large amount of duplicate data will be indexed.
Is there a way I can do this? Let me know if any more information is needed to dig into this further.
Splunk can index only the data appended since its last read of the file, provided that you:
(1) Keep the same file name. In other words, overwrite the old file with the new file each hour.
(2) Make sure that the beginning of the file (up to the point where the new data starts) has not changed, since Splunk uses a checksum of the file's first bytes to recognize it as the same file.
But if Splunk figures out that this is a different file, it will index it from the beginning, causing the duplication that you are trying to avoid.
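If the append-only approach works for your AS/400 export, the monitor input needs nothing special. A minimal sketch of the inputs.conf stanza follows; the path and sourcetype are assumptions for illustration. Splunk identifies a monitored file by a CRC of its first 256 bytes, so if your file's header region is longer than that or changes slightly, `initCrcLength` can widen the checksum window to cover a stable portion of the file:

```ini
# inputs.conf -- a minimal sketch, not a tested configuration.
# The path and sourcetype below are assumptions for illustration.
[monitor:///data/orders/hourly_orders.txt]
sourcetype = as400_orders
# Widen the file-identity checksum beyond the default 256 bytes so it
# covers an unchanging header region; appended orders then get indexed
# as a continuation of the same file rather than as a new file.
initCrcLength = 1024
```

Note that `crcSalt` does the opposite of what you want here (it forces a file to be treated as new), so avoid it in this scenario.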
There is no way for Splunk to compare inbound data with already-indexed data before indexing. However, it is possible to de-duplicate data being retrieved during a search - although you have to do it explicitly, with the uniq or dedup commands.
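As a sketch of the search-time approach, assuming the field names from the question (PONumber, CustomerNumber, etc.) are extracted: `dedup` keeps the first event for each value of the listed fields, whereas `uniq` only drops exact, consecutive duplicate events, so `dedup` is usually the better fit for repeated order rows.

```spl
source="*hourly_orders*"
| dedup PONumber
| table Date Time CustomerNumber PONumber
```

The trade-off is that the duplicates are still stored and count against your license volume; only the search results are cleaned up.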