I am monitoring file changes on several servers, but I don't like the shared-directory approach, so I installed sync software on these servers. When there is a file change, the changed file is synced to a single server for central management, and I use Splunk to monitor that one server. After some time, I found that the index volume was excessive: the files are about 50 MB in total, but the index volume is 5 GB per day. The file names do not change during the sync process, but every time the sync runs it deletes the old file and creates a new file with the same name. Is this the reason for the index overhead?
Are you using Monitor or FSChange? If you are using Monitor, then Splunk is grabbing the content of the file and indexing it. If the file changes, it grabs the new data. If you are using FSChange, then every time the file changes a new MD5 is created and Splunk will log that.
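For reference, a minimal monitor stanza in inputs.conf looks something like the sketch below; the path, index, and sourcetype are placeholders, not values from this thread. Splunk's monitor input identifies a file by a CRC of its initial bytes, so a sync tool that deletes and recreates a file can change that CRC and cause the whole file to be re-indexed:

```
# inputs.conf -- monitor input sketch; path/index/sourcetype are hypothetical
[monitor:///var/log/synced]
index = sync_files
sourcetype = synced_logs

# initCrcLength controls how many initial bytes Splunk checksums to
# recognize a file it has already seen; raising it can help when many
# files start with identical headers.
#initCrcLength = 1024

# crcSalt = <SOURCE> adds the full path to that checksum. Use with care:
# it forces re-indexing whenever a file at the same path is replaced,
# which would make the re-indexing problem described here worse.
#crcSalt = <SOURCE>
```

Whether re-indexing happens therefore depends on the checksum of the file's beginning, not on its name, which is relevant to the delete-and-recreate behavior described above.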
I am using Monitor, but every time the content of a file changes, Splunk grabs the whole file again, so the index volume runs out quickly. How does Splunk identify a file as new? The monitored files keep their names unchanged, but every time a file changes it is deleted and created again. Is this the cause of the problem?