11-24-2016 11:07:24.347 +0800 INFO TcpOutputProc - Closing stream for idx=172.16.1.81:9997
11-24-2016 11:07:24.348 +0800 INFO TcpOutputProc - Connected to idx=172.16.1.82:9997 using ACK.
11-24-2016 11:07:38.544 +0800 INFO TailReader - Archive file='/data/tutorialdata.zip' updated less than 10000ms ago, will not read it until it stops changing. File size=0
11-24-2016 11:07:48.598 +0800 INFO TailReader - Archive file='/data/tutorialdata.zip' has stopped changing, will read it now.
11-24-2016 11:07:48.598 +0800 INFO ArchiveProcessor - Handling file=/data/tutorialdata.zip
11-24-2016 11:07:48.598 +0800 INFO ArchiveProcessor - new tailer already processed path=/data/tutorialdata.zip
11-24-2016 11:07:54.207 +0800 INFO TcpOutputProc - Closing stream for idx=172.16.1.82:9997
11-24-2016 11:07:54.207 +0800 INFO TcpOutputProc - Connected to idx=172.16.1.81:9997 using ACK.
1. The forwarder has already handled the file. How can we check whether it was successfully forwarded to the indexer cluster?
2. The forwarder keeps switching between the connected indexers. Is this normal, or is it a communication issue between the forwarder and the indexers?
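For question 1, a possible way to check delivery is to query Splunk's internal logs. The searches below are a hedged sketch assuming the default internal-log forwarding is enabled and the file was monitored from /data/tutorialdata.zip; field names such as sourceIp come from the standard metrics.log format and may differ in your environment:

```
# On the search head: did any events from the file land in an index?
index=* source="/data/tutorialdata.zip"

# On the indexers: are forwarder connections arriving?
index=_internal source=*metrics.log* group=tcpin_connections
| stats count by sourceIp
```

If the second search shows the forwarder's IP but the first returns nothing, the connection is fine and the problem is likely on the input/parsing side rather than the network.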
Thanks for the reply. Glad to know that switching between connected indexers is normal behavior; that makes this issue easier to troubleshoot. We tried another file, price.csv.zip, and ran the search * source="/data/price.csv.zip". IT WORKS. Therefore we think the issue is specific to the file.
Actually, when we created the indexers in the cluster, we cloned a previous distributed indexer where we had already forwarded tutorialdata.zip. Although we removed all the databases on the new indexers, would it still keep a hash or some other record marking the file as forwarded? When we forward tutorialdata.zip again, would the indexer ignore it based on that hash?
If so, how can we clean those hash records so that the new indexers will accept the duplicate file?
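For reference, Splunk's duplicate-file detection (a CRC of the file's head) is tracked in the *forwarder's* fishbucket, not on the indexers. A hedged sketch of two standard ways to make a previously seen file re-indexable, assuming a default $SPLUNK_HOME layout on the forwarder:

```
# Option 1: reset the fishbucket entry for the file (run on the forwarder)
splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db \
    --file /data/tutorialdata.zip --reset

# Option 2: change the CRC calculation in inputs.conf so the file looks new
[monitor:///data/tutorialdata.zip]
crcSalt = <SOURCE>
```

Either change requires restarting the forwarder, and note that both options can cause duplicate indexing of files the forwarder has already sent.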