I have a few zip files in a folder (after extraction they contain thousands of CSV files); each zip file is over 1 GB.
I use a monitor stanza to watch this folder, but Splunk does not index these zip files.
Splunk 7.3.3 Standalone
[monitor://D:\zipfolder]
index = my_index
sourcetype = my_sourcetype
crcSalt = <SOURCE>
Any suggestions?
Thanks.
Unzip the files.
Splunk doesn't index the compressed files directly. It has to uncompress them into a temporary directory first.
That's probably where it is failing (do you have enough space for uncompression? Text files typically compress quite well, so it's not uncommon to need about ten times as much free space as the archive size for unpacking the archive).
To make sure your ingestion is going properly, just uncompress the files on your own before ingestion.
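If you want to script that pre-extraction step, here is a minimal sketch in Python (only D:\zipfolder comes from your question; D:\csv_extracted is a hypothetical destination folder you would then monitor). It checks free disk space against the uncompressed size before extracting each archive:

import shutil
import zipfile
from pathlib import Path

ZIP_DIR = Path(r"D:\zipfolder")      # folder with the 1 GB+ archives (from the question)
OUT_DIR = Path(r"D:\csv_extracted")  # hypothetical folder a monitor stanza would watch

OUT_DIR.mkdir(parents=True, exist_ok=True)

for zip_path in ZIP_DIR.glob("*.zip"):
    with zipfile.ZipFile(zip_path) as zf:
        # Sum of uncompressed member sizes tells us how much space extraction needs.
        needed = sum(info.file_size for info in zf.infolist())
        free = shutil.disk_usage(OUT_DIR).free
        if needed > free:
            print(f"Skipping {zip_path.name}: needs {needed} bytes, only {free} free")
            continue
        print(f"Extracting {zip_path.name} ({needed} bytes uncompressed)")
        zf.extractall(OUT_DIR)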
Guys, any thoughts about this?
Unzip these files, or use batch mode (batch://.../*.zip, see the sketch below) to input the zip files.
If you have a lot of zip files, write a shell script to unzip them.
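A batch input stanza might look roughly like this (a sketch modeled on your monitor stanza; note that batch inputs require move_policy = sinkhole, which makes Splunk delete each file after indexing it):

[batch://D:\zipfolder\*.zip]
move_policy = sinkhole
index = my_index
sourcetype = my_sourcetype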
Yes. In the end I used a shell script to unzip the thousands of zip files.
I would still like to know what happened in Splunk.
Is it that the ArchiveProcessor cannot unzip large zip files, or that unzipping a large zip takes so long that Splunk skips it?
Is there any size limit for zip files during indexing?
I don't know why Splunk wouldn't read the zip files. Perhaps, as you suggest, they're too big. Is there anything in the logs about it?
The logs are the same for each zip file:
12-15-2020 14:17:28.450 +0900 INFO ArchiveProcessor - Handling file=D:\logfile.zip
12-15-2020 14:17:28.450 +0900 INFO ArchiveProcessor - reading path=D:\logfile.zip (seek=0 len=1153047505)
12-15-2020 14:17:59.761 +0900 INFO ArchiveProcessor - Finished processing file 'D:\logfile.zip', removing from stats