I was under the impression that, w/ the monitor technique, the same named file would be re-read whenever its file size changes (i.e., new data is appended). This would retain history, as long as each record has a timestamp.
An alternate solution would be to create a file w/ a timestamp on the end of the name (and a timestamp on each record), monitor the file using a wildcard (myfile*.csv), set the file up as a csv file, and assign it a specific sourcetype for easy reporting in Splunk.
If you can't put a timestamp on each record, I believe Splunk falls back to using the index time as _time, if I recall correctly. If that works for you, that may be an option as well.
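If you want to pin event time to index time explicitly, I believe you can force it in props.conf w/ DATETIME_CONFIG (a minimal sketch, reusing the sourcetype name from step 3 below):

[mySourcetype]
# Skip timestamp extraction and assign the current index time as _time
DATETIME_CONFIG = CURRENT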
1) Generate the csv file w/ a timestamp on the end of the name and on each record. The technique will vary based on the source system, etc. I implemented this command line on the crontab to measure disk usage once per day. It creates a file w/ a timestamp on the end of the name, adds a header line, and adds a timestamp to the front of each row (a sample crontab entry follows the command).
echo "Timestamp,Filesystem,Used,Available" >> //apps/wcm-splunk/work/crd/log/prod/diskWatcher_"$(date +'%Y%m%d')".log; df -P | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print","$1","$3","$4 }' | gawk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0 }' >> //apps/wcm-splunk/work/crd/log/prod/diskWatcher_"$(date +'%Y%m%d')".log
2) Monitor the file in inputs.conf:
[monitor:///apps/wcm-splunk/work/crd/prod/*myFile*csv]
sourcetype = mySourcetype
disabled = false
index = myIndex
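If you went w/ the diskWatcher example from step 1, the stanza would look something like this (sourcetype and index names are just examples):

[monitor:///apps/wcm-splunk/work/crd/log/prod/diskWatcher_*.log]
sourcetype = mySourcetype
disabled = false
index = myIndex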
3) Set the file up as csv w/ a specific sourcetype in props.conf:
[mySourcetype]
NO_BINARY_CHECK = 1
pulldown_type = 1
HEADER_MODE = firstline
FIELD_DELIMITER = ,
FIELD_QUOTE = "
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIMESTAMP_FIELDS = Timestamp
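Note: on newer Splunk versions (6.x+), the structured-data settings above (FIELD_DELIMITER, FIELD_QUOTE, TIMESTAMP_FIELDS) are, as far as I know, driven by INDEXED_EXTRACTIONS, so the stanza may also need:

[mySourcetype]
# Parse the file as structured csv data at index time
INDEXED_EXTRACTIONS = csv

Once the data is in, reporting is easy; e.g., something like:

index=myIndex sourcetype=mySourcetype | timechart span=1d avg(Used) by Filesystem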