I have a console app that reads data from table storage and writes it out to a CSV file. I monitor each of the output folders, and I've checked that the data is being uploaded properly, which it is. However, when I look at all the records, there is disconnected data inside the index. I know this because the records in the CSV file don't match the records in Splunk.
Each entry contains 18 fields, and some of the entries are being split in the middle.
Example entry:
12/15/2015, Name, ID, Number, Guid ....
Splunk logs it as 2 separate entries:
12/15/2015,Name,ID,
Number,Guid ...
The console app runs once a day at midnight, and not all of the entries are malformed, just a few of them. The strange thing is that the data is still all there; it's just that some of the entries are split apart. Does anyone know of a workaround, or is this a bug?
I was thinking of locking the file until it's finished writing, but I'm not sure how Splunk would react to file-share locking.
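For reference, this is roughly the kind of locking I had in mind (a rough sketch in Java rather than my actual console app's code; the path and the row are just placeholders):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CsvExportSketch {
    public static void main(String[] args) throws IOException {
        Path out = Path.of("C:/exports/tablestorage.csv"); // placeholder path

        // Hold an exclusive lock on the file for the whole write so nothing
        // else can read a half-written entry while the export is in progress.
        try (FileChannel channel = FileChannel.open(out,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock lock = channel.lock()) {

            // In the real app each row comes from table storage; this single
            // hard-coded row stands in for the 18 fields.
            String row = "12/15/2015,Name,ID,Number,Guid\r\n";
            channel.write(ByteBuffer.wrap(row.getBytes(StandardCharsets.UTF_8)));
        } // the lock and the channel are released here, once the file is complete
    }
}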
I don't think this is a bug. Splits like this usually happen because Splunk tails the file while your app is still writing to it, so a partially written line can be indexed before the rest of the entry arrives. I would avoid locking the file in general, as it may have unexpected performance impacts.
What are the settings in the inputs.conf stanza that is monitoring the output folder? I would suggest this:
[monitor://yourdirectorypathhere]
index = theindexname
sourcetype = csv
ignoreOlderThan = 30d
Or, you might find a pretrained sourcetype that better fits your data here: List of pretrained sourcetypes
If you are not cleaning out the older files (which you should), the ignoreOlderThan setting will help Splunk's performance if the directory fills up with older files that have already been indexed and will never be updated.
If you don't want to use the csv sourcetype, you may need to place a props.conf file on your indexer that explicitly sets the parsing rules for the sourcetype that you choose.
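For example, a minimal props.conf stanza might look like this (the sourcetype name is a placeholder, and the time format assumes dates like the 12/15/2015 in your example):

[your_csv_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%Y
MAX_TIMESTAMP_LOOKAHEAD = 10

SHOULD_LINEMERGE = false tells Splunk to treat each line as its own event instead of trying to merge lines into multi-line events.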