it doesn't work.
So the working solution is:
// Requires: import java.util.Arrays; and import java.util.stream.Collectors;
fileWriter.append(
        Arrays.asList(entry.getProperties().getCellId(),
                c1.get(0), c1.get(1),
                c2.get(0), c2.get(1),
                c3.get(0), c3.get(1),
                c4.get(0), c4.get(1))
            .stream()
            .map(Object::toString)
            .collect(Collectors.joining(","))   // join with commas instead of tabs
        + "\r\n");                              // terminate each record with CRLF
One extra step: add a header line to the file.
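A minimal sketch of that header step, assuming fileWriter is the same java.io.Writer used above; the column names here are placeholders, substitute the real ones for your data (in the same order as the values written in each row):

// Hypothetical column names -- written once, before any data rows,
// so Splunk can pick them up as the CSV header.
fileWriter.append(String.join(",",
        "cellId",
        "c1_x", "c1_y",
        "c2_x", "c2_y",
        "c3_x", "c3_y",
        "c4_x", "c4_y")
    + "\r\n");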
Then Splunk does what's expected:
1. reads the file line by line
2. doesn't mistake the first data line for a header
3. correctly splits the fields, instead of ignoring the separators the way it did with \t. (Back when Splunk put the whole line into the first field, the UI even showed literal '\t' sequences between the values.)
Weird!