I assume that each CSV file is ingested as one source. In that case, a search by source retrieves all entries in that file; all you need to do is add the headers back. But keep in mind:

- You cannot get the original header order back. Headers will be in ASCII order.
- You cannot get the original row order back. Rows will also be sorted in ASCII order.
- If your original headers contain patterns incompatible with Splunk's header rules, those headers will have been rewritten by Splunk.

Here, I will use an ordinary CSV as an example:

```
c, b, a
1,2,3
4,,5
,6,7
8,,
```

You can use the following search:

index=myindex sourcetype=mysourcetype source=mycsv
| tojson
| appendpipe
[foreach *
[eval header = mvappend(header, "<<FIELD>>")]]
| fields header
| where isnotnull(header)
| stats values(header) as header values(_raw) as json
| foreach json mode=multivalue
[ eval csv = mvappend(csv, "\"" . mvjoin(mvmap(header, if(isnull(spath(<<ITEM>>, header)),"", spath(<<ITEM>>, header))), "\",\"") . "\"")]
| eval csv = mvjoin(header, ",") . "
" . mvjoin(csv, "
")

The mock data will give a csv field with the following value:

```
a,b,c
"3","2","1"
"5","","4"
"7","6",""
"","","8"
```

As explained above, much depends on the original CSV. For example, I took the precaution of quoting each "cell" even though your original CSV may not use quotation marks, and I did not quote headers even though your original CSV may have used them. With certain column names and/or cell values, additional coding will be needed. You can play with the following data emulation and compare it with real data.

| makeresults format=csv data="c, b, a
1,2,3
4,,5
,6,7
8,,"
| foreach *
[eval _raw = if(isnull(_raw), "", _raw . ",") . if(isnull(<<FIELD>>), "", <<FIELD>>)]
The above emulates:

```
index=myindex sourcetype=mysourcetype source=mycsv
```
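Outside Splunk, the same reconstruction logic is easy to prototype. Below is a minimal Python sketch (not SPL; `rebuild_csv` and the mock events are my own illustrative names, not part of the search above) that mirrors what the search does: take the JSON objects produced by `tojson`, build the header as the ASCII-sorted union of field names, and emit quoted cells with empty strings for missing values.

```python
import json

def rebuild_csv(json_rows):
    """Rebuild a CSV from JSON event strings, mirroring the SPL search:
    header = ASCII-sorted union of field names (original order is lost),
    cells quoted, missing fields emitted as empty strings."""
    rows = [json.loads(r) for r in json_rows]
    # Union of all field names across events, in ASCII order.
    header = sorted({k for row in rows for k in row})
    lines = [",".join(header)]  # headers left unquoted, as in the SPL
    for row in rows:
        lines.append(",".join('"%s"' % row.get(k, "") for k in header))
    return "\n".join(lines)

# Mock events resembling the example data after `tojson`
events = [
    '{"a": "3", "b": "2", "c": "1"}',
    '{"a": "5", "c": "4"}',
    '{"a": "7", "b": "6"}',
    '{"c": "8"}',
]
print(rebuild_csv(events))
```

Running this prints the same csv value shown above, including the empty quoted cells where the original rows had gaps.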