Hello,
We have several CSV files of about 300K lines each, with a header, a timestamp column, and columns describing numeric KPIs.
We need to visualize the data with annotations, which we basically managed. The first attempt was with a lookup, which was slow and made writing searches a bit difficult. Then we uploaded the files into an index we have, which both improved the speed and made searching easier.
Now, we would still like a bit better performance. I came across the metrics index and the metric_csv sourcetype.
Would that be much faster than the normal index with sourcetype="csv"?
I mean, for showing about a month of data (100K lines) on a line chart with 5 KPIs?
And the main question:
- As per the documentation, metric_csv requires the following format:
"metric_timestamp","metric_name","_value","process_object_guid"
"1509997011","process.cpu.avg","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.cpu.min","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.cpu.max","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
This is basically not what we have. As mentioned, our files have a header, the first column is a timestamp, and the following columns are numeric KPIs with the values in the rows. So the example above would look as follows:
"metric_timestamp","process.cpu.avg","process.cpu.min","process.cpu.max"
"1509997011","2563454144","2563454144","2563454144"
We do not have any "dimensions".
Is there any way to read this format into the metrics index?
Or do we have to transform all the CSV files we have?
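In case anyone suggests transforming: here is a minimal sketch of what I imagine the conversion would look like, reshaping our wide layout (timestamp plus one column per KPI) into the long metric_csv layout from the documentation. The function name and column names are just assumptions for illustration; it only uses the Python standard library.

```python
import csv
import io

def wide_to_long(wide_csv_text, ts_col="metric_timestamp"):
    """Reshape a wide 'timestamp + KPI columns' CSV into the long
    metric_csv layout: metric_timestamp, metric_name, _value."""
    reader = csv.DictReader(io.StringIO(wide_csv_text))
    # Every column except the timestamp is treated as a KPI (metric name).
    kpi_cols = [c for c in reader.fieldnames if c != ts_col]
    out = io.StringIO()
    writer = csv.writer(out, quoting=csv.QUOTE_ALL, lineterminator="\n")
    writer.writerow([ts_col, "metric_name", "_value"])
    for row in reader:
        # One output row per (timestamp, KPI) pair.
        for kpi in kpi_cols:
            writer.writerow([row[ts_col], kpi, row[kpi]])
    return out.getvalue()

# Using the sample wide row from above:
wide = ('"metric_timestamp","process.cpu.avg","process.cpu.min","process.cpu.max"\n'
        '"1509997011","2563454144","2563454144","2563454144"\n')
print(wide_to_long(wide))
```

For real files, the same loop would read each source CSV and write a converted copy; since each KPI column becomes its own row, a 300K-line file with 5 KPIs would grow to roughly 1.5M rows.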
And of course the earlier question again: would the metrics index be much faster in this case than the normal index with sourcetype "csv", which we have at the moment? I mean, is it worth it at all, performance-wise?
Kind Regards,
Kamil