
Ingest CSV file into a metrics index

patelmc
Explorer

I have the following data file which I want to ingest as metrics data. Is it possible?

Node_Name, IfIndex, date_time, ifInOctets, ifOutOctets, ifInErrors, ifOutErrors, locIfCarTrans, locIfInCRC, locIfOutputQueueDrops, locIfInputQueueDrops, IfName, IfType, IfSpeed, IfAlias
server1.com,066,20190821 13:01:10,0,350782,0,0,0,0,0,0,Gi2/42,ethernetCsmacd,1 Gbps, eth6-1.1.1.1
server2.com,051,20190821 13:01:10,0,0,0,0,0,0,0,0,Gi2/3,ethernetCsmacd,1 Gbps, Gig1.6-Private-intf
server3.com,111,20190821 13:01:25,0,0,0,0,0,0,0,0,Gi3/39,ethernetCsmacd,1 Gbps, ETH0 decom
server4.com,003,20190821 13:01:30,15179,4690,0,0,0,0,0,0,Gi0/1,ethernetCsmacd,40 Mbps, Gi0/2.3412 : MetroE 40M

Notice that the date_time format is yyyymmdd HH:MM:SS and it is in the 3rd column, NOT in the first column.
The metric names are ifInOctets, ifOutOctets, ifInErrors, ifOutErrors, locIfCarTrans, locIfInCRC, locIfOutputQueueDrops, locIfInputQueueDrops.
The remaining fields, Node_Name, IfIndex, IfName, IfType, IfSpeed, and IfAlias, need to be used as dimensions.


richgalloway
SplunkTrust

Your CSV is not in the expected format. See https://docs.splunk.com/Documentation/Splunk/7.3.1/Metrics/GetMetricsInOther#Get_metrics_in_from_fil....

Another method is to define a log-to-metrics sourcetype. Go to Settings -> Source types and create a new source type. Be sure to choose "Log to Metrics" as the Category. Then click on the Metrics tab and enter your measures and dimensions. Next, create props and transforms for your sourcetype to extract the fields.
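
For reference, the stanzas that this approach ends up generating look roughly like the following. This is only a sketch; the sourcetype name my_csv_metrics and the measure list are placeholders for whatever you configure in the UI. Any extracted field not listed as a measure is treated as a dimension.

props.conf:
[my_csv_metrics]
category = Log to Metrics
INDEXED_EXTRACTIONS = csv
METRIC-SCHEMA-TRANSFORMS = metric-schema:my_csv_metrics

transforms.conf:
[metric-schema:my_csv_metrics]
METRIC-SCHEMA-MEASURES = ifInOctets,ifOutOctets,ifInErrors,ifOutErrors,locIfCarTrans,locIfInCRC,locIfOutputQueueDrops,locIfInputQueueDrops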

---
If this reply helps you, Karma would be appreciated.

patelmc
Explorer

Also, do I need to add a TIME_FORMAT = %Y%m%d %H:%M:%S line to props.conf, as you indicated earlier?


richgalloway
SplunkTrust

It's a best practice to always include TIME_FORMAT along with TIME_PREFIX, MAX_TIMESTAMP_LOOKAHEAD, SHOULD_LINEMERGE, LINE_BREAKER, and TRUNCATE in all props.conf stanzas.
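
For example, a fully specified stanza for a single-line CSV could look something like this; the values are illustrative defaults, not tuned to your data:

[my_csv_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y%m%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000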

---
If this reply helps you, Karma would be appreciated.

patelmc
Explorer

So in my case, what should the prefix be? I used ^ as the prefix, but it did not like it.

TIME_PREFIX = ^
TIME_FORMAT = %Y%m%d %H:%M:%S

I was seeing the following errors in splunkd.log, so I removed both TIME_PREFIX and TIME_FORMAT and the errors stopped. But I still don't get any data into this metrics index. Since I was seeing those errors, I believe the data is coming in to the indexer, so that part is not the problem; the data is probably being dropped. I no longer see any messages related to this index in splunkd.log. I can see the files are processed on the forwarder. Any clue?

08-22-2019 21:01:04.950 +0000 ERROR AggregatorMiningProcessor - Uncaught exception in Aggregator, skipping an event: Can't open DateParser XML configuration file "^": No such file or directory - data_source="/Poller/Output/snmpcoldump_metrics/if_20190822.210101.18589", data_host="pol.new.test.com", data_sourcetype="5sec_poller_interface_metrics"


richgalloway
SplunkTrust

TIME_PREFIX = ^ is correct in this case. I can't explain the error or why the data is not being indexed.

---
If this reply helps you, Karma would be appreciated.

patelmc
Explorer

This issue is resolved now. Very odd.
I had to do the following:
1) Configure props.conf and transforms.conf on the forwarder.
2) Remove the stanzas from props.conf and transforms.conf on the indexer.

I can see metrics data in the metrics index now.


patelmc
Explorer

Hi Rich,

props.conf on the indexer server
[5sec_poller_interface_metrics]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = csv
LINE_BREAKER = ([\r\n]+)
METRIC-SCHEMA-TRANSFORMS = metric-schema:5sec_poller_interface_metrics_1566415584485
NO_BINARY_CHECK = true
category = Log to Metrics
description = 5sec_poller_interface_metrics
pulldown_type = 1

transforms.conf
[metric-schema:5sec_poller_interface_metrics_1566415584485]
METRIC-SCHEMA-MEASURES = ifInOctets,ifOutOctets,ifInErrors,ifOutErrors,locIfCarTrans,locIfInCRC,locIfOutputQueueDrops,locIfInputQueueDrops


richgalloway
SplunkTrust

That looks like it would work. Verify the fields are extracted properly by commenting out the METRIC-SCHEMA-TRANSFORMS line in props.conf and sending the data to an events index.
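
Once the data is in the events index, a search along these lines (the index name is a placeholder) should show whether the measures and dimensions were extracted as expected:

index=my_test_events sourcetype=5sec_poller_interface_metrics
| table _time Node_Name IfIndex IfName ifInOctets ifOutOctets ifInErrors ifOutErrors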

---
If this reply helps you, Karma would be appreciated.

patelmc
Explorer

In that case which index gets the data?


richgalloway
SplunkTrust

The index specified in inputs.conf, which must be a metrics index.
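
For example, the monitor stanza on the forwarder would look something like this; the path is taken from your earlier error message, and the index name is a placeholder:

inputs.conf:
[monitor:///Poller/Output/snmpcoldump_metrics]
sourcetype = 5sec_poller_interface_metrics
index = my_metrics_index
disabled = false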

---
If this reply helps you, Karma would be appreciated.

patelmc
Explorer

Thanks Rich,

To simplify, I re-arranged the columns as shown below, with the timestamp as the first field, then the metrics, and then the dimensions.
I created a sourcetype log_to_monitor and used ifOutOctets, ifInErrors, ifOutErrors, locIfCarTrans, locIfInCRC, locIfOutputQueueDrops, locIfInputQueueDrops as metrics. I added a monitor on the Splunk forwarder.
However, I am not seeing any data in the metrics index.
Is this due to the date_time format coming from the log files?

date_time,ifInOctets,ifOutOctets,ifInErrors,ifOutErrors,locIfCarTrans,locIfInCRC,locIfOutputQueueDrops,locIfInputQueueDrops,Node_Name,IfIndex,IfName,IfType,IfSpeed,IfAlias
20190821 17:53:10,300,348,0,0,0,0,0,0,server1.com,001,Gi0/0/0,ethernetCsmacd,40 Mbps,Link to zzz : Gi2/2/0.3421: MetroE 40M : Vlan3424 :
20190821 17:53:40,57611,23173,0,0,0,0,0,0,server2.com,001,Gi0/0/0,ethernetCsmacd,20 Mbps,Link to yyy : Vlan3596 : MetroE 20M : Vlan3596 : IPC
20190821 17:53:40,62129,36565,0,0,0,0,0,0,server3.net,001,Gi0/0/0,ethernetCsmacd,40 Mbps,BC : Link [w/Nano] to xxx : G0/0/10 : iLite


richgalloway
SplunkTrust

If you have TIME_FORMAT = %Y%m%d %H:%M:%S in your props.conf file for that sourcetype then the date_time column should not be a factor. What are your props.conf and transforms.conf settings?
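
In the meantime, a quick way to confirm whether any measurements have reached the metrics index at all is a search along these lines (the index name is a placeholder for yours):

| mcatalog values(metric_name) WHERE index=my_metrics_index

If that returns your metric names, the data is arriving and the problem is elsewhere; if it returns nothing, the events are probably being dropped before they are indexed.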

---
If this reply helps you, Karma would be appreciated.