
Importing collectd csv data for consumption by Splunk_TA_linux (3412)


I’m trying to use the Splunk_TA_linux app (3412) with an old system (CentOS 5 vintage) as the target. Getting collectd to send its observations to Splunk directly is problematic (the collectd version is too old, and I’m limited in what I can change on the target), so I’ve been forced to set up collectd to merely dump its data locally in csv format, and I intend to have the Universal Forwarder monitor the data dump directory. The problem is converting the event formats into what Splunk_TA_linux expects, namely event types such as linux_collectd_cpu, linux_collectd_memory, and so forth. I think I need to define a bunch of new source types that will transform the events into the various expected event types. The forwarder is limited to INDEXED_EXTRACTIONS, but that should be enough.

Collectd has been configured to monitor several system metrics, and uses its csv plugin for output. The csv files go in the /var/collectd/csv folder. Collectd creates a single subfolder there, named using <hostname> (in this case, sv3vm5b.etv.lab). Under that are subfolders for the various metrics: cpu-0, cpu-1, cpu-2, cpu-3, df, disk-vda, disk-vda1, disk-vda2, interface, irq, load, memory, processes, processes-all, swap, tcp-conns-22-local, tcp-conns-111-local, tcp-conns-698-local, tcp-conns-2207-local, tcp-conns-2208-local, tcp-conns-8089-local, uptime. The cpu-* folders track several cpu metrics (idle, interrupt, nice, softirq, steal, system, user, wait). Each metric generates daily files, e.g. cpu-idle-2017-12-12, cpu-idle-2017-12-13, etc. The contents of cpu-idle-<date> are:



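For illustration, a cpu-idle-<date> file has this shape (the header line is what collectd's csv plugin writes; the data values below are invented):

```text
epoch,value
1513123185,2816629
1513123195,2816703
1513123205,2816771
```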
Again, this pattern is the same for the other files: a header line listing the fields (although the names are pretty generic), then regular measurements consisting of a Unix timestamp followed by one to three integer or floating-point values. What the collectd input plugins measure is documented on the collectd wiki.

In collectd JSON or Graphite mode, the Splunk source type is linux:collectd:http:json or linux:collectd:graphite, the event type is linux_collectd_cpu, and the data model is "ITSI OS Model Performance.CPU". Splunk_TA_linux's eventtypes.conf ties linux_collectd_cpu to those two source types, which raises a first question: will Splunk_TA_linux's eventtypes.conf need tweaking?

Assuming I set the forwarder to monitor /var/collectd/csv/*/cpu-*/cpu-idle-* (can I specify paths with wildcards like that?), I could then assign those daily files a custom source type. The process would be repeated for the various other collectd files and folders, resulting in a slew of custom source types.
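For what it's worth, the corresponding monitor stanza in the forwarder's inputs.conf would presumably look something like this (the sourcetype name is my own choice; Splunk's monitor input does accept * wildcards in paths):

```ini
[monitor:///var/collectd/csv/*/cpu-*/cpu-idle-*]
sourcetype = collectd_csv_cpu_idle
disabled = false
```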

source type: collectd_csv_cpu_idle
dest app: Search & Reporting (should this be Splunk_TA_linux ?)
category: Metrics
indexed extractions: csv
timestamp: auto (this will recognise a Unix timestamp, right?)
field delimiter: comma
quote character: double quote (unused)
File preamble: ^epoch,value$
Field names: custom

…and that’s where I’m stumped. This expects a comma-separated list of field names. Is the first one _time, or is that assumed? The “ITSI OS Model Performance.CPU” documentation has no fields for the jiffy counts (cpu-idle, -interrupt, -nice, -softirq, -steal, -system, -user, -wait report the number of jiffies spent in each possible CPU state: idle, IRQ, nice, softIRQ, steal, system, user, wait-IO respectively) but does have cpu_time and cpu_user_percent fields. Isn’t there supposed to be a correspondence? Is Splunk_TA_linux further transforming the collectd inputs to fit them to the data models, so that I need more than just INDEXED_EXTRACTIONS? And what about those fields that can only be extracted from the source path, like the host (sv3vm5b.etv.lab) and the CPU number, for instance?
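If I had to guess at the props.conf stanza these UI settings amount to, it would be roughly the following (the FIELD_NAMES values are my own guess, which is exactly the open question):

```ini
[collectd_csv_cpu_idle]
category = Metrics
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
FIELD_QUOTE = "
PREAMBLE_REGEX = ^epoch,value$
FIELD_NAMES = unix_timestamp,cpu_idle_jiffies
TIMESTAMP_FIELDS = unix_timestamp
TIME_FORMAT = %s
```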


Re: Importing collectd csv data for consumption by Splunk_TA_linux (3412)


Some progress.

source type: collectd_csv_cpu_idle
dest app: Search & Reporting
category: Custom (would Metrics be better?)
indexed extractions: csv
extraction: Advanced
time zone: auto
timestamp format: %s
timestamp fields: (blank)
Delimited settings:
field delimiter: comma
quote character: double quote (unused)
File preamble: (blank)
Field names: Line...
Field names on line number: 1
SHOULD_LINEMERGE: false (true by default, but since the csv and collectd_http source types use false this makes more sense)
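Spelled out as a props.conf stanza, I believe the above comes to something like this (a sketch, not copied from what the UI actually saved):

```ini
[collectd_csv_cpu_idle]
category = Custom
INDEXED_EXTRACTIONS = csv
TIME_FORMAT = %s
FIELD_DELIMITER = ,
FIELD_QUOTE = "
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false
```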

Then defined some extractions and transformations.

type: delimiter-based
delimiters: ","
field list: "unix_timestamp","cpu_idle_jiffies"
source key: _raw

type: regex-based
regular expression: ^.*/cpu-([0-9]+)/
format: cpu::$1
source key: source
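In transforms.conf terms these two would read something like the following (the stanza names here are my own invention; props.conf ties them to the source type via REPORT- settings):

```ini
[collectd-csv-cpu-idle-extraction]
DELIMS = ","
FIELDS = "unix_timestamp","cpu_idle_jiffies"
SOURCE_KEY = _raw

[collectd-csv-cpu-number]
REGEX = ^.*/cpu-([0-9]+)/
FORMAT = cpu::$1
SOURCE_KEY = source
```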

collectd_csv_cpu_idle : REPORT-COLLECTD-CSV-CPU-IDLE extraction
extraction/transform: REPORT-COLLECTD-CSV-CPU-IDLE

collectd_csv_cpu_idle : REPORT-COLLECTD-CSV-CPU-NUMBER extraction

The collectd file header was still getting through, so based on answer 586952 I've tried copying the Splunk instance's /opt/splunk/etc/apps/search/local/props.conf and transforms.conf to the universal forwarder's /opt/splunkforwarder/etc/apps/_server_app_<server class>/local/. Since collectd generates new files every day, we'll know tomorrow whether this has gotten rid of the headers being read as data (events with _raw string "epoch,value"). I can readily extend this pattern of source types, transformations and extractions to map the rest of the collectd data (df, interface, irq and so on).

Now the problem is how to map this into the CIM. From what I've read, the Splunk Add-on for Linux expects the sourcetype linux:collectd:http:json, but this does not appear in my list of source types, so I can't even inspect it to see what's in it.


Re: Importing collectd csv data for consumption by Splunk_TA_linux (3412)


Trying a different tack. I saw that I can change the sourcetype of events, so I figured I would use the csv source (e.g. .../cpu-*/cpu-nice-*), transform its raw data into linux:collectd:graphite format, and switch its sourcetype from collectd_csv_cpu_nice to linux:collectd:graphite.

But it seems I failed, as the Splunk instance continues to receive just collectd_csv_cpu_nice data in the untransformed format.

To be clear, the csv files have lines like this:
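For example (values invented), a cpu-nice csv line:

```text
1513123195,104217
```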


whereas linux:collectd:graphite has lines like this:

<host>.cpu-<cpu>.cpu-nice.value <cpu_nice_jiffies> <unix_timestamp>

(Actually it expects percentages in floating point, but my old system (collectd 4.10) cannot supply that, only integer jiffy counts. I'm sure Splunk_TA_linux won't mind...much.)

So I added the following to props.conf and transforms.conf on the Splunk instance and on the forwarder:


[collectd_csv_cpu_nice]
category = Metrics
description = collectd CSV cpu-nice metric
disabled = false
pulldown_type = 1


# Extracts the CPU number from the source's enclosing directory name
FORMAT = cpu::$1
REGEX = ^.*/cpu-([0-9]+)/
SOURCE_KEY = source

# Overall input format
DELIMS = ","
FIELDS = "unix_timestamp","cpu_nice_jiffies"

# Rewrites the raw line to conform to linux:collectd:graphite format
[TRANSFORM-COLLECTD-CSV-CPU-NICE-PAYLOAD]
REGEX = (.*?)
FORMAT = _raw::$host.cpu-$cpu.cpu-idle.value $cpu_nice_jiffies $unix_timestamp

# Changes the sourcetype to linux:collectd:graphite
DEST_KEY = MetaData:Sourcetype
REGEX = (.*?)
FORMAT = sourcetype::linux:collectd:graphite

I probably wrote the TRANSFORM-COLLECTD-CSV-CPU-NICE-PAYLOAD FORMAT line all wrong. Help?


Re: Importing collectd csv data for consumption by Splunk_TA_linux (3412)


See my other answer for the rest of the solution. At this point I can get the collectd csv data into Splunk as sourcetype linux:collectd:graphite; my remaining problems have to do with collectd itself.


Re: Importing collectd csv data for consumption by Splunk_TA_linux (3412)


I consider this solved now. I use the csv plugin to write the metrics to a local directory (cleaned by a weekly cron job that deletes older files), and I have a Splunk Universal Forwarder massage the events into the linux:collectd:graphite format before sending them to the indexer/search head as such. Contact me for the details.

