Importing collectd csv data for consumption by Splunk_TA_linux (3412)

DUThibault
Contributor

I’m trying to use the Splunk_TA_linux app (3412) with an old system (CentOS 5 vintage) as the target. Getting collectd to send its observations to Splunk directly is problematic (the collectd version is too old and I’m limited in what I can change on the target), so I’ve been forced to set up collectd to merely dump its data locally in csv format, and I intend to have the Universal Forwarder monitor the data dump directory. The problem is converting the event formats into what Splunk_TA_linux expects, namely event types such as linux_collectd_cpu, linux_collectd_memory, and so forth. I think I need to define a bunch of new sourcetypes that will transform the events into the various expected event types. The forwarder is limited to INDEXED_EXTRACTIONS, but that should be enough.

Collectd has been configured to monitor several system metrics and uses its csv plugin for output. The csv files go in the /var/collectd/csv folder, where collectd creates a single subfolder named after the host (in this case, sv3vm5b.etv.lab). Under that are a bunch of subfolders for the various metrics: cpu-0, cpu-1, cpu-2, cpu-3, df, disk-vda, disk-vda1, disk-vda2, interface, irq, load, memory, processes, processes-all, swap, tcp-conns-22-local, tcp-conns-111-local, tcp-conns-698-local, tcp-conns-2207-local, tcp-conns-2208-local, tcp-conns-8089-local, uptime. The cpu-* folders track several cpu metrics (idle, interrupt, nice, softirq, steal, system, user, wait). Each metric generates daily files, e.g. cpu-idle-2017-12-12, cpu-idle-2017-12-13, etc. The contents of cpu-idle-<date> are:

epoch,value
1513025715,491259
1513025725,492242
...
Again, this pattern is the same for the other files: a header line listing the fields (although the names are pretty generic), then regular measurements consisting of a Unix timestamp followed by one to three integer or floating-point values. What the collectd input plugins measure is documented on the collectd wiki.
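
For instance, a three-value file such as load/load-<date> would look something like this (header names taken from collectd's types.db for the load type; the numbers are illustrative, not real data):

epoch,shortterm,midterm,longterm
1513025715,0.05,0.10,0.08
...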

In collectd JSON or Graphite mode, the Splunk source type is linux:collectd:http:json or linux:collectd:graphite, the event type is linux_collectd_cpu, and the data model is "ITSI OS Model Performance.CPU". Splunk_TA_linux's eventtypes.conf ties linux_collectd_cpu to those two source types, which gives rise to a first question: will Splunk_TA_linux's eventtypes.conf need tweaking?
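
For reference, an eventtypes.conf stanza is just a name tied to a search, so the add-on's definition is presumably something along these lines (the exact search string is an assumption on my part; the real stanza may also filter on the collectd plugin name):

[linux_collectd_cpu]
search = sourcetype="linux:collectd:http:json" OR sourcetype="linux:collectd:graphite"

If the csv events end up carrying one of those two sourcetypes, the event type should match without tweaking; if they keep a custom sourcetype, the search would have to be extended to include it.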

Assuming I set the forwarder to monitor /var/collectd/csv/*/cpu-*/cpu-idle-* (can I specify paths using wildcards like that?), I could then set the source type for those daily files to a custom type. The process would be repeated for the various other collectd files and folders, resulting in a slew of custom source types.
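
Monitor stanzas do accept * wildcards in paths (and ... for recursive matching), so the forwarder's inputs.conf could look something like this (the index name is a placeholder):

[monitor:///var/collectd/csv/*/cpu-*/cpu-idle-*]
sourcetype = collectd_csv_cpu_idle
index = main
disabled = false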

source type: collectd_csv_cpu_idle
dest app: Search & Reporting (should this be Splunk_TA_linux ?)
category: Metrics
indexed extractions: csv
timestamp: auto (this will recognise a Unix timestamp, right?)
field delimiter: comma
quote character: double quote (unused)
File preamble: ^epoch,value$
Field names: custom

…and that’s where I’m stumped. This expects a comma-separated list of field names. Is the first one _time, or is that assumed? The “ITSI OS Model Performance.CPU” documentation has no fields for the jiffy counts (cpu-idle, -interrupt, -nice, -softirq, -steal, -system, -user, -wait report the number of jiffies spent in each of the possible CPU states: idle, IRQ, nice, softIRQ, steal, system, user, wait-IO) but does have cpu_time and cpu_user_percent fields. Isn’t there supposed to be a correspondence? Is Splunk_TA_linux further transforming the collectd inputs to fit them to the data models, so that I need more than just INDEXED_EXTRACTIONS? And what about fields that can only be extracted from the source paths, like the host (sv3vm5b.etv.lab) and the number of CPUs?
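
For what it’s worth, in props.conf terms the settings above would come out roughly like this (the field names are my own guesses; with TIMESTAMP_FIELDS pointing at the epoch column, _time is derived from it rather than declared as a field name):

[collectd_csv_cpu_idle]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
PREAMBLE_REGEX = ^epoch,value$
FIELD_NAMES = epoch,cpu_idle_jiffies
TIMESTAMP_FIELDS = epoch
TIME_FORMAT = %s
category = Metrics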


DUThibault
Contributor

I consider this solved now. I use the csv plugin to write the metrics to a local directory (cleaned by a weekly cron job that deletes older files), and I have a Splunk Universal Forwarder massage the events into the linux:collectd:graphite format before sending them to the indexer/search head as such. Contact me for the details.
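
A minimal sketch of such a cleanup job, assuming a seven-day retention window (the path and schedule are whatever your setup uses), e.g. as an executable script in /etc/cron.weekly/:

#!/bin/sh
# Delete collectd csv files older than seven days
find /var/collectd/csv -type f -mtime +7 -delete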


DUThibault
Contributor

Trying a different tack. I saw at https://answers.splunk.com/answers/593409/transformsconf-wont-let-me-change-the-sourcetype.html that I can change the sourcetype of events, so I figured I would use the csv source (e.g. .../cpu-*/cpu-nice-*), transform its _raw data into linux:collectd:graphite format, and switch its sourcetype from collectd_csv_cpu_nice to linux:collectd:graphite.

But it seems I failed, as the Splunk instance continues to receive just collectd_csv_cpu_nice data in the untransformed format.

To be clear, the csv files have lines like this:

<unix_timestamp>,<cpu_nice_jiffies>

whereas linux:collectd:graphite has lines like this:

<host>.cpu-<cpu>.cpu-nice.value <cpu_nice_jiffies> <unix_timestamp>

(Actually it expects percentages in floating point, but my old system (collectd 4.10) cannot supply that, only integer jiffy counts. I'm sure Splunk_TA_linux won't mind...much.)
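
Concretely, using the sample numbers from the csv excerpt above (illustrative only), a transformed event should come out as:

sv3vm5b.etv.lab.cpu-0.cpu-nice.value 492242 1513025725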

So I added the following to props.conf and transforms.conf, both on the Splunk instance and on the forwarder:

props.conf

[collectd_csv_cpu_nice]
DATETIME_CONFIG =
HEADER_FIELD_LINE_NUMBER = 1
INDEXED_EXTRACTIONS = csv
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %s
category = Metrics
description = collectd CSV cpu-nice metric
disabled = false
pulldown_type = 1
REPORT-COLLECTD-CSV-CPU-NUMBER = TRANSFORM-COLLECTD-CSV-CPU-NUMBER
REPORT-COLLECTD-CSV-CPU-NICE = REPORT-COLLECTD-CSV-CPU-NICE
REPORT-COLLECTD-CSV-CPU-NICE-PAYLOAD = TRANSFORM-COLLECTD-CSV-CPU-NICE-PAYLOAD
REPORT-COLLECTD-CSV-CPU-NICE-SOURCETYPE = TRANSFORM-COLLECTD-CSV-CPU-NICE-SOURCETYPE

transforms.conf

# Extracts the CPU number from the source's enclosing directory name
[TRANSFORM-COLLECTD-CSV-CPU-NUMBER]
FORMAT = cpu::$1
REGEX = ^.*/cpu-([0-9]+)/
SOURCE_KEY = source

# Overall input format
[REPORT-COLLECTD-CSV-CPU-NICE]
DELIMS = ","
FIELDS = "unix_timestamp","cpu_nice_jiffies"

# Rewrites the _raw line to conform to linux:collectd:graphite format
[TRANSFORM-COLLECTD-CSV-CPU-NICE-PAYLOAD]
REGEX = (.*?)
FORMAT = _raw::$host.cpu-$cpu.cpu-idle.value $cpu_nice_jiffies $unix_timestamp

# Changes the sourcetype to linux:collectd:graphite
[TRANSFORM-COLLECTD-CSV-CPU-NICE-SOURCETYPE]
DEST_KEY = MetaData:Sourcetype
REGEX = (.*?)
FORMAT = sourcetype::linux:collectd:graphite

I probably wrote the TRANSFORM-COLLECTD-CSV-CPU-NICE-PAYLOAD FORMAT line all wrong. Help?
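
Rereading the spec files, I suspect at least three problems with the attempt above: (1) FORMAT can only reference capture groups ($1, $2, ...) from the same stanza's REGEX, not field names like $host or $cpu; (2) rewriting _raw and MetaData:Sourcetype are index-time operations, which must be wired up with a TRANSFORMS- class in props.conf, not REPORT- (REPORT- is search-time only); and (3) index-time transforms run in the parsing pipeline, i.e. on an indexer or heavy forwarder, and events that a universal forwarder has already parsed with INDEXED_EXTRACTIONS are not re-parsed by the indexer, so these transforms would never fire there. A sketch under those assumptions, with the host/CPU prefix left out because one transform's FORMAT cannot see the source path and _raw at the same time:

props.conf (on the indexer or a heavy forwarder, without INDEXED_EXTRACTIONS)

[collectd_csv_cpu_nice]
TRANSFORMS-collectd-csv-cpu-nice = TRANSFORM-COLLECTD-CSV-CPU-NICE-PAYLOAD, TRANSFORM-COLLECTD-CSV-CPU-NICE-SOURCETYPE

transforms.conf

# Rewrites _raw from "<ts>,<jiffies>" to "cpu-nice.value <jiffies> <ts>";
# only $1/$2 from this stanza's REGEX are available to FORMAT
[TRANSFORM-COLLECTD-CSV-CPU-NICE-PAYLOAD]
SOURCE_KEY = _raw
REGEX = ^(\d+),(\d+(?:\.\d+)?)$
DEST_KEY = _raw
FORMAT = cpu-nice.value $2 $1

# Changes the sourcetype; the REGEX must actually match for FORMAT to apply
[TRANSFORM-COLLECTD-CSV-CPU-NICE-SOURCETYPE]
SOURCE_KEY = _raw
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::linux:collectd:graphite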


DUThibault
Contributor

See https://answers.splunk.com/answers/615924/ for the rest of the solution. At this point I can get the collectd csv data into Splunk as sourcetype linux:collectd:graphite; my remaining problems have to do with collectd itself.


DUThibault
Contributor

Some progress.

source type: collectd_csv_cpu_idle
dest app: Search & Reporting
category: Custom (would Metrics be better?)
indexed extractions: csv
timestamp:
extraction: Advanced
time zone: auto
timestamp format: %s
timestamp fields: (blank)
Delimited settings:
field delimiter: comma
quote character: double quote (unused)
File preamble: (blank)
Field names: Line...
Field names on line number: 1
Advanced:
SHOULD_LINEMERGE: false (was true by default but since csv and collectd_http use false this makes more sense)

Then defined some extractions and transformations.

REPORT-COLLECTD-CSV-CPU-IDLE transformation
type: delimiter-based
delimiters: ","
field list: "unix_timestamp","cpu_idle_jiffies"
source key: _raw

TRANSFORM-COLLECTD-CSV-CPU-NUMBER transformation
type: regex-based
regular expression: ^.*/cpu-([0-9]+)/
format: cpu::$1
source key: source

collectd_csv_cpu_idle : REPORT-COLLECTD-CSV-CPU-IDLE extraction
extraction/transform: REPORT-COLLECTD-CSV-CPU-IDLE

collectd_csv_cpu_idle : REPORT-COLLECTD-CSV-CPU-NUMBER extraction
extraction/transform: TRANSFORM-COLLECTD-CSV-CPU-NUMBER

The collectd file header was still getting through, so based on answer 586952 I've tried copying the Splunk instance's /opt/splunk/etc/apps/search/local/props.conf and transforms.conf to the universal forwarder's /opt/splunkforwarder/etc/apps/_server_app_<server class>/local/. Since collectd generates new files every day, we'll know tomorrow whether this has gotten rid of the header lines being read as data (event _raw string "epoch,value").

I can readily extend this pattern of sourcetypes, transformations and extractions to map the rest of the collectd data (df, interface, irq and so on).

Now the problem is how to map this into the CIM. According to http://docs.splunk.com/Documentation/AddOns/released/Linux/Configure2, for instance, the Splunk Add-on for Linux expects the sourcetype linux:collectd:http:json, but this does not appear in my list of sourcetypes, so I can't even inspect it to know what's in it.
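
One way to inspect a sourcetype that has no data behind it yet is btool, which prints the effective stanza assembled from the .conf files on disk (the path assumes a default install with the add-on present on that instance):

/opt/splunk/bin/splunk btool props list linux:collectd:http:json --debug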
