<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Importing collectd csv data for consumption by Splunk_TA_linux (3412) in All Apps and Add-ons</title>
    <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342195#M41296</link>
    <description>Topic feed for "Importing collectd csv data for consumption by Splunk_TA_linux (3412)" in All Apps and Add-ons.</description>
    <pubDate>Tue, 29 Sep 2020 17:42:47 GMT</pubDate>
    <dc:creator>DUThibault</dc:creator>
    <dc:date>2020-09-29T17:42:47Z</dc:date>
    <item>
      <title>Importing collectd csv data for consumption by Splunk_TA_linux (3412)</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342194#M41295</link>
      <description>&lt;P&gt;I’m trying to use the &lt;CODE&gt;Splunk_TA_linux&lt;/CODE&gt; app (3412) with an old system (CentOS 5 vintage) as the target. Getting &lt;CODE&gt;collectd&lt;/CODE&gt; to send its observations to Splunk is problematic (the &lt;CODE&gt;collectd&lt;/CODE&gt; version is too old and I’m limited as to what I can change on the target), so I’ve been forced to set up &lt;CODE&gt;collectd&lt;/CODE&gt; to merely dump its data locally in csv format, and I intend to have the Universal Forwarder monitor the data dump directory. The problem is converting the event formats into what &lt;CODE&gt;Splunk_TA_linux&lt;/CODE&gt; expects, namely event types such as &lt;CODE&gt;linux_collectd_cpu&lt;/CODE&gt;, &lt;CODE&gt;linux_collectd_memory&lt;/CODE&gt;, and so forth. I think I need to define a bunch of new sourcetypes, which will transform the events into the various expected event types. The forwarder is limited to &lt;CODE&gt;INDEXED_EXTRACTIONS&lt;/CODE&gt;, but that should be enough.&lt;/P&gt;

&lt;P&gt;&lt;CODE&gt;Collectd&lt;/CODE&gt; has been configured to monitor several system metrics, and uses its &lt;CODE&gt;csv&lt;/CODE&gt; plugin for output. The &lt;CODE&gt;csv&lt;/CODE&gt; files go in the &lt;CODE&gt;/var/collectd/csv&lt;/CODE&gt; folder. &lt;CODE&gt;Collectd&lt;/CODE&gt; then creates a single subfolder, named using &lt;CODE&gt;&amp;lt;hostname&amp;gt;&lt;/CODE&gt; (in this case, &lt;CODE&gt;sv3vm5b.etv.lab&lt;/CODE&gt; ). There are then a bunch of subfolders for the various metrics: &lt;CODE&gt;cpu-0, cpu-1, cpu-2, cpu-3, df, disk-vda, disk-vda1, disk-vda2, interface, irq, load, memory, processes, processes-all, swap, tcp-conns-22-local, tcp-conns-111-local, tcp-conns-698-local, tcp-conns-2207-local, tcp-conns-2208-local, tcp-conns-8089-local, uptime&lt;/CODE&gt; . The &lt;CODE&gt;cpu-*&lt;/CODE&gt; folders are tracking several cpu metrics ( &lt;CODE&gt;idle, interrupt, nice, softirq, steal, system, user, wait&lt;/CODE&gt; ). The first metric (CPU idle time) generates daily files, e.g. &lt;CODE&gt;cpu-idle-2017-12-12, cpu-idle-2017-12-13&lt;/CODE&gt; , etc. This pattern is the same for each metric. The contents of &lt;CODE&gt;cpu-idle-&amp;lt;date&amp;gt;&lt;/CODE&gt; are:&lt;/P&gt;

&lt;BLOCKQUOTE&gt;
&lt;P&gt;epoch,value&lt;BR /&gt;
1513025715,491259&lt;BR /&gt;
1513025725,492242&lt;BR /&gt;
 ...&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;

&lt;P&gt;Again, this pattern is the same for the other files: a header line listing the fields (although the names are pretty generic), then regular measurements consisting of a Unix timestamp followed by one to three integer or floating-point values. What the &lt;CODE&gt;collectd&lt;/CODE&gt; input plugins measure is documented on the &lt;A href="https://collectd.org/wiki/index.php/Plugin:CPU" title="collectd wiki" target="_blank"&gt;collectd wiki&lt;/A&gt;.&lt;/P&gt;
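For reference, the sample rows above parse cleanly with a generic CSV reader, and the first column is indeed a Unix epoch (which is why a &lt;CODE&gt;%s&lt;/CODE&gt; timestamp format comes up later in the thread); a quick Python sketch using the two rows quoted above:

```python
import csv
import io
from datetime import datetime, timezone

# Sample lines as they appear in a collectd cpu-idle file;
# the header line is "epoch,value".
sample = "epoch,value\n1513025715,491259\n1513025725,492242\n"

reader = csv.DictReader(io.StringIO(sample))
rows = [(int(r["epoch"]), int(r["value"])) for r in reader]

# The first column is a Unix timestamp (seconds since the epoch).
first_ts = datetime.fromtimestamp(rows[0][0], tz=timezone.utc)
print(rows)
print(first_ts.isoformat())
```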

&lt;P&gt;In &lt;CODE&gt;collectd JSON&lt;/CODE&gt; or &lt;CODE&gt;Graphite&lt;/CODE&gt; mode, the Splunk source type is &lt;CODE&gt;linux:collectd:http:json&lt;/CODE&gt; or &lt;CODE&gt;linux:collectd:graphite&lt;/CODE&gt;, the event type is &lt;CODE&gt;linux_collectd_cpu&lt;/CODE&gt;, and the data model is "ITSI OS Model Performance.CPU". Splunk_TA_linux's &lt;CODE&gt;eventtypes.conf&lt;/CODE&gt; ties &lt;CODE&gt;linux_collectd_cpu&lt;/CODE&gt; to the two source types, so this gives rise to a first question: will Splunk_TA_linux's &lt;CODE&gt;eventtypes.conf&lt;/CODE&gt; need tweaking?&lt;/P&gt;

&lt;P&gt;Assuming I set the forwarder to monitoring &lt;CODE&gt;/var/collectd/csv/*/cpu-*/cpu-idle-*&lt;/CODE&gt; (can I specify paths using wildcards like that?), I could then set the source type for those daily files as a custom type. The process would be repeated for the various other &lt;CODE&gt;collectd&lt;/CODE&gt; files and folders, resulting in a slew of custom source types.&lt;/P&gt;
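On the wildcard question: Splunk &lt;CODE&gt;monitor://&lt;/CODE&gt; inputs do accept &lt;CODE&gt;*&lt;/CODE&gt; in path segments (and &lt;CODE&gt;...&lt;/CODE&gt; for recursive matching). A hypothetical &lt;CODE&gt;inputs.conf&lt;/CODE&gt; stanza for the forwarder, using the path and sourcetype name from this post (its placement under an app's &lt;CODE&gt;local/&lt;/CODE&gt; directory is assumed):

```ini
# Hypothetical stanza; * matches within a single path segment,
# ... would match across directory levels.
[monitor:///var/collectd/csv/*/cpu-*/cpu-idle-*]
sourcetype = collectd_csv_cpu_idle
disabled = false
```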

&lt;P&gt;source type: &lt;CODE&gt;collectd_csv_cpu_idle&lt;/CODE&gt;&lt;BR /&gt;
dest app: &lt;CODE&gt;Search &amp;amp; Reporting&lt;/CODE&gt; (should this be &lt;CODE&gt;Splunk_TA_linux&lt;/CODE&gt; ?)&lt;BR /&gt;
category: &lt;CODE&gt;Metrics&lt;/CODE&gt;&lt;BR /&gt;
indexed extractions: &lt;CODE&gt;csv&lt;/CODE&gt;&lt;BR /&gt;
timestamp: &lt;CODE&gt;auto&lt;/CODE&gt; (this will recognise a Unix timestamp, right?)&lt;BR /&gt;
field delimiter: &lt;CODE&gt;comma&lt;/CODE&gt;&lt;BR /&gt;
quote character: &lt;CODE&gt;double quote&lt;/CODE&gt; (unused)&lt;BR /&gt;
File preamble: &lt;CODE&gt;^epoch,value$&lt;/CODE&gt;&lt;BR /&gt;
Field names: &lt;CODE&gt;custom&lt;/CODE&gt;&lt;/P&gt;

&lt;P&gt;…and that’s where I’m stumped. This expects a comma-separated list of field names. Is the first one &lt;CODE&gt;_time&lt;/CODE&gt; or is that assumed? The “ITSI OS Model Performance.CPU” &lt;A href="https://docs.splunk.com/Documentation/ITSI/3.0.0/IModules/OSModuledatamodelreferencetable" title="ITSI OS Model Performance.CPU" target="_blank"&gt;documentation&lt;/A&gt; has no fields for the jiffy counts ( &lt;CODE&gt;cpu-idle, -interrupt, -nice, -softirq, -steal, -system, -user, -wait&lt;/CODE&gt; are reporting the number of jiffies spent in each of the possible CPU states, respectively &lt;CODE&gt;idle, IRQ, nice, softIRQ, steal, system, user, wait-IO&lt;/CODE&gt; ) but does have &lt;CODE&gt;cpu_time&lt;/CODE&gt; and &lt;CODE&gt;cpu_user_percent&lt;/CODE&gt; fields. Isn’t there supposed to be a correspondence? Is &lt;CODE&gt;Splunk_TA_linux&lt;/CODE&gt; further transforming the &lt;CODE&gt;collectd&lt;/CODE&gt; inputs to fit them to the data models, so that I need more than just &lt;CODE&gt;INDEXED_EXTRACTIONS&lt;/CODE&gt; ? And what about those fields that can only be extracted from the source paths, like the host ( &lt;CODE&gt;sv3vm5b.etv.lab&lt;/CODE&gt; ) and number of CPUs, for instance?&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 17:15:58 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342194#M41295</guid>
      <dc:creator>DUThibault</dc:creator>
      <dc:date>2020-09-29T17:15:58Z</dc:date>
    </item>
    <item>
      <title>Re: Importing collectd csv data for consumption by Splunk_TA_linux (3412)</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342195#M41296</link>
      <description>&lt;P&gt;Some progress.&lt;/P&gt;

&lt;P&gt;source type: &lt;CODE&gt;collectd_csv_cpu_idle&lt;/CODE&gt;&lt;BR /&gt;
dest app: &lt;CODE&gt;Search &amp;amp; Reporting&lt;/CODE&gt;&lt;BR /&gt;
category: &lt;CODE&gt;Custom&lt;/CODE&gt;          (would &lt;CODE&gt;Metrics&lt;/CODE&gt; be better?)&lt;BR /&gt;
indexed extractions: &lt;CODE&gt;csv&lt;/CODE&gt;&lt;BR /&gt;
timestamp:&lt;BR /&gt;
extraction: &lt;CODE&gt;Advanced&lt;/CODE&gt;&lt;BR /&gt;
time zone: &lt;CODE&gt;auto&lt;/CODE&gt;&lt;BR /&gt;
timestamp format: &lt;CODE&gt;%s&lt;/CODE&gt;&lt;BR /&gt;
timestamp fields: (blank)&lt;BR /&gt;
Delimited settings:&lt;BR /&gt;
field delimiter: comma&lt;BR /&gt;
quote character: double quote (unused)&lt;BR /&gt;
File preamble: (blank)&lt;BR /&gt;
Field names: &lt;CODE&gt;Line...&lt;/CODE&gt;&lt;BR /&gt;
Field names on line number: &lt;CODE&gt;1&lt;/CODE&gt;&lt;BR /&gt;
Advanced:&lt;BR /&gt;
SHOULD_LINEMERGE: &lt;CODE&gt;false&lt;/CODE&gt;      (was &lt;CODE&gt;true&lt;/CODE&gt; by default but since csv and collectd_http use &lt;CODE&gt;false&lt;/CODE&gt; this makes more sense)&lt;/P&gt;

&lt;P&gt;Then defined some extractions and transformations.&lt;/P&gt;

&lt;P&gt;REPORT-COLLECTD-CSV-CPU-IDLE transformation&lt;BR /&gt;
type: &lt;CODE&gt;delimiter-based&lt;/CODE&gt;&lt;BR /&gt;
delimiters: &lt;CODE&gt;","&lt;/CODE&gt;&lt;BR /&gt;
field list: &lt;CODE&gt;"unix_timestamp","cpu_idle_jiffies"&lt;/CODE&gt;&lt;BR /&gt;
source key: &lt;CODE&gt;_raw&lt;/CODE&gt;&lt;/P&gt;

&lt;P&gt;TRANSFORM-COLLECTD-CSV-CPU-NUMBER transformation&lt;BR /&gt;
type: &lt;CODE&gt;regex-based&lt;/CODE&gt;&lt;BR /&gt;
regular expression: &lt;CODE&gt;^.*/cpu-([0-9]+)/&lt;/CODE&gt;&lt;BR /&gt;
format: &lt;CODE&gt;cpu::$1&lt;/CODE&gt;&lt;BR /&gt;
source key: &lt;CODE&gt;source&lt;/CODE&gt;&lt;/P&gt;
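As a sanity check, the regular expression above does capture the CPU number from a source path of the shape described in the first post; a Python sketch (the sample path is hypothetical, modeled on the directory layout given earlier):

```python
import re

# Same pattern as TRANSFORM-COLLECTD-CSV-CPU-NUMBER: greedy prefix,
# then a "cpu-" directory whose numeric suffix is captured.
pattern = re.compile(r"^.*/cpu-([0-9]+)/")

# Hypothetical source path following the layout from the first post.
source = "/var/collectd/csv/sv3vm5b.etv.lab/cpu-2/cpu-idle-2017-12-12"
match = pattern.match(source)
cpu = match.group(1) if match else None
print(cpu)  # the captured CPU number, as a string
```

Note that "cpu-idle" in the filename does not match, because the character after its "cpu-" is not a digit; only the "cpu-2" directory qualifies.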

&lt;P&gt;collectd_csv_cpu_idle : REPORT-COLLECTD-CSV-CPU-IDLE extraction&lt;BR /&gt;
extraction/transform: &lt;CODE&gt;REPORT-COLLECTD-CSV-CPU-IDLE&lt;/CODE&gt;&lt;/P&gt;

&lt;P&gt;collectd_csv_cpu_idle : REPORT-COLLECTD-CSV-CPU-NUMBER extraction&lt;BR /&gt;
extraction/transform: &lt;CODE&gt;TRANSFORM-COLLECTD-CSV-CPU-NUMBER&lt;/CODE&gt;&lt;/P&gt;

&lt;P&gt;The collectd file header was still getting through, so based on answer 586952 I've tried copying the Splunk instance's &lt;CODE&gt;/opt/splunk/etc/apps/search/local/props.conf&lt;/CODE&gt; and &lt;CODE&gt;transforms.conf&lt;/CODE&gt; to the universal forwarder's &lt;CODE&gt;/opt/splunkforwarder/etc/apps/_server_app_&amp;lt;server class&amp;gt;/local/&lt;/CODE&gt;. Since collectd generates new files every day, we'll know tomorrow if this has gotten rid of the headers being read as data (an event whose _raw string is "epoch,value"). I can readily extend this pattern of sourcetypes, transformations, and extractions to map the rest of the collectd data (&lt;CODE&gt;df&lt;/CODE&gt;, &lt;CODE&gt;interface&lt;/CODE&gt;, &lt;CODE&gt;irq&lt;/CODE&gt;, and so on).&lt;/P&gt;

&lt;P&gt;Now the problem is how to map this into the CIM. According to &lt;A href="http://docs.splunk.com/Documentation/AddOns/released/Linux/Configure2" target="_blank"&gt;http://docs.splunk.com/Documentation/AddOns/released/Linux/Configure2&lt;/A&gt; for instance, the Splunk Add-on for Linux expects the sourcetype &lt;CODE&gt;linux:collectd:http:json&lt;/CODE&gt; but this does not appear in my list of sourcetypes, so I can't even inspect it to know what's in it.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 17:42:47 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342195#M41296</guid>
      <dc:creator>DUThibault</dc:creator>
      <dc:date>2020-09-29T17:42:47Z</dc:date>
    </item>
    <item>
      <title>Re: Importing collectd csv data for consumption by Splunk_TA_linux (3412)</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342196#M41297</link>
      <description>&lt;P&gt;Trying a different tack. I saw in &lt;A href="https://answers.splunk.com/answers/593409/transformsconf-wont-let-me-change-the-sourcetype.html" target="_blank"&gt;https://answers.splunk.com/answers/593409/transformsconf-wont-let-me-change-the-sourcetype.html&lt;/A&gt; that I can change the sourcetype of events, so I figured I would use the csv source (e.g. &lt;CODE&gt;.../cpu-*/cpu-nice-*&lt;/CODE&gt;), transform its _raw data into &lt;CODE&gt;linux:collectd:graphite&lt;/CODE&gt; format, and switch its sourcetype from &lt;CODE&gt;collectd_csv_cpu_nice&lt;/CODE&gt; to &lt;CODE&gt;linux:collectd:graphite&lt;/CODE&gt;.&lt;/P&gt;

&lt;P&gt;But it seems I failed, as the Splunk instance continues to receive just &lt;CODE&gt;collectd_csv_cpu_nice&lt;/CODE&gt; data in the untransformed format.&lt;/P&gt;

&lt;P&gt;To be clear, the csv files have lines like this:&lt;/P&gt;

&lt;P&gt;&amp;lt;unix_timestamp&amp;gt;,&amp;lt;cpu_nice_jiffies&amp;gt;&lt;/P&gt;

&lt;P&gt;whereas linux:collectd:graphite has lines like this:&lt;/P&gt;

&lt;P&gt;&amp;lt;host&amp;gt;.cpu-&amp;lt;cpu&amp;gt;.cpu-nice.value &amp;lt;cpu_nice_jiffies&amp;gt; &amp;lt;unix_timestamp&amp;gt;&lt;/P&gt;

&lt;P&gt;(Actually it expects percentages in floating point, but my old system (collectd 4.10) can supply only integer jiffy counts. I'm sure &lt;CODE&gt;Splunk_TA_linux&lt;/CODE&gt; won't mind...much.)&lt;/P&gt;
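The intended rewrite can be sketched outside Splunk to make the target shape concrete; a hypothetical Python rendering (the function and its names are illustrative only; in the real transform the host and CPU number come from the source path, not the csv line):

```python
# Sketch of the rewrite the transform is meant to perform: turn a csv
# line of the form "unix_timestamp,cpu_nice_jiffies" into a
# linux:collectd:graphite line "host.cpu-N.cpu-nice.value jiffies ts".
def csv_to_graphite(raw_line: str, host: str, cpu: str, metric: str = "cpu-nice") -> str:
    ts, value = raw_line.strip().split(",")
    return f"{host}.cpu-{cpu}.{metric}.value {value} {ts}"

# Using the host name and a sample row from earlier in the thread.
line = csv_to_graphite("1513025715,491259", host="sv3vm5b.etv.lab", cpu="0")
print(line)
```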

&lt;P&gt;So I added to props.conf and transforms.conf on the Splunk instance and on the Forwarder:&lt;/P&gt;

&lt;P&gt;props.conf&lt;/P&gt;

&lt;P&gt;[collectd_csv_cpu_nice]&lt;BR /&gt;
DATETIME_CONFIG =&lt;BR /&gt;
HEADER_FIELD_LINE_NUMBER = 1&lt;BR /&gt;
INDEXED_EXTRACTIONS = csv&lt;BR /&gt;
NO_BINARY_CHECK = true&lt;BR /&gt;
SHOULD_LINEMERGE = false&lt;BR /&gt;
TIME_FORMAT = %s&lt;BR /&gt;
category = Metrics&lt;BR /&gt;
description = collectd CSV cpu-nice metric&lt;BR /&gt;
disabled = false&lt;BR /&gt;
pulldown_type = 1&lt;BR /&gt;
REPORT-COLLECTD-CSV-CPU-NUMBER = TRANSFORM-COLLECTD-CSV-CPU-NUMBER&lt;BR /&gt;
REPORT-COLLECTD-CSV-CPU-NICE = REPORT-COLLECTD-CSV-CPU-NICE&lt;BR /&gt;
REPORT-COLLECTD-CSV-CPU-NICE-PAYLOAD = TRANSFORM-COLLECTD-CSV-CPU-NICE-PAYLOAD&lt;BR /&gt;
REPORT-COLLECTD-CSV-CPU-NICE-SOURCETYPE = TRANSFORM-COLLECTD-CSV-CPU-NICE-SOURCETYPE&lt;/P&gt;

&lt;P&gt;transforms.conf&lt;/P&gt;

&lt;P&gt;&amp;amp;num; Extracts the CPU number from the source's enclosing directory name&lt;BR /&gt;
[TRANSFORM-COLLECTD-CSV-CPU-NUMBER]&lt;BR /&gt;
FORMAT = cpu::$1&lt;BR /&gt;
REGEX = ^.*/cpu-([0-9]+)/&lt;BR /&gt;
SOURCE_KEY = source&lt;/P&gt;

&lt;P&gt;&amp;amp;num; Overall input format&lt;BR /&gt;
[REPORT-COLLECTD-CSV-CPU-NICE]&lt;BR /&gt;
DELIMS = ","&lt;BR /&gt;
FIELDS = "unix_timestamp","cpu_nice_jiffies"&lt;/P&gt;

&lt;P&gt;&amp;amp;num; Rewrites the _raw line to conform to linux:collectd:graphite format&lt;BR /&gt;
[TRANSFORM-COLLECTD-CSV-CPU-NICE-PAYLOAD]&lt;BR /&gt;
REGEX = (.*?)&lt;BR /&gt;
FORMAT = _raw::$host.cpu-$cpu.cpu-idle.value $cpu_nice_jiffies $unix_timestamp&lt;/P&gt;

&lt;P&gt;&amp;amp;num; Changes the sourcetype to linux:collectd:graphite&lt;BR /&gt;
[TRANSFORM-COLLECTD-CSV-CPU-NICE-SOURCETYPE]&lt;BR /&gt;
DEST_KEY = MetaData:Sourcetype&lt;BR /&gt;
REGEX = (.*?)&lt;BR /&gt;
FORMAT = sourcetype::linux:collectd:graphite&lt;/P&gt;

&lt;P&gt;I probably wrote the TRANSFORM-COLLECTD-CSV-CPU-NICE-PAYLOAD FORMAT line all wrong. Help?&lt;/P&gt;</description>
      <pubDate>Tue, 29 Sep 2020 17:46:09 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342196#M41297</guid>
      <dc:creator>DUThibault</dc:creator>
      <dc:date>2020-09-29T17:46:09Z</dc:date>
    </item>
    <item>
      <title>Re: Importing collectd csv data for consumption by Splunk_TA_linux (3412)</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342197#M41298</link>
      <description>&lt;P&gt;See &lt;A href="https://answers.splunk.com/answers/615924/"&gt;https://answers.splunk.com/answers/615924/&lt;/A&gt; for the rest of the solution. At this point I can get the collectd csv data into Splunk as sourcetype &lt;CODE&gt;linux:collectd:graphite&lt;/CODE&gt;; my remaining problems have to do with collectd itself.&lt;/P&gt;</description>
      <pubDate>Mon, 05 Feb 2018 15:23:06 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342197#M41298</guid>
      <dc:creator>DUThibault</dc:creator>
      <dc:date>2018-02-05T15:23:06Z</dc:date>
    </item>
    <item>
      <title>Re: Importing collectd csv data for consumption by Splunk_TA_linux (3412)</title>
      <link>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342198#M41299</link>
      <description>&lt;P&gt;I consider this solved now. I use the &lt;CODE&gt;csv&lt;/CODE&gt; plugin to write the metrics to a local directory (cleaned by a weekly &lt;CODE&gt;cron&lt;/CODE&gt; job that deletes older files), and I have a Splunk Universal Forwarder massage the events into the &lt;CODE&gt;linux:collectd:graphite&lt;/CODE&gt; format before sending them to the indexer/search head as such. Contact me for the details.&lt;/P&gt;</description>
      <pubDate>Wed, 07 Feb 2018 17:21:29 GMT</pubDate>
      <guid>https://community.splunk.com/t5/All-Apps-and-Add-ons/Importing-collectd-csv-data-for-consumption-by-Splunk-TA-linux/m-p/342198#M41299</guid>
      <dc:creator>DUThibault</dc:creator>
      <dc:date>2018-02-07T17:21:29Z</dc:date>
    </item>
  </channel>
</rss>

