I need some help getting RRD data into Splunk.
Data Example:
1340655780: 9.2189559556e+05 2.6145535333e+06
1340658000: 9.8897729778e+05 2.3643315422e+06
1340660220: 6.0333832000e+05 2.3522330178e+06
1340662440: 1.5102271111e+06 2.6492235911e+06
The first column is epoch time, second is network octets out, third is network octets in.
What's the best way to get this data into Splunk and graph it?
Thanks in advance for your help.
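One common first step is to reshape fetch-style output like the sample above into a headered CSV that Splunk can monitor and chart directly. Here is a rough Python sketch, assuming the three columns are epoch time, octets out, and octets in as you describe (the column names are my own):

```python
import csv
import io

# Example "rrdtool fetch"-style lines: "epoch: octets_out octets_in"
raw = """\
1340655780: 9.2189559556e+05 2.6145535333e+06
1340658000: 9.8897729778e+05 2.3643315422e+06
"""

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["time", "octets_out", "octets_in"])  # assumed column meanings
for line in raw.splitlines():
    # Drop the trailing colon on the timestamp, then split on whitespace.
    ts, out_octets, in_octets = line.replace(":", "").split()
    writer.writerow([int(ts), float(out_octets), float(in_octets)])

print(out.getvalue())
```

With a header row like this, Splunk's built-in CSV handling can extract the fields at index time, and the epoch `time` column can drive the `_time` of each event.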
Another approach, particularly when collectd is monitoring a remote system on which a Splunk Universal Forwarder is installed, would be to select the CSV output plugin, and then have the forwarder monitor the selected DataDir.
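If you go the CSV route, a minimal sketch might look like the following; the directory path and sourcetype name are illustrative, not prescribed. In collectd.conf, enable the csv plugin and point DataDir somewhere the forwarder can read:

```
LoadPlugin csv
<Plugin csv>
  DataDir "/var/lib/collectd/csv"
  StoreRates true
</Plugin>
```

Then, in the Universal Forwarder's inputs.conf, monitor that directory:

```
[monitor:///var/lib/collectd/csv]
sourcetype = collectd_csv
disabled = false
```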
I'm facing a unique issue with collectd 5.8.1 on CentOS 7.6. Some of the collectd metric fields in the JSON have no values and/or no field/attribute names. This is causing Splunk to reject all events and ingest nothing, citing errors like the one below.
search peer idx-xyzmydomain.com has the following message: Metric value=
I've spent time trying to make head or tail of it, but I need an alternate way to ingest metrics by parsing selected fields from the collectd-generated CSV data. @DUThibault, could you please share the details of your implementation?
@smitra_splunk We faced a number of constraints that did not allow use of JSON as a transmission format; the older collectd we used also limited the plug-ins we could use, which meant a few data streams would be missing from those expected by the Splunk Add-on for Linux. This second constraint is of course not a problem if you're doing your own analysis of the data streams. We were also unable to use collectd's write_graphite plug-in. We ended up using collectd's write_csv to "log" the data locally, combined with a Universal Forwarder that processed the logs and sent the events with a simulated linux:collectd:graphite sourcetype.
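To illustrate the kind of transformation described above (not the poster's actual code), here is a hypothetical Python sketch that turns a write_csv record into a Graphite plaintext line ("metric.path value timestamp"), which is the wire format behind the linux:collectd:graphite sourcetype. The metric path and sample values are made up:

```python
def csv_to_graphite(metric_path: str, csv_line: str) -> str:
    """Convert a collectd write_csv data line ("epoch,value") into a
    Graphite plaintext line: "metric.path value epoch"."""
    epoch, value = csv_line.split(",")
    # collectd writes fractional epochs; Graphite expects integer seconds.
    return f"{metric_path} {value} {int(float(epoch))}"

print(csv_to_graphite("myhost.interface-eth0.if_octets.tx",
                      "1340655780.000,921895.59"))
# -> myhost.interface-eth0.if_octets.tx 921895.59 1340655780
```

In practice the metric path would be assembled from the write_csv directory and file names (host, plugin, plugin instance, type), which is where most of the real parsing effort goes.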
The Universal Forwarder uses a network connection to send its data, very much like write_http does, but offers several advantages despite its light footprint: it can tag metadata; it buffers, compresses and secures the data transfers; it can consolidate data; it can handle index-time transformations; and it can even do load balancing (when its data are being consumed by several Splunk indexers).
Now, your problem seems to be that collectd is sending empty JSON fields, so my first thought would be to check the collectd configuration. The transmission mode (HEC vs. HTTP vs. TCP vs. UDP) is extremely unlikely to be at fault here. Which collectd plug-ins are you using?
Your best bet is to use either rrddump (http://oss.oetiker.ch/rrdtool/doc/rrddump.en.html) or rrdxport (http://oss.oetiker.ch/rrdtool/doc/rrdxport.en.html) to write the data out to an XML text file, which Splunk can then easily monitor and ingest.
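For example (a sketch only; the data-source names `out` and `in` and the file names are assumptions, so check yours with `rrdtool info` first):

```
# Dump the entire RRD, all archives, to XML:
rrdtool dump network.rrd > network.xml

# Or export just a time range and the data sources you care about:
rrdtool xport --start 1340655780 --end 1340662440 \
  DEF:o=network.rrd:out:AVERAGE DEF:i=network.rrd:in:AVERAGE \
  XPORT:o:"octets out" XPORT:i:"octets in" > export.xml
```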
What would my source type be for this xml feed?