All Apps and Add-ons

In the Splunk App for Infrastructure, can you use existing universal forwarders without running the script and reinstalling on all of our servers?

omprakash9998
Path Finder

Hi,

We have a Splunk environment with universal forwarders already installed on our Windows servers. We want to try the Splunk App for Infrastructure. Can we use the existing universal forwarders with the Splunk App for Infrastructure without having to run the script and reinstall on all of our servers?

Thanks.

1 Solution

dagarwal_splunk
Splunk Employee

You need to set up inputs.conf (to add all the metrics and log data you want to collect) and outputs.conf (to send the data to your SAI instance) on the existing universal forwarders.

Here is a useful link:
http://docs.splunk.com/Documentation/InfraApp/1.2.2/Admin/ManualInstallWindowsUF

Something similar to this :
https://answers.splunk.com/answers/699711/can-you-help-me-use-the-splunk-app-for-infrastruct.html#an...
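For reference, the forwarder-side change is plain .conf edits rather than a reinstall. A minimal sketch follows; the server name, port, and Perfmon counter values are placeholders, and the linked docs list the exact stanzas SAI expects (the em_metrics index is the one SAI uses for metrics):

```ini
# outputs.conf on the existing universal forwarder:
# send data to the SAI instance's receiving port (9997 is the usual default)
[tcpout:sai]
server = sai-instance.example.com:9997

# inputs.conf on the same forwarder:
# collect a Windows performance counter into SAI's metrics index
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = *
interval = 60
index = em_metrics
```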


qhmassc
Explorer

Here is the curl command result:

{"text":"Server is busy","code":9,"invalid-event-number":0}


qhmassc
Explorer

I didn't see any indexing error like "metric event not indexed", but I do see the following parsing error:
01-07-2019 17:18:04.592 -0500 ERROR HttpInputDataHandler - Parsing error : Server is busy


dagarwal_splunk
Splunk Employee

qhmassc
Explorer

Thanks, this fixed the issue!

Thanks again!


dagarwal_splunk
Splunk Employee

Great! You can add your dimensions back now if you want.


dagarwal_splunk
Splunk Employee

It seems you put this inside the Plugin write_splunk block:
Dimension "entity_type:Linux_Host"
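For comparison, the SAI docs show Dimension as an option of the write_splunk plugin block itself, along these lines (the server and token values here are placeholders):

```
<Plugin write_splunk>
    server "sai-instance.example.com"
    port "8088"
    token "your-hec-token-value"
    ssl true
    verifyssl false
    Dimension "entity_type:Linux_Host"
</Plugin>
```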


dagarwal_splunk
Splunk Employee

The formatting got messed up when you pasted your collectd.conf file here.


qhmassc
Explorer

Check collectd again:
1. collectd service is running? Yes.
2. Any errors in collectd.log file? No errors.
3. Plugin write_splunk is there in collectd.conf file? Yes:

       <Plugin write_splunk>
           server "ServerIpAddress"
           port "8088"
           token "tokenValue_not_the_name_of_the_token"
           ssl true
           verifyssl false
       </Plugin>

       Dimension "entity_type:Linux_Host"

4. The server (SAI server), token, and port are correct for write_splunk in collectd.conf file? Yes.

Check SAI again:
1. Add-on for Infra installed? Yes.
2. Token used by collectd is enabled and uses em_metrics for sourcetype and index? Yes.

qhmassc
Explorer

Here is the Customization for Splunk section:

       <Plugin write_splunk>
           server "100.111.111.111"
           port "8088"
           token "xxxxxxxxxxxxxxxxxxxxxxxxxx"
           ssl true
           verifyssl false
       </Plugin>

dagarwal_splunk
Splunk Employee

What about the collectd log file? Also, what OS version are you using?


qhmassc
Explorer

Linux version: 2.6.32-642.15.1.el6.x86_64


dagarwal_splunk
Splunk Employee

Try this search and see if it returns anything:

| mstats avg(_value) WHERE index=em_metrics AND metric_name=cpu.* by host


dagarwal_splunk
Splunk Employee

Also, what are the sourcetype and index for your HEC token? Check the HEC token on your Splunk instance that is used by collectd.

Make sure both are "em_metrics".
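For context, what arrives on that token is HEC metric-format JSON, roughly like this (the host and values below are made up). The sourcetype and index are not in the payload; they come from the token's settings, which is why the token itself must default to em_metrics:

```json
{
  "time": 1546899484,
  "host": "my-linux-host",
  "event": "metric",
  "fields": {
    "metric_name": "cpu.idle",
    "_value": 97.5,
    "entity_type": "Linux_Host"
  }
}
```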


dagarwal_splunk
Splunk Employee

https://docs.splunk.com/Documentation/InfraApp/1.2.2/Install/Install

See step 5. You can use Splunk Web. What token are you using right now in your collectd.conf file?


qhmassc
Explorer

I missed the sourcetype; it is set to em_metrics now. Should I restart the service?


qhmassc
Explorer

Only one token was configured on the Splunk server.


dagarwal_splunk
Splunk Employee

It should work now. Can you try the search again?


dagarwal_splunk
Splunk Employee

You might need to restart.


qhmassc
Explorer

I disabled the old token, created a new one, and restarted the server. Then I updated the token in the agent's collectd.conf to the newly created one and restarted collectd (no errors) and the UF, but the search still returns nothing.


dagarwal_splunk
Splunk Employee

Let's try to debug.

Check collectd again:
1. collectd service is running.
2. No errors in collectd.log file.
3. Plugin write_splunk is there in collectd.conf file.
4. The server (SAI server), token, and port are correct for write_splunk in collectd.conf file.

Check SAI again:
1. Add-on for Infra installed.
2. Token used by collectd is enabled and uses em_metrics for sourcetype and index.


qhmassc
Explorer

    # Config file for collectd(1).
    #
    # Please read collectd.conf(5) for a list of options.
    # http://collectd.org/

    ##############################################################################
    # Global
    #----------------------------------------------------------------------------
    # Global settings for the daemon.
    ##############################################################################

    Hostname "xxxxxxx"
    FQDNLookup false

    BaseDir "/var/lib/collectd"
    PIDFile "/var/run/collectd.pid"
    PluginDir "/usr/lib64/collectd"
    TypesDB "/usr/share/collectd/types.db"

    #----------------------------------------------------------------------------
    # When enabled, plugins are loaded automatically with the default options
    # when an appropriate <Plugin ...> block is encountered.
    # Disabled by default.
    #----------------------------------------------------------------------------
    AutoLoadPlugin false

    #----------------------------------------------------------------------------
    # When enabled, internal statistics are collected, using "collectd" as the
    # plugin name.
    # Disabled by default.
    #----------------------------------------------------------------------------
    CollectInternalStats false

    #----------------------------------------------------------------------------
    # Interval at which to query values. This may be overwritten on a per-plugin
    # base by using the 'Interval' option of the LoadPlugin block:
    #     Interval 60
    #----------------------------------------------------------------------------
    Interval 60

    MaxReadInterval 86400
    Timeout 2
    ReadThreads 5
    WriteThreads 5

    # Limit the size of the write queue. Default is no limit. Setting up a limit
    # is recommended for servers handling a high volume of traffic.
    WriteQueueLimitHigh 1000000
    WriteQueueLimitLow 800000

    ##############################################################################
    # Logging
    #----------------------------------------------------------------------------
    # Plugins which provide logging functions should be loaded first, so log
    # messages generated when loading or configuring other plugins can be
    # accessed.
    ##############################################################################
    LoadPlugin syslog
    LoadPlugin logfile

    FlushInterval 30

    ##############################################################################
    # LoadPlugin section
    #----------------------------------------------------------------------------
    # Lines beginning with a single `#' belong to plugins which have been built
    # but are disabled by default.
    #
    # Lines beginning with `##' belong to plugins which have not been built due
    # to missing dependencies or because they have been deactivated explicitly.
    ##############################################################################

    #LoadPlugin csv

    LoadPlugin cpu
    LoadPlugin memory
    LoadPlugin df
    LoadPlugin load
    LoadPlugin disk
    LoadPlugin interface

    ##############################################################################
    # Plugin configuration
    #----------------------------------------------------------------------------
    # In this section configuration stubs for each plugin are provided. A
    # description of those options is available in the collectd.conf(5) manual
    # page.
    ##############################################################################

    <Plugin syslog>
        LogLevel info
    </Plugin>

    <Plugin logfile>
        LogLevel notice
        File "/etc/collectd/collectd.log"
        Timestamp true
        PrintSeverity true
    </Plugin>

    LogLevel info

    <Plugin cpu>
        ReportByCpu true
        ReportByState true
        ValuesPercentage true
    </Plugin>

    <Plugin memory>
        ValuesAbsolute false
        ValuesPercentage true
    </Plugin>

    <Plugin df>
        FSType "ext2"
        FSType "ext3"
        FSType "ext4"
        FSType "XFS"
        FSType "rootfs"
        FSType "overlay"
        FSType "hfs"
        ReportByDevice true
        ValuesAbsolute false
        ValuesPercentage true
        IgnoreSelected false
    </Plugin>

    <Plugin load>
        ReportRelative true
    </Plugin>

    <Plugin disk>
        Disk ""
        IgnoreSelected true
        UdevNameAttr "DEVNAME"
    </Plugin>

    <Plugin interface>
        IgnoreSelected true
    </Plugin>

    ##############################################################################
    # Customization for Splunk
    #----------------------------------------------------------------------------
    # This plugin sends all metrics data from other plugins to Splunk via HEC.
    ##############################################################################
    <Plugin write_splunk>
        server "1111111111111111111"
        port "8088"
        token "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        ssl true
        verifyssl false
    </Plugin>

    Dimension "entity_type:Linux_Host"
