
Using the Splunk Add-on for Infrastructure with a working universal forwarder and Splunk Enterprise

juliennerocafor
New Member

Hello, I'm new to Splunk and still exploring how to use it. I successfully set up Splunk Enterprise and a Splunk Universal Forwarder on two separate Linux virtual machines. Now my goal is to collect monitoring metrics for CPU usage, etc. I've installed the App for Infrastructure and the Add-on for Infrastructure on the Splunk Enterprise VM. When adding entities, I can't run the generated Linux command because of firewall and Kaspersky restrictions, so I followed this instead: https://answers.splunk.com/answers/706010/in-the-splunk-app-for-infrastructure-can-you-use-e.html.

Instead of following the Windows guide, I followed the Linux one (https://docs.splunk.com/Documentation/InfraApp/1.2.2/Admin/ManageAgents). I've also added an inputs.conf and an outputs.conf in etc/apps/search/local of my Splunk forwarder directory. However, when I restart my UF, there are still no entities in my Splunk Enterprise app. Can you help me with this? Thank you in advance!

inputs.conf

[perfmon://CPU Load]
counters = % C1 Time;% C2 Time;% Idle Time;% Processor Time;% User Time;% Privileged Time;% Reserved Time;% Interrupt Time
instances = *
interval = 30
object = Processor
index = em_metrics
_meta = os::"Linux"

[perfmon://Physical Disk]
counters = % Disk Read Time;% Disk Write Time
instances = *
interval = 30
object = PhysicalDisk
index = em_metrics
_meta = os::"Linux"

[perfmon://Network Interface]
counters = Bytes Received/sec;Bytes Sent/sec;Packets Received/sec;Packets Sent/sec;Packets Received Errors;Packets Outbound Errors
instances = *
interval = 30
object = Network Interface
index = em_metrics
_meta = os::"Linux"

[perfmon://Available Memory]
counters = Cache Bytes;% Committed Bytes In Use;Page Reads/sec;Pages Input/sec;Pages Output/sec;Committed Bytes;Available Bytes
interval = 30
object = Memory
index = em_metrics
_meta = os::"Linux"

[perfmon://System]
counters = Processor Queue Length;Threads
instances = *
interval = 30
object = System
index = em_metrics
_meta = os::"Linux"

[perfmon://Process]
counters = % Processor Time;% User Time;% Privileged Time
instances = *
interval = 30
object = Process
index = em_metrics
_meta = os::"Linux"

[perfmon://Free Disk Space]
counters = Free Megabytes;% Free Space
instances = *
interval = 30
object = LogicalDisk
index = em_metrics
_meta = os::"Linux"

[monitor:///var/log/syslog]
disabled = false
sourcetype = syslog

[monitor:///var/log/daemon.log]
disabled = false
sourcetype = syslog

[monitor:///var/log/auth.log]
disabled = false
sourcetype = syslog

[monitor:///var/log/apache/access.log]
disabled = false
sourcetype = combined_access

[monitor:///var/log/apache/error.log]
disabled = false
sourcetype = apache_error

[monitor:///opt/splunkforwarder/var/log/splunk/*.log]
disabled = false
index = _internal

[monitor:///etc/collectd/collectd.log]
disabled = false
index = _internal

outputs.conf

[tcpout]
defaultGroup = splunk-app-infra-autolb-group

[tcpout:splunk-app-infra-autolb-group]
disabled = false
server = 192.168.56.110:9997

collectd.conf

#
# Config file for collectd(1).
# Please read collectd.conf(5) for a list of options.
# http://collectd.org/
#

##############################################################################
# Global                                                                     #
#----------------------------------------------------------------------------#
# Global settings for the daemon.                                            #
##############################################################################

Hostname    "192.168.56.109"
#FQDNLookup   true
#BaseDir     "/var/lib/collectd"
#PIDFile     "/var/run/collectd.pid"
#PluginDir   "/usr/lib64/collectd"
#TypesDB     "/usr/share/collectd/types.db"

#----------------------------------------------------------------------------#
# When enabled, plugins are loaded automatically with the default options    #
# when an appropriate <Plugin ...> block is encountered.                     #
# Disabled by default.                                                       #
#----------------------------------------------------------------------------#
#AutoLoadPlugin false

#----------------------------------------------------------------------------#
# When enabled, internal statistics are collected, using "collectd" as the   #
# plugin name.                                                               #
# Disabled by default.                                                       #
#----------------------------------------------------------------------------#
#CollectInternalStats false

#----------------------------------------------------------------------------#
# Interval at which to query values. This may be overwritten on a per-plugin #
# base by using the 'Interval' option of the LoadPlugin block:               #
#   <LoadPlugin foo>                                                         #
#       Interval 60                                                          #
#   </LoadPlugin>                                                            #
#----------------------------------------------------------------------------#
Interval     60

#MaxReadInterval 86400
#Timeout         2
#ReadThreads     5
#WriteThreads    5

# Limit the size of the write queue. Default is no limit. Setting up a limit is
# recommended for servers handling a high volume of traffic.
#WriteQueueLimitHigh 1000000
#WriteQueueLimitLow   800000

##############################################################################
# Logging                                                                    #
#----------------------------------------------------------------------------#
# Plugins which provide logging functions should be loaded first, so log     #
# messages generated when loading or configuring other plugins can be        #
# accessed.                                                                  #
##############################################################################

LoadPlugin syslog
LoadPlugin logfile
<LoadPlugin "write_splunk">
        FlushInterval 10
</LoadPlugin>

##############################################################################
# LoadPlugin section                                                         #
#----------------------------------------------------------------------------#
# Lines beginning with a single `#' belong to plugins which have been built  #
# but are disabled by default.                                               #
#                                                                            #
# Lines beginning with `##' belong to plugins which have not been built due  #
# to missing dependencies or because they have been deactivated explicitly.  #
##############################################################################

#LoadPlugin csv
LoadPlugin cpu
LoadPlugin memory
LoadPlugin df
LoadPlugin load
LoadPlugin disk
LoadPlugin interface

##############################################################################
# Plugin configuration                                                       #
#----------------------------------------------------------------------------#
# In this section configuration stubs for each plugin are provided. A desc-  #
# ription of those options is available in the collectd.conf(5) manual page. #
##############################################################################

<Plugin logfile>
    LogLevel info
    File "/etc/collectd/collectd.log"
    Timestamp true
    PrintSeverity true
</Plugin>

<Plugin syslog>
    LogLevel info
</Plugin>

<Plugin cpu>
    ReportByCpu false
    ReportByState true
    ValuesPercentage true
</Plugin>

<Plugin memory>
    ValuesAbsolute false
    ValuesPercentage true
</Plugin>

<Plugin df>
    FSType "ext2"
    FSType "ext3"
    FSType "ext4"
    FSType "XFS"
    FSType "rootfs"
    FSType "overlay"
    FSType "hfs"
    FSType "apfs"
    FSType "zfs"
    FSType "ufs"
    ReportByDevice true
    ValuesAbsolute false
    ValuesPercentage true
    IgnoreSelected false
</Plugin>

<Plugin load>
    ReportRelative true
</Plugin>

<Plugin disk>
    Disk ""
    IgnoreSelected true
    UdevNameAttr "DEVNAME"
</Plugin>

<Plugin interface>
    IgnoreSelected true
</Plugin>

<Plugin write_splunk>
    server "192.168.56.110"
    port "8088"
    token "SomeGUIDToken"
    ssl true
    verifyssl false
</Plugin>

# Update Hostname, <HEC SERVER> & <splunk app server> in the collectd.conf file above. You can also add dimensions to the write_splunk plugin as <Dimension "key:value"> (optional).
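
Once collectd.conf is updated, a quick way to validate the syntax and apply the change (a sketch assuming the config path above and a systemd-based distro):

sudo collectd -t -C /etc/collectd/collectd.conf   # parse the config and exit
sudo systemctl restart collectd                   # apply the change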

dagarwal_splunk
Splunk Employee

Hi,

You are mixing up Windows and Linux data collection.

The "perfmon" inputs on a UF are for Windows metrics only.
You need collectd installed for Linux metrics. Splunk UF only forwards logs for Linux machines. What version of collectd do you have?

Also, you don't need the SAI add-on on the UF.
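
For reference, a quick way to check the installed collectd version is a package query (both Debian- and RPM-style commands shown; adjust for your distro):

dpkg -s collectd | grep Version    # Debian/Ubuntu
rpm -q collectd                    # RHEL/CentOS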


vliggio
Communicator

What do you mean by "Splunk UF only forwards logs for Linux machines"? The UF on Windows certainly collects logs.


dagarwal_splunk
Splunk Employee

There are two types of data collected: logs and metrics.

Windows: Logs --> Splunk UF
Windows: Metrics --> Splunk UF (perfmon inputs.conf)

Linux: Logs --> Splunk UF (same as Windows)
Linux: Metrics --> collectd

Metrics are required for entity discovery in SAI.
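
If metrics are flowing, a search like this on the SE should return metric names (em_metrics is the index used in the inputs.conf posted above):

| mcatalog values(metric_name) WHERE index=em_metrics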


vliggio
Communicator

Ah, it wasn't clear what you meant.

In this case you are correct for the app in question: a metrics index is being used, and the metrics are sent by collectd to an HTTP Event Collector (so the path is Metrics -> collectd -> HEC). But if someone is still using the Splunk Add-on for Unix and Linux, then the UF collects metrics via scripted inputs (for non-Linux systems and older versions of Splunk that don't have metrics index capabilities).

For the OP: note that collectd has to be installed on any Linux version where it doesn't come natively installed before metrics can be collected.
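
Since write_splunk sends metrics over HEC, it's also worth confirming the HEC endpoint is reachable from the forwarder host. A quick check, using the IP and port from the posted collectd.conf (-k because verifyssl is false):

curl -k https://192.168.56.110:8088/services/collector/health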


dagarwal_splunk
Splunk Employee

If you are using the Splunk Add-on for Unix and Linux, the data has to be converted to metrics format using props.conf and transforms.conf and then indexed into a Splunk metrics index.
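
As a rough sketch of what that log-to-metrics conversion looks like (stanza and field names here are illustrative, not taken from the add-on; check the Splunk log-to-metrics documentation for the exact schema):

props.conf:
[my_unix_sourcetype]
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_unix_metrics

transforms.conf:
[metric-schema:extract_unix_metrics]
METRIC-SCHEMA-MEASURES = pctIdle, pctUser, pctSystem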


dagarwal_splunk
Splunk Employee

I meant that the Splunk UF does not forward metrics for Linux machines. It does forward both logs and metrics for Windows.


dagarwal_splunk
Splunk Employee

Where did you put the collectd.conf file that you mentioned?


vliggio
Communicator

The App for Infrastructure goes on the indexers, and the Add-on for Infrastructure goes on both the indexers and the UFs.

The most important command for debugging Splunk is btool. Learn it early and it will be your friend. Since Splunk combines many different config files, btool lets you see what Splunk is actually using as its final config. Try this:

/opt/splunkforwarder/bin/splunk btool inputs list --debug

That will show your actual inputs configuration (on a universal forwarder on a Linux box; substitute the installation location as needed on indexers and on Windows). Unfortunately, the configs you posted here don't mean much on their own, because Splunk might be getting configs from other directories which override your settings. Play with the command a bit and you'll see. Also read up on Splunk config file precedence here: https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/Wheretofindtheconfigurationfiles
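
For example, to narrow the output to just the monitor stanzas (btool accepts a stanza prefix as an argument):

/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug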

You do not want to put your inputs.conf in your search app directory; it'll get very confusing very fast. You should have the add-on directory in your splunkforwarder/etc/apps directory, and inside the add-on directory you'll see a default directory with an inputs.conf file. Create a local directory alongside the default directory, copy the inputs.conf from default into local, and edit it there.
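
Roughly, on the UF (the add-on directory name here is a placeholder; use whatever the add-on actually unpacks as):

cd /opt/splunkforwarder/etc/apps/<add-on_directory>
mkdir -p local
cp default/inputs.conf local/
# edit local/inputs.conf, then restart the forwarder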


juliennerocafor
New Member

Hello @vliggio,
I've used the command you gave me and it showed the host I'm actually accessing; it showed the hostname instead of the IP. When I tried to ping the hostname, I didn't get any response at all, so I just changed it to the host's IP address, since I can get a response from that.

Also, I added an inputs.conf file in the local directory of the add-on, although there's no existing inputs.conf file in the default directory. Is that okay?

Regards,
Rockie


vliggio
Communicator

Yes. Splunk combines all the files in all app directories (following the precedence rules I linked you to). That's why btool is so important: you can put multiple inputs.conf files in multiple places with conflicting settings, and Splunk has specific rules to determine which one it uses. You can use btool to look at any Splunk configuration; just substitute the config file name (i.e., outputs, inputs, indexes, etc.).

As for this App/Add-on combo (I haven't installed this specific release), I agree with gcusello: look at the documentation. It's not like most Splunk apps, which have an inputs.conf. Read the following page on how to enable data inputs:

https://docs.splunk.com/Documentation/InfraApp/2.0.3/Admin/AddData

Also, one minor correction: the Add-on should also be installed on the indexers (in conjunction with the App); both are needed for the App to function correctly.


juliennerocafor
New Member

Hello vliggio,
Apologies, but I just read your conversation with gcusello. I am now manually installing the add-ons instead of just downloading them from within the SE. Download links:

https://splunkbase.splunk.com/app/3975/
https://splunkbase.splunk.com/app/4217/

I am now following this: https://docs.splunk.com/Documentation/InfraApp/2.0.3/Admin/ManualInstalLinuxUF. However, it does not say anything about the collectd.conf file. In addition, should I create a separate UF for this installation, or is the existing working UF enough? When I installed the add-on on the existing UF, the folder 'splunk_app_infra_uf_config' was missing.

Regards,
Rockie


gcusello
SplunkTrust

Hi @juliennerocafor,
To debug your situation, start by trying the following (commands sketched after this list):

  • First, did you enable your Splunk Enterprise (SE) instance to receive logs from the Universal Forwarder (UF) on port 9997?
  • Then check the connection using telnet 192.168.56.110 9997, where I assume that is the address of the SE.
  • Then check whether you're receiving the Splunk UF's internal logs by running the search index=_internal host=UF_hostname.
  • Finally, did you install the Add-on on the UF?
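
As a concrete sketch of the first two checks (paths and credentials are placeholders):

/opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme   # on the SE, one-time
telnet 192.168.56.110 9997                                       # from the UF, test the route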

Ciao.
Giuseppe


juliennerocafor
New Member

Hello, gcusello.

  • Yes, I was already receiving forwarded logs on port 9997 even before I installed the add-ons.
  • I wasn't able to use the telnet command, although I can ping the SE from the UF.
  • There were results in my SE search when I ran that query.
  • I just installed the "Splunk Add-on for Infrastructure" on the SE using the 'Browse more apps' option on the homepage. Is the installation different on the UF?

Regards,
Rockie


gcusello
SplunkTrust

Hi @juliennerocafor,
If the telnet command is installed on your UF, you can run telnet 192.168.56.110 9997 to check whether the route between the UF and SE is open.
Anyway, if the search I suggested returns results, it means the connection between the UF and SE is established.
As described in the documentation, the Add-on must be installed on both the UF and the SE.
You can install the Add-on by untarring it into $SPLUNK_HOME/etc/apps and restarting Splunk.
After untarring and before restarting, check that every inputs.conf stanza has disabled = 0.
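
A minimal sketch of that install on the UF (the archive name is a placeholder for whatever you downloaded from Splunkbase):

cd /opt/splunkforwarder/etc/apps
tar -xzf /tmp/<add-on_package>.tgz
/opt/splunkforwarder/bin/splunk restart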

Ciao.
Giuseppe


juliennerocafor
New Member

Hello @gcusello,

I just installed the add-on on my UF and verified that the inputs.conf includes disabled = false. However, when I restart my SE, there are still no added entities. What else should I check? Should I also install the App for Infrastructure on my UF?

Thanks,
Rockie


gcusello
SplunkTrust

Hi @juliennerocafor,
No. Apps must be installed on the SE; Add-ons go on UFs and sometimes also on the SE.
Did you restart Splunk on the UF after installing the TA?
Every time you modify something on a system (including UFs) through configuration files (and TA installation is one of those cases), you have to restart Splunk on that system, not on the SE.
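
After restarting the UF, you can also confirm it is actually forwarding to the SE (this command prompts for the UF's admin credentials):

/opt/splunkforwarder/bin/splunk list forward-server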

Ciao.
Giuseppe


juliennerocafor
New Member

Hello @gcusello,
Yes, I have restarted it already, although there are still no entities connected.

Regards,
Rockie


gcusello
SplunkTrust

Which user is running Splunk on the UF? If it isn't root, check permissions.
Also, why did you share collectd.conf? If you're using a UF, you don't need it.

Ciao.
Giuseppe


juliennerocafor
New Member

I'm using root as the user. Oh, I'll just delete it then. I'll try to reinstall a new UF on my local machine and just add the inputs.conf and outputs.conf.


juliennerocafor
New Member

Hello @gcusello,
I've just reinstalled a new forwarder and a new enterprise instance.

  • Again, I was able to receive the logs from the UF.
  • I was also able to get results with the search you told me about a while ago: index=_internal host=UF_hostname.
  • I've installed the App for Infrastructure on the SE and installed the add-on on both the SE and the UF.
  • I also restarted after installation.
  • After that, I updated the inputs.conf file in my UF's /apps/search/local directory. I also put the outputs.conf file there.

Am I putting the conf files in the proper directory, or should they be in the SE's /apps/splunk_app_infrastructure/local directory?

Regards,
Rockie
