All Apps and Add-ons

Can the Splunk on Splunk app query a heavy forwarder?

Splunk Employee

I have installed the Splunk on Splunk app on my search-head, from where it can reach my search peers. I would also like it to be able to look at one of my heavy forwarders. How can I achieve this?

1 Solution

Splunk Employee

As of SoS 2.2, please refer to this Splunk Answer to configure SoS to monitor the resource usage of a heavy forwarder.

There is one additional step: ensure that your heavy forwarder forwards its _internal events to the indexers by adding the following two lines to $SPLUNK_HOME/etc/apps/sos/local/outputs.conf:

[tcpout]
forwardedindex.3.whitelist = _internal
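For context, a complete outputs.conf on the heavy forwarder will also contain a tcpout target group pointing at your indexers. A minimal sketch, where the group name and indexer address are placeholders to replace with your own (the filter number 3 assumes it comes after the default forwardedindex filters shipped in the system outputs.conf):

[tcpout]
defaultGroup = my_indexers
# Allow the _internal index through the forwarded-index filters.
forwardedindex.3.whitelist = _internal

[tcpout:my_indexers]
# Placeholder: your indexer's receiving host and port.
server = indexer1.example.com:9997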

*/!\ THE INFORMATION BELOW IS OUTDATED AS OF SOS 2.2 /!\*

The key fact to understand here is that, as explained in the SoS deployment best practices, "SoS can only analyze and report on Splunk instances that it can reach by means of distributed search."

So, to have SoS query and report on a heavy forwarder, there are basically two options:

1 - Set up the HWF as a search peer of the search-head where SoS is installed. A word of warning: this means that all searches dispatched by the search-head will also be sent to the HWF. Normally, this shouldn't be much of a problem (the HWF's indexes are normally empty, after all), but the HWF's performance could suffer if the search-head runs a lot of scheduled searches, particularly real-time searches, as those spawn processes that run permanently.

2 - Install SoS on the HWF and set up selective indexing to keep all _internal events locally while forwarding the rest. This solution currently has my preference. If it's OK to run Splunkweb on the HWF and to keep its _internal logs there (that index should never get very large, as it is limited to 28 days of retention), it's the least invasive option.

To enable selective indexing for _internal only on the HWF, add the following configuration:

a) outputs.conf:

[indexAndForward]
index = true
# Indexing is turned on, but since "selectiveIndexing" is set on the next line, only specific events will be indexed.
selectiveIndexing = true
# Only index events tagged with the _INDEX_AND_FORWARD_ROUTING key in inputs.conf.

b) inputs.conf:

[monitor://$SPLUNK_HOME/var/log/splunk]
_INDEX_AND_FORWARD_ROUTING = local
# Tag events from this data input to be indexed locally.
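To verify that selective indexing is working, you can run a quick search and confirm that the _internal events are served by the HWF itself rather than by the downstream indexers. A sketch, where the host name is a placeholder for your HWF:

index=_internal host=my-hwf splunk_server=my-hwf | head 10

The splunk_server field restricts results to events returned by that specific instance, so hits here mean the events were indexed locally on the HWF.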

If you decide to enable any of the SoS-specific inputs, you'll have to do the same thing. Example for ps_sos.sh in $SPLUNK_HOME/etc/apps/sos/local/inputs.conf:

[script://./bin/ps_sos.sh]
disabled = 0
_INDEX_AND_FORWARD_ROUTING = local
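As a sanity check after enabling the scripted input, you can confirm that its events are being produced and indexed locally. A sketch, assuming the default source naming (the source path pattern is an assumption):

index=_internal source=*ps_sos.sh* | stats count by host, splunk_server

If the splunk_server column shows the HWF itself, the _INDEX_AND_FORWARD_ROUTING tagging is working as intended.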

In a future version of SoS (probably 2.2, but no promises yet), the "Server to query" top-level search control will be driven by a dynamic lookup to which you'll be able to add your own hosts manually. Even if those hosts (typically secondary search-heads and forwarders) are forwarding their _internal and _audit logs to the indexers, as long as those events can be queried by means of distributed search from the instance where SoS is installed, the vast majority of SoS views will work for those hosts. The exception will be views that rely on custom search commands that must execute on the actual remote host over distributed search (the Configuration Viewer, for example).


Splunk Employee

@dabbank : I assume that you went with the solution of installing SoS on the HWF itself and setting up local indexing for the ps_sos.sh scripted input. Since the data produced by this input counts against your license quota, you'll need to set up your HWF as a license slave or install a small license on it to avoid this problem.
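Making the HWF a license slave is a small server.conf change on the HWF, followed by a restart. A sketch, where the license master's host name is a placeholder for your own:

[license]
# Placeholder: management URI of your license master.
master_uri = https://license-master.example.com:8089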

Path Finder

Using SoS on a HWF works, except for the "Resource Usage" part. I always get the following license error instead of the expected results:
400 - Error in 'litsearch' command: Your Splunk licenses expired ...
