Best practices to deploy the S.o.S app in a distributed search environment

hexx
Splunk Employee

I have a Splunk deployment with one search-head and multiple indexers, and I would like to install the Splunk on Splunk app to analyze and troubleshoot problems on these instances. Where should I install the app? Are there other components that need to be installed and configured?

1 Solution

hexx
Splunk Employee

Starting with version 2.0, the Splunk on Splunk app (a.k.a. SoS) leverages distributed search to allow introspective analysis and troubleshooting of your deployment from a centralized location.

Let's cover the details of deploying the SoS app, FAQ-style:

- Where should SoS be installed?

The SoS app should be installed on your search-head.

- What if I have more than one search-head?

SoS can only analyze and report on Splunk instances that it can reach by means of distributed search. Unless you have a federating search-head for which the other search-heads are declared as search-peers, you should install SoS on each search-head that you wish to gain introspection on.

- What about search-head pooling? Any extra steps I need to take to install SoS on a search-head pool?

If you installed SoS from the user interface on one pool member, you will almost certainly need to manually restart Splunk on the other pool members in order for all of them to become aware of the app.
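
For example, on each of the other pool members this can be as simple as running the following from the command line (assuming a default $SPLUNK_HOME; adjust the path to your installation):

$SPLUNK_HOME/bin/splunk restart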

- Are there any requirements for SoS to work?

The SoS app depends on the UI module library of the Sideview Utils app. Make sure you install SVU (minimum required version: 1.17) prior to SoS.

- Does anything need to be installed on the search peers?

Optionally, you can install the SoS technology add-on on your Unix and Linux search peers. This add-on provides scripted inputs that track the resource usage (CPU, memory, file descriptors) of Splunk.

- Do I also need to install the SoS technology add-on on the search-head?

No, that would be redundant. The SoS app ships with the same data inputs as the technology add-on.

- What do I need to do for the SoS data inputs to track Splunk resource usage?

By default, the data inputs for SoS and SoS-TA are disabled. They must be enabled in the UI (look for ps_sos.sh and lsof_sos.sh in Manager > Data Inputs > Scripts) or from the command line (refer to the README file for more information on this) before they can report resource usage information.
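
If you prefer the command line, the inputs can also be enabled by overriding their stanzas in a local configuration file. This is only a sketch: the stanza names below are assumptions based on the script names, so copy the exact stanzas from the app's default/inputs.conf (or the TA's, on search peers):

# $SPLUNK_HOME/etc/apps/sos/local/inputs.conf
[script://./bin/ps_sos.sh]
disabled = 0

[script://./bin/lsof_sos.sh]
disabled = 0

Restart splunkd (or reload the inputs) afterwards so the change takes effect.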

- What if my search-head is configured to forward data back to the search peers? Any action required for the SoS app to work in that scenario?

If the search-head where the SoS app is installed forwards its own events to the indexers it queries (typically, to spray summarized events back to the indexers), you have to make sure that events for the _internal index are also forwarded. This is not the case by default due to the forwardedindex filters in outputs.conf, which prevent _internal events from being forwarded. To correct this, add the following configuration to $SPLUNK_HOME/etc/apps/sos/local/outputs.conf:

[tcpout]
forwardedindex.3.whitelist = _internal
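
Once forwarding is configured, a quick way to verify that the search-head's _internal events actually reach the indexers is to run a search like the following from the SoS search-head (replace my-search-head with the actual host name of your search-head):

index=_internal host=my-search-head earliest=-5m | stats count by host

If the count stays at zero, _internal events are still being filtered or the forwarding connection is not up.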

- What about my heavy forwarder? Can SoS query and report on it?

Yes. Please see this Splunk Answer for details on how to achieve this.

- How can I make sure that SoS reports accurate license usage information in the Metrics view?

1. Make sure to install SoS on the search-head that acts as the license master, or at least make sure that the license master's _internal index can be accessed by distributed search.

2. [Splunk 4.2.x only] Make sure to enable the scheduled search named sos_summary_volume_daily in Manager > Searches and Reports. This search populates the sos_summary_daily summary index, aggregating your license usage on a daily basis.

3. [Splunk 4.2.x only] Backfill the summary index "sos_summary_daily" by running the following command:

$SPLUNK_HOME/bin/splunk cmd python $SPLUNK_HOME/bin/fill_summary_index.py -app sos -name sos_summary_volume_daily -et -28d -lt -1d -j 10 -owner admin

Note: You do not need to perform steps #2 and #3 if you are running Splunk 4.0.x/4.1.x or 4.3.x.
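
After the backfill has run, you can sanity-check that the summary index was populated with a simple count. This only confirms that summary events exist; it does not interpret them:

index=sos_summary_daily earliest=-28d | stats count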


theunf
Communicator

Some questions and my current way to deploy S.o.S. ...

1st) Will forwarding _internal logs from the SH to the IDX eat part of the license?
2nd) Since S.o.S. has its own indexes with summarization and other data, setting _TCP_ROUTING on each default input plus indexAndForward=false will mean the sos index is only used on the INDEXERS, right?

My current way to deploy it is to keep the cluster master as a management point, but still use your SH feature by applying the full app only there and the TA on all the others. Plus, I add the SHs as distributed peers only on the master.

This way the app can see all instances, and all indexing stays local.
The negative is that it will treat them as indexers ;(
I tried to change the csv but it reverts back to indexer ;(


fwilmot
Splunk Employee

In a recent example, we also installed the S.o.S TA on a heavy forwarder as a means to understand network throughput and ingestion rate. We made the forwarder a search peer of the search head which had the S.o.S application installed, and it worked very well. We were not indexing and forwarding, and we also did not attempt with a universal forwarder. Give it a shot if you need to troubleshoot a specific forwarder.
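
For anyone wanting to reproduce this, the forwarder is added as a search peer either in Manager > Distributed search or from the CLI on the search-head. A sketch with a placeholder host name and placeholder credentials:

$SPLUNK_HOME/bin/splunk add search-server https://my-heavy-forwarder:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme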

hexx
Splunk Employee

Glad to hear of more advanced and creative deployments of SoS, thanks for sharing Fred!

In the future, we plan to offer alternative ways to drive the "Server to query" search control found in most of the app views, in order to accommodate deployments where the _internal index contains events for more hosts than just the search-head and the indexers.

Typically, this will involve a self-maintaining lookup that you'll be able to manually add hosts to. As an added benefit, the pulldown will populate much faster.

The target release for this feature is SoS 2.2 as of right now.


mikehale0
Explorer

I'm running Splunk 4.3. Here are the searcher and indexer configs I came up with, plus a query to verify everything is working:

searcher inputs.conf:

[monitor:///opt/splunk/var/log/splunk]
_TCP_ROUTING = indexers
index = _internal

searcher outputs.conf:

[tcpout]
forwardedindex.filter.disable = true
defaultGroup = indexers
disabled=false

[tcpout:indexers]
server = x.x.x.x:9997

indexer inputs.conf:

[splunktcp://9997]

[monitor:///opt/splunk/var/log/splunk]
_TCP_ROUTING = *
index=_internal

query to verify (should include all active searchers and indexers):

earliest=-1m index=_internal NOT sourcetype="searches" NOT sourcetype="splunk_intentions" | dedup host | table host, time

mikehale0
Explorer

Makes sense. Updating my post...


hexx
Splunk Employee

Note that since you have set forwardedindex.filter.disable = true, all forwardedindex directives are ignored, which is actually what you'd want in this scenario. For that reason, the following configuration lines are superfluous:
forwardedindex.0.whitelist = _internal
forwardedindex.1.whitelist = _audit

Likewise, indexAndForward = true seems unnecessary if you only want to keep your data in one location (the search-head or the indexers). I would recommend removing that directive and letting the search-head forward everything back to the indexers.
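
Putting the two suggestions together, a minimal search-head outputs.conf for this scenario would look something like the following. This simply mirrors the settings already discussed above (the server address is a placeholder):

[tcpout]
defaultGroup = indexers
forwardedindex.filter.disable = true

[tcpout:indexers]
server = x.x.x.x:9997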


Damien_Dallimor
Ultra Champion

I have the SoS app installed on my search head pool.

I also have the search-head Splunk server logs (from var/log/splunk, which get indexed into _internal) being forwarded to the indexer cluster.

Sideview Utils is required.

All working great, awesome insights to be had.

hexx
Splunk Employee

Sounds about right 🙂 Let us know if things don't work out. If that's the case, perhaps it would be best to open a new Splunk Answer specifically for that problem. But hopefully, everything is working just fine!


mikehale0
Explorer

No, it was never working. I needed to add
[splunktcp://9997]
to inputs.conf on the indexers and open up the firewall.
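
As an alternative to adding the stanza by hand, the receiving port can also be enabled from the CLI on each indexer (the credentials below are placeholders):

$SPLUNK_HOME/bin/splunk enable listen 9997 -auth admin:changeme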


hexx
Splunk Employee

@mikehale0 : Has forwarding from the search-head to the indexers ever worked? Was this failure introduced when adding the suggested configuration change?


mikehale0
Explorer

@hexx I did try that. Upon further investigation it looks like the search heads are having trouble connecting to the indexers:

01-26-2012 18:19:40.635 +0000 WARN  TcpOutputProc - Applying quarantine to idx=x.x.x.x:8089 numberOfFailures=5

hexx
Splunk Employee

@mikehale0 : Did my suggestion of adding the following configuration to $SPLUNK_HOME/etc/apps/sos/local/outputs.conf not work?
[tcpout]
forwardedindex.filter.disable = true


mikehale0
Explorer

Care to share your configuration? I'm trying to figure out how to index the logs from my search head pool on my indexers.



hexx
Splunk Employee

@tzhmaba2 : Untarring over the old version is a perfectly fine method to upgrade both the app and its TA, as long as you restart splunkd everywhere and Splunkweb on the search-heads to complete the upgrade.
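
For an automated rollout across many instances, the untar-over-the-old-version approach boils down to something like the following on each instance (the package file name is a placeholder; use the app package on the search-heads and the TA package on the other instances):

tar -xzf sos_app_package.tgz -C $SPLUNK_HOME/etc/apps/
$SPLUNK_HOME/bin/splunk restart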


tzhmaba2
Path Finder

Hi,

Any special instructions for upgrading the App and the TA? I am currently using 2.1.

  • just untar the file over the old version?
  • delete old version and create new? Can I take over the settings somehow from the previous version?
  • I have 3 SHs and 18 IDX, so please don't tell me to use the GUI to upgrade... 🙂 An automated process is highly preferred.

Regards,
Bartosz
