Deployment Architecture

With search head pooling, is it possible to run some apps off the shared area, and some locally to the server?

Glenn
Builder

I'd like to take advantage of search head pooling mainly because of its handling of scheduled searches (I'd be happy to go with rsync-ing data, but can't come up with a good way to deal with turning off all the schedules). Plus there are other benefits.

However, for ease of installation, we set up the search heads' system config (such as authentication.conf, web.conf, etc.) via apps pulled from the deployment server. They do not live in the main $SPLUNK_HOME/etc/system area, but under $SPLUNK_HOME/etc/apps. We also have a set of normal knowledge apps with views etc.

I enabled search head pooling in the normal way, copied the user directories and only my knowledge/view based apps to the shared storage. Because of the line "Important: You can choose to copy over just a subset of apps and user subdirectories" in the documentation http://docs.splunk.com/Documentation/Splunk/4.3.1/Deploy/Configuresearchheadpooling I expected that apps in both the shared storage and $SPLUNK_HOME/etc/apps would show up, but they don't. Only the apps in the shared storage do, which means all the system config is missing, which means Splunk doesn't work (runs on wrong port, can't authenticate etc).
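For reference, this is roughly the sequence I followed (the shared mount path and app name below are just examples; the actual layout is whatever "splunk pooling enable" created on your shared storage):

splunk stop
splunk pooling enable /mnt/splunk_pool
cp -r $SPLUNK_HOME/etc/users/* /mnt/splunk_pool/etc/users/
cp -r $SPLUNK_HOME/etc/apps/my_views_app /mnt/splunk_pool/etc/apps/
splunk start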

Is there a way to use both shared and non-shared apps at the same time with search head pooling?

Cheers,

Glenn

0 Karma
1 Solution

Damien_Dallimor
Ultra Champion

When using Search Head Pooling, only the etc/system directory will be local to the Search Head.

etc/users and etc/apps are referenced off the shared storage.

Agreed that the documentation is a tad ambiguous.

But if you think about it: say two Search Heads were in a pool behind a load balancer. If they had "non-shared" apps, the user experience would differ depending on which Search Head a user got routed to.

So if you want to use Deployment Server to push out your Search Head config, then in serverclass.conf you could try specifying the targetRepositoryLocation property as the path to your shared storage (the default path is the deployment client's local etc/apps directory).

More docs here
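For example (untested, and the server class name, app name, and shared mount path are placeholders), something along these lines in serverclass.conf on the deployment server:

[serverClass:pooled_search_heads]
whitelist.0 = searchhead*.example.com
targetRepositoryLocation = /mnt/splunk_pool/etc/apps

[serverClass:pooled_search_heads:app:my_views_app]
restartSplunkd = true

Since every pool member reads apps from the same share, you would probably only leave one pool member enabled as a deployment client so that multiple clients aren't all writing to the same location. Depending on the client's serverRepositoryLocationPolicy, the client-side deploymentclient.conf may also need adjusting (see the deploymentclient.conf examples further down in this thread).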


0 Karma

rmorlen
Splunk Employee

I'm not exactly sure what you mean by "Only the apps in the shared storage do, which means all the system config is missing, which means Splunk doesn't work (runs on wrong port, can't authenticate etc)."

We control most of our environment using apps. We do have settings within authentication.conf and authorize.conf that are unique to specific servers, though. We put the "common" stuff within the app (e.g. splunkserverapp/local/authorize.conf), and this gets deployed using the deployment server like any other app. The server-specific settings go where they belong, in $SPLUNK_HOME/etc/system/local/authentication.conf and $SPLUNK_HOME/etc/system/local/authorize.conf.
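As a sketch of that split (the stanza contents are illustrative, not our actual settings, and the LDAP stanza omits the other required attributes):

splunkserverapp/local/authorize.conf (common, deployed to every server):

[role_power_users]
importRoles = user
srchIndexesAllowed = main;summary

$SPLUNK_HOME/etc/system/local/authentication.conf (server-specific, maintained locally):

[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 636
SSLEnabled = 1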

Works like a champ.

0 Karma

rmorlen
Splunk Employee

We run pooling with 4 search heads. Only one of them is set up as a deployment client; we disable it on the other 3. On those 3 search heads, $SPLUNK_HOME/etc/system/local/deploymentclient.conf contains:

[deployment-client]
disabled = true

On the search head that we do want to be the deployment client, $SPLUNK_HOME/etc/system/local/deploymentclient.conf contains:

[deployment-client]
disabled = false
phoneHomeIntervalInSecs = 1800

This allows only one of those searchheads to receive app updates from the deployment server.

We then have an app called serverdeploymentclient that has in the local directory a deploymentclient.conf file that contains:

[target-broker:deploymentServer]
targetUri = splunkdm:8089

0 Karma

rmorlen
Splunk Employee

Ignore the GIANT font. That was commented out.

0 Karma

rmorlen
Splunk Employee

Sorry. For the serverdeploymentclient app we also have:

[deployment-client]
disabled = false
serverEndpointPolicy = acceptAlways
phoneHomeIntervalInSecs = 600
repositoryLocation = /splunknfs/splunk/etc/apps
repositoryLocation = /opt/splunk/etc/apps
serverRepositoryLocationPolicy = rejectAlways

0 Karma

lmyrefelt
Builder

You could try creating two server/deployment classes for your search heads: one that distributes apps to the search head that is "in control" of the shared data pool (i.e. all apps that should go to all search heads), and another deployment class where you send the apps you want installed under the local $SPLUNK_HOME/etc/apps directory.
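A rough serverclass.conf sketch of that idea (class names, hostnames, and the shared path are placeholders):

[serverClass:shared_pool_apps]
whitelist.0 = searchhead01.example.com
targetRepositoryLocation = /mnt/splunk_pool/etc/apps

[serverClass:local_sh_apps]
whitelist.0 = searchhead*.example.com
# no targetRepositoryLocation: apps go to the client's default local etc/apps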

0 Karma

Kzark
New Member

What about the Deployment Monitor app, though? Its instructions state that if you're using search head pooling you should run it on only one search head. If pooled search heads can only run the apps in shared storage, how do you do this?

From here:

•Multiple search heads. If you have multiple search heads and you have enabled search head pooling on them, you need to enable the deployment monitor on only one search head. (It's best to enable search head pooling before enabling the deployment monitor.) If you enable the deployment monitor on just a single search head without setting up pooling across all your search heads, you will see no or incomplete data, limited to the indexers communicating with that particular search head.

0 Karma
