All Apps and Add-ons

Installing Website Monitoring on Distributed Environment

Tohrment
Path Finder

We want to install Website Monitoring in a distributed environment. I know we can install the app directly on a heavy forwarder, but from a management standpoint that would complicate things, mainly because our process is deployment through a master node. Anyway, I just wanted some confirmation on how this monitoring might work in a distributed environment like ours. If the app is distributed through the master node, is there anything to prevent ALL of our HFs from testing the same site simultaneously and multiplying our ingest by a large factor? We have a pretty large environment, so we have several heavy forwarders.

Tags (1)
0 Karma
1 Solution

chrisyounger
SplunkTrust
SplunkTrust

Typically you will just want to have the website monitor running from just one heavy forwarder. You can do this a few ways:
- Push the "Website monitor" app to all your heavy forwarders from the deployment server, then manually activate the inputs on only one of the HFs.
- Push the app to one heavy forwarder (use a new server class) and define the inputs there.
- Push the app to one heavy forwarder, and also push an app that has the activated inputs (only a good option if you are very familiar with Splunk conf files).
- Install the app using the HF's app installation UI and activate the inputs there.

When you have a large environment like yours seems to be, it gets pretty crazy trying to use the deployment server to manage everything centrally. Most big customers eventually switch to using Ansible/Puppet/Chef combined with git version control. It's less confusing and less risky if you ONLY use your deployment server for pushing config to your universal forwarders.
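The second option above (a dedicated server class) might look roughly like the sketch below in serverclass.conf on the deployment server. The class name, app name, and hostname here are hypothetical placeholders, not values from the thread:

```ini
# serverclass.conf on the deployment server (names are illustrative)
[serverClass:website_monitor_hf]
# Match only the single heavy forwarder that should run the checks
whitelist.0 = hf-monitor-01.example.com

[serverClass:website_monitor_hf:app:website_monitoring]
restartSplunkd = true
stateOnClient = enabled
```

Because the whitelist matches only one host, only that HF receives the app and runs the inputs, so the site is polled once rather than once per forwarder.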



Tohrment
Path Finder

OK, that was what I was looking for: just confirmation of what I thought to be the case but hoped wasn't, lol. Thank you, Chris. If you could move your comment out, I will mark it as the answer.

0 Karma

chrisyounger
SplunkTrust
SplunkTrust

Glad it helps, mate. One day I expect Splunk will add better support for managing large environments from a central location...

0 Karma

chrisyounger
SplunkTrust
SplunkTrust

I assume you mean the cluster master? It is very unusual to push config to your heavy forwarders from the master node.

In any case, most people don't run their heavy forwarders in a clustered style like you have mentioned, i.e. where they all do the same tasks. That would unnecessarily result in duplicated data.

Most customers I see just manage each heavy forwarder individually: they install apps via the UI directly and configure polling-style inputs using the UI.

If you are using your heavy forwarders to act as a "parsing layer", then this is a common anti-pattern. Read this blog: https://www.splunk.com/blog/2016/12/12/universal-or-heavy-that-is-the-question.html

0 Karma

Tohrment
Path Finder

I apologize; I'm still fairly new to the Cluster Master (what I accidentally referred to as a master node) / Deployer / Deployment Server trinity. The HFs are not clustered, but we do have quite a few.

We use a deployment server to add new apps to the HFs. What I am trying to figure out is how to control this from a central point, without having to log on to each HF to update any URL monitors we implement, while not creating a massive amount of duplicate data.
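One hedged sketch of the "base app plus override app" approach Chris mentioned: push the app everywhere with its inputs disabled, then push a tiny enable app only to the one HF that should poll. The `web_ping` stanza name and URL below are illustrative assumptions and may differ from the Website Monitoring app's actual input names:

```ini
# App pushed to ALL heavy forwarders: website_monitoring/local/inputs.conf
# (stanza name and URL are illustrative)
[web_ping://homepage-check]
url = https://www.example.com
interval = 300
disabled = 1

# Small override app pushed ONLY to the monitoring HF:
# 0_enable_website_monitor/local/inputs.conf
# Splunk merges same-named stanzas across apps in ASCII order of the
# app directory name, so this app (sorting before "website_monitoring")
# takes precedence and re-enables the input on this one host.
[web_ping://homepage-check]
disabled = 0
```

The upshot: all URL definitions live in the base app and are managed centrally; which host actually runs them is controlled by where the small enable app is deployed.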

In the case of a UF vs. an HF, I have not found any documentation saying this app can run on a UF.

In the case of the other apps, it was easy to know exactly how they operated, since they monitor a local resource through a UF. Just set up the parameters and deploy to the UF, and boom, you have data and it is centrally managed. It also helps that those apps have install steps for a distributed environment such as ours, whereas the page for this one does not. I have only found instructions for installing it directly on the HF and managing it from that forwarder's web UI.

0 Karma