Getting Data In

Updating cluster peer nodes in real time for an app that is changing in real time

robertlynch2020
Motivator

Hi

What is the best way to make sure your nodes get real-time updates when your app is changing all the time?

Step 1) Use the deployment server to copy files to the master node's master-apps directory ($SPLUNK_HOME/etc/master-apps)

Step 2) The master node then uses the "configuration bundle" to send out the updates from master-apps to the peers?

If these steps are correct, how do you make sure steps 1 and 2 happen automatically?

Thanks

Robert 


richgalloway
SplunkTrust
Why is the app changing so often? What kind of changes are happening? Is it necessary for those changes to propagate immediately? If the changes require a rolling restart, then understand that making rapid changes will affect the performance of your indexer cluster.
The only supported method for updating apps on an indexer cluster is via the Cluster Master.
---
If this reply helps you, Karma would be appreciated.

robertlynch2020
Motivator

Hi 

Thanks for getting back to me. I will list out the changes and their frequency. I am not sure which changes need a rolling restart; I am on Splunk 8.0.3.

What is changed frequently?

Lookuptable.csv - This is changed a lot by alerts that run every minute in the system. (So I am not sure if each node needs access to its own up-to-date lookuptable.csv, or whether this is handled on the search head?)

What is changed once a day?

Dashboards updated and new ones added

What is changed infrequently?

  • props.conf
  • indexes.conf
  • transforms.conf

Any help would be wonderful.

Cheers

Rob


anm_mporter
Explorer

Information related to searches, such as lookup tables, is replicated to the indexers directly via knowledge bundle replication. Basically, every time the local contents of an app on the search head change, those changes are replicated to the indexers. This is why the only apps you need on your indexers are the ones that perform index-time operations. The system even keeps track of which search head each bundle comes from, so search heads can have different versions of the same assets and the indexers use the right one for the right search.

Be careful with large lookups, however. Because the bundle has to be resent every time a lookup changes, large bundles can add significant search latency and can delay when the contents of the lookup are actually used in searches.

For frequently changing lookups, consider preventing the lookup from replicating by using local=true in your lookup search command. The lookup then runs only on the search head, after the search results arrive there. The advantage is instant access to new lookup values and no search latency from frequent bundle updates. The downsides are that you can't leverage multiple indexers, so filtering on lookup values may be slow (tip: do the lookup as late in the search pipeline as possible), and you can't use the lookup in data models.
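A minimal sketch of what that looks like in a search. The lookup definition name (ip_to_user) and the field names are hypothetical, just to show where local=true goes:

    index=web status=403
    ``` filter as much as possible first, since the lookup only runs on the search head ```
    | lookup local=true ip_to_user ip OUTPUT user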

If you have to use your frequently changing lookup on the indexers, consider truncating the lookup frequently to ensure it stays small. I do this for an IP-to-username lookup used in an accelerated data model. The source query only grabs the latest login per machine/user and only runs every hour, so the bundles are small and replicate less often.
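For example, an hourly scheduled search along these lines keeps the lookup to one row per IP (the index, sourcetype, and field names here are hypothetical):

    index=auth sourcetype=login earliest=-1h
    ``` keep only the most recent login per source IP ```
    | stats latest(user) AS user BY ip
    ``` overwrite the lookup file so it never grows ```
    | outputlookup ip_to_user.csv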

 

Back to the larger question of automatic master node pushes: you can probably do this, but you shouldn't. If you set up your deployment server and master node correctly, you can push packages to the master node's master-apps directory instead of the apps directory. There is a presentation with the details of the command line to validate and push a cluster bundle. Combine that with cron and you have something like automated app pushes to the indexers.
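A rough sketch of that cron job, run on the master node. The install path is an assumption, and treat the sequencing as an outline only: a real script would need to parse the validation status before deciding to apply, which is exactly the hard part.

    #!/bin/sh
    SPLUNK_HOME=/opt/splunk

    # Ask the peers to validate the new bundle (no restart involved)
    "$SPLUNK_HOME/bin/splunk" validate cluster-bundle

    # Inspect validation results; a real script would parse this output
    "$SPLUNK_HOME/bin/splunk" show cluster-bundle-status

    # Push the bundle to the peers; --answer-yes skips the prompt
    "$SPLUNK_HOME/bin/splunk" apply cluster-bundle --answer-yes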

The biggest problems are: what happens if the bundle fails validation? What happens if there is a problem during deployment and the rolling restart? I strongly recommend deploying cluster bundles during a maintenance window and manually monitoring the process. A safe partial automation is to set up deployment to the master node as above, so all you have to do from the GUI is validate and deploy.
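A minimal serverclass.conf sketch for that deployment-server setup (the server class name, hostname, and app name are all hypothetical):

    [serverClass:cluster_master]
    whitelist.0 = master-node.example.com
    # Install pushed apps into master-apps instead of the usual etc/apps
    targetRepositoryLocation = $SPLUNK_HOME/etc/master-apps

    [serverClass:cluster_master:app:my_indexer_app]
    # Don't let the deployment client restart the master node;
    # the cluster bundle push handles peer restarts
    restartSplunkd = false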

Another tip: consider creating intermediate heavy forwarders (IF) to capture your indexing traffic before it hits your indexer cluster, i.e. set your forwarders to use the IF layer as their "indexers". This effectively removes parsing from your indexing layer AND means almost every app you would have deployed to your indexers now goes on the intermediate forwarders, which work happily with your deployment server. Search-time configuration still gets replicated to the indexer cluster via knowledge bundle replication.
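A sketch of the outputs.conf you would deploy to the endpoint forwarders so they treat the intermediate layer as their indexers (the hostnames and port are hypothetical):

    [tcpout]
    defaultGroup = intermediate_forwarders

    [tcpout:intermediate_forwarders]
    # The heavy forwarders parse the data, so parsing-time apps
    # (props/transforms) live here and are managed by the deployment server
    server = ihf1.example.com:9997,ihf2.example.com:9997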

richgalloway
SplunkTrust

Thanks for clarifying the question.

Lookup files are included in the search bundle that is sent from the search head to the indexers. There is no need for your app to update them. The exception is if a lookup file is very large and pushes the size of the bundle past 1GB.
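If a single lookup does threaten that limit, one option (an aside, not something mentioned above) is to exclude it from the knowledge bundle in distsearch.conf on the search head and access it with local=true instead. A sketch, assuming a hypothetical app and lookup named myapp/biglookup.csv; the value is a regex matched against the file path, so check the distsearch.conf spec for the exact semantics:

    [replicationBlacklist]
    # Keep the oversized lookup out of the knowledge bundle;
    # searches that still need it must use "| lookup local=true ..."
    excludebiglookup = apps[/\\]myapp[/\\]lookups[/\\]biglookup\.csv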

Dashboards are not used on indexers, so there's no need to push them.

Infrequent changes to config files should be pushed from the Cluster Master as needed.

---
If this reply helps you, Karma would be appreciated.