Knowledge Management

What is the best practice for installing and managing apps in a distributed environment?

jagadeeshm
Contributor

We have a Splunk installation in a distributed environment with search head clustering and indexer clustering enabled, managed via a master node.

We are currently in the process of ingesting network logs into Splunk.

During the POC (non-clustered environment) we installed several networking Splunk apps/add-ons like Arista Switch Source, F5 Sources, Palo Alto Firewall, etc. We deployed them on the search head and configured them to read syslog directly on a UDP port. [Network devices send logs via syslog directly, without the need for a UF.]

Now in PROD we are in a clustered environment, and I am wondering what the best way to manage configurations is. I see at least two options:

1 - Install the apps on the search heads and configure them the same way we did in the POC, where the search head reads the data off the UDP port and forwards it to the indexers.

2 - Install the apps on the search heads but don't use the apps to configure the inputs and source types. Manage those outside the apps' installation and push them via the master node to the indexers. This way the load is on the indexers, which read the data off the UDP ports and index it.

Are there other approaches (please help outline pros and cons) that don't use a UF?

Thanks!


aaraneta_splunk
Splunk Employee

Hi @jagadeeshm - Did one of the answers below help provide a solution you were looking for? If yes, please don't forget to click "Accept" below the best answer to resolve your post and up-vote any comments that were helpful. If you still need help, please leave a comment to provide some additional feedback. Thanks!


gcusello
SplunkTrust

Hi jagadeeshm,
To ingest network logs (TCP or UDP), I suggest using two dedicated Heavy Forwarders that act only as syslog servers, with a load balancer in front of them so you don't lose logs during maintenance.
This way you separate syslog ingestion from indexing and searching.
This means you have to build an App structured into different TAs (a possible layout is sketched after the list):

  • Main App for the Search Heads;
  • TA_Indexers, containing indexes.conf, props.conf and transforms.conf;
  • TA_HF, containing inputs.conf for syslog (TCP or UDP);
  • TA_Forwarders (if you also have to ingest logs from servers).
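
For illustration only, with placeholder app/TA names rather than names from any specific add-on, that split might look like this on disk:

    myvendor_app/             -> Search Heads (dashboards, saved searches, macros)
    TA_myvendor_indexers/     -> Indexers (default/indexes.conf, props.conf, transforms.conf)
    TA_myvendor_hf/           -> Heavy Forwarders (default/inputs.conf with the TCP/UDP stanzas)
    TA_myvendor_forwarders/   -> Universal Forwarders (default/inputs.conf with file monitors)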

As for the main App, you can install it on the Deployer and deploy it to the Search Heads, or install it on only one Search Head and (if you have configured it) the app will be replicated to the other Search Heads (see http://docs.splunk.com/Documentation/Splunk/6.5.0/DistSearch/AboutSHC).
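
If you use the Deployer, a minimal sketch of the push, assuming default paths and an admin account (the host name and credentials are placeholders):

    # On the Deployer: stage the main app, then push the bundle to the SHC members
    cp -r myvendor_app $SPLUNK_HOME/etc/shcluster/apps/
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme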

Bye.
Giuseppe

jagadeeshm
Contributor

OK, so where do I install my apps in this case? The Arista Networks app, for example?


gcusello
SplunkTrust

Hi jagadeeshm,

You have to follow the documentation of each App: the specific documentation usually describes how to install it in a distributed environment.
If it doesn't:
On Search Heads:
- copy the App to one Search Head and it will be replicated to all the others,
- disable the network inputs,
- disable the indexes in indexes.conf (copy yourapp/default/indexes.conf to yourapp/local/indexes.conf and add "disabled = true" to each index), as sketched below.
On Indexers:
- copy the App to the Master Node (if you have an Indexer Cluster) or to all Indexers (using a Deployment Server if you don't have an Indexer Cluster, or manually),
- disable the network inputs (if you have Heavy Forwarders), otherwise enable them.
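
A minimal sketch of those Search Head overrides, assuming the app ships a UDP input on port 514 and an index called vendor_index (both placeholders; use the stanza names the app actually defines):

    # yourapp/local/inputs.conf -- stop the Search Head from listening on the syslog port
    [udp://514]
    disabled = true

    # yourapp/local/indexes.conf -- stop the Search Head from creating the app's index
    [vendor_index]
    disabled = true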

As for the network inputs, you have to decide where to put them: if you can, the best solution is to have two dedicated Heavy Forwarders behind a Load Balancer, install the App there, and enable the network inputs; otherwise you can enable the network inputs on the Indexers.
Note that in either case you need a Load Balancer to distribute syslog between the servers, otherwise you have a single point of failure.
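
On the Heavy Forwarders behind the load balancer, the corresponding input is enabled instead; for example (the port, sourcetype and index are illustrative placeholders, not values from a specific add-on):

    # TA_HF/local/inputs.conf on each Heavy Forwarder
    [udp://514]
    connection_host = ip
    sourcetype = syslog
    index = network
    disabled = false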

Bye.
Giuseppe


jagadeeshm
Contributor

Gotcha. So the recommendation is to install the apps but maintain the configuration files manually, which means overriding the app settings before deploying into Splunk.


gcusello
SplunkTrust

Hi jagadeeshm,
performing the operation I described (copying yourapp/default/indexes.conf to yourapp/local/indexes.conf and disabling the indexes there) is the best way to manually maintain your app.
Bye.
Giuseppe


koshyk
Super Champion

We centrally manage the code in a Git repo and push it to a staging server for consistency.

For the rest of your cluster (a rough push workflow is sketched after this list):
- Forwarders: via the deployment server
- Indexers: via the cluster master
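
A rough sketch of that push, assuming the Git checkout is available on the deployment server and cluster master, and that $SPLUNK_HOME is the default install path (the directory names are placeholders):

    # Forwarder apps: copy to the deployment server and reload its server classes
    rsync -a repo/deployment-apps/ $SPLUNK_HOME/etc/deployment-apps/
    $SPLUNK_HOME/bin/splunk reload deploy-server

    # Indexer apps: copy to the cluster master, then push the bundle to the peers
    rsync -a repo/master-apps/ $SPLUNK_HOME/etc/master-apps/
    $SPLUNK_HOME/bin/splunk validate cluster-bundle
    $SPLUNK_HOME/bin/splunk apply cluster-bundle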


jagadeeshm
Contributor

What you mentioned is a very general practice, which we have adopted in our PROD cluster as well. My question is more about the configurations we need to manage that are part of the apps themselves.


jagadeeshm
Contributor

For example, we manage all the indexes we create via Git. But the Arista Networks app has an indexes.conf file that creates an app-specific index. Do we manually remove this conf file and manage it outside the app? Things like that are what I am worried about. The most important one: where do you define your inputs when you are using apps? Most of the apps come with wizards which, when used, create inputs on the search heads. The question is, how do you manage those?


koshyk
Super Champion

We never use the index names from a specific app, so that we can provide granular user permissions. We just use our own index names, with the same properties as the index the app specifies. Sourcetypes etc. should exactly match the app's built-ins.

We create an app called "mycompany_indexes_config" with a local/indexes.conf and manage all indexes in there. That is the central app pushed to all PROD/TEST environments, so you have consistent indexes across them.
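
For example, such a central indexes app might contain a stanza along these lines (the index name and sizing values are made up; the idea is to mirror the properties the vendor app sets for its own index):

    # mycompany_indexes_config/local/indexes.conf
    [network_arista]
    homePath   = $SPLUNK_DB/network_arista/db
    coldPath   = $SPLUNK_DB/network_arista/colddb
    thawedPath = $SPLUNK_DB/network_arista/thaweddb
    frozenTimePeriodInSecs = 7776000
    maxTotalDataSizeMB = 512000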
