Getting Data In

Configure UF to Send to Clustered Indexers


Hello everyone,


I want to create a new Splunk environment. I'm still learning; I'm new to Splunk. My current environment includes:

- 3 indexers
- 3 search heads
- 1 cluster master that also serves as a License master

- 1 Universal forwarder

** [ All servers are Linux servers ] **



I want to build my Splunk environment in a "Cluster configuration mode":

 I want to send data from the universal forwarder to the Cluster master, and from there to my indexers.

My main goal is to collect logs from different application servers (via syslog or HTTP) in order to monitor their status. I want to create a separate index for each app: for example, logs sent from an app called app1 will go into an index called "index_app1".


I would like to get help with the following questions:

1. How can I check whether the cluster master knows about the universal forwarder?

2. I want to understand how to configure the inputs.conf file on my universal forwarder:

I want to allow each app to send logs to the UF on a different port (TCP or UDP):

For example: application A will send logs to my universal forwarder on port 4928, and application B will send logs on port 4929.

3. How can I send the messages to the cluster master and have them routed to the correct index?

All messages sent from application A should go to index_app1

All messages sent from application B should go to index_app2

Thank you for your help!



I believe by "cluster configuration mode" you are referring to the "indexer cluster" and "indexer discovery" features of Splunk.  An indexer cluster enables indexers to hold backup copies of each other's data in case of the failure of an indexer.  Indexer discovery is where servers with data to send to an indexer consult the Cluster Master (CM) to find out which indexer to use.

In no case does data flow through the CM.  The Cluster Master is a manager, not a conduit.

Apps that send data in syslog or HTTP format should not be talking to a universal forwarder as a UF understands neither protocol.

For syslog, the apps should be sending to a dedicated syslog server which saves the data to disk files.  The UF then monitors those disk files and sends the data to indexers.  Alternatively, you could use the Splunk Connect for Syslog product to collect syslog data and forward it directly to the indexers.
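As a sketch of that pattern, if your syslog server writes each app's messages to its own directory, the UF's inputs.conf could monitor those directories and assign one index per app. The paths, app name, and index names below are illustrative, and the indexes must already exist on the indexers:

```
# $SPLUNK_HOME/etc/apps/TA_syslog_inputs/local/inputs.conf on the UF
# (paths and index names are examples -- adjust to your syslog server's layout)

[monitor:///var/log/remote/app1/*.log]
index = app1
sourcetype = syslog
disabled = false

[monitor:///var/log/remote/app2/*.log]
index = app2
sourcetype = syslog
disabled = false
```

This is also how each app's data ends up in its own index: the index setting in each monitor stanza, not a separate network port per app.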

For HTTP, apps should send to a heavy forwarder (HF) with HTTP Event Collector (HEC) enabled.  Another option is to enable HEC on your indexers and use a load balancer to distribute the events evenly among the indexers.
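On the heavy forwarder (or indexers), a HEC input is also just an inputs.conf stanza. A minimal sketch, where the token value, stanza name, and index are placeholders you would generate for your own environment:

```
# inputs.conf on the heavy forwarder (or indexers)
# (token, stanza name, and index are example values)

[http]
disabled = false

[http://app1_hec]
token = 11111111-2222-3333-4444-555555555555
index = app1
sourcetype = app1:json
disabled = false
```

Apps would then POST events to https://<hf-host>:8088/services/collector with an "Authorization: Splunk <token>" header; each token can be tied to its own index, which again gives you one index per app.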

It adds no value to prefix index names with "index_".

To answer your questions:

1) The CM does not know about universal forwarders.  That's the job of the Monitoring Console and, optionally, the Deployment Server.

2) To learn how to configure inputs on a UF, see the inputs.conf documentation.  As mentioned above, though, you will not be configuring input ports on the UF.

3) Applications do not send messages to the CM.  As already mentioned, applications send data to forwarders, indexers, or syslog servers.

The index to which data should be written is defined in the inputs.conf file on the forwarder or in the HEC stream.

If this reply helps you, Karma would be appreciated.


Hi @dordavid,

yours isn't a question, it's a consultancy engagement! 😉

Anyway, to better understand your question, it's best to divide it into three parts:

  1. how to configure and deploy configurations to Universal Forwarders;
  2. how to collect logs;
  3. how to send logs to the indexer cluster.


Installation and upgrades aren't managed by Splunk (for now, but that's changing!), so you have to do them manually.

Configuration checking and pushing is managed by a dedicated server called the Deployment Server: in a lab you can share this role with another server (but not the Master Node, Indexers, or Search Heads), while in a production environment (more than 50 target servers) you should use a dedicated server.

To understand how to do this, read the Deployment Server documentation carefully.

Anyway, the steps are:

  • plan your deployment, listing all the servers to manage with the Deployment Server and identifying the apps to deploy to each;
  • check that all the firewall routes are open between:
    • UFs and the Deployment Server on port 8089,
    • UFs and the Indexers and Master Node on port 9997;
  • install a Deployment Server, ideally on a dedicated server;
  • install the Universal Forwarder on the target servers;
  • create a Technical Add-On (called e.g. TA_Forwarders) containing two files (deploymentclient.conf and outputs.conf):
    • in the first, put the address of the Deployment Server,
    • in the second, the addresses of the indexers (or of the Master Node);
  • copy TA_Forwarders to every target server in $SPLUNK_HOME/etc/apps;
  • restart Splunk on every target server.
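The two files in TA_Forwarders might look like this (the hostnames below are placeholders for your environment):

```
# $SPLUNK_HOME/etc/apps/TA_Forwarders/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```

```
# $SPLUNK_HOME/etc/apps/TA_Forwarders/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
```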

Once the target servers are connected to the Deployment Server, you'll be able to see them in [Settings -- Forwarder Management].


To collect logs you have several options:

If you have logs in files, you can use a Universal Forwarder with the monitor stanzas described in the inputs.conf documentation.

If instead you have to collect syslog, you can write it to the filesystem and read the files as in the previous point; if you want to receive syslog directly, you cannot use a Universal Forwarder: you have to use a Heavy Forwarder (a full Splunk instance that forwards all logs to the Indexers) and follow the network-input instructions in the documentation.


To send logs to the Indexers, use the Indexer Discovery method described in the documentation.
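With indexer discovery, the outputs.conf in your TA points at the Cluster Master instead of listing the indexers. A sketch, where the group name, master_uri, and pass4SymmKey are placeholders for your own values:

```
# outputs.conf on the Universal Forwarder, using indexer discovery
# (master_uri and pass4SymmKey are placeholders)

[indexer_discovery:cluster1]
pass4SymmKey = <your_secret>
master_uri = https://cm.example.com:8089

[tcpout:cluster1_group]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_group
```

The UF asks the Cluster Master for the current list of peer indexers and load-balances across them; the data itself still flows directly to the indexers, never through the CM.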

As for where to set the correct index: the index is set in inputs.conf (see part 2).


