Getting Data In

Configure a heavy forwarder to forward data to a specific index on an indexer cluster.

BoscoBaracus
Engager

Good morning All,

I have been trying to figure out how I can create a data input on a heavy forwarder to forward data to a specific index located on an indexer cluster. I have three indexers organised in a cluster. The indexers and the heavy forwarder are managed by a management node. I have used the Windows Universal Forwarder to forward events to a particular index on the indexer group (cluster), but I'm struggling to find a way to configure the same thing on a Linux-based HF. Basically, what I'm trying to achieve is to configure a syslog port (a custom port, let's say 1514) to receive syslog data from a particular syslog host and forward it to a custom index created on the indexer group (cluster). When adding a port in Data Inputs, I can specify a local index, but not a remote, clustered index.

On the HF, in the Data Forwarding section, I can see that all data is forwarded to the indexer cluster.

Would anyone know how I can achieve this?

Any help would be much appreciated.

Kind Regards,

Mike.

1 Solution

gcusello
SplunkTrust

Hi @BoscoBaracus ,

let us know if we can help you more, or please accept one answer for the other people of the Community.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated by all the contributors 😉


livehybrid
SplunkTrust

Hi @BoscoBaracus 

The reason that your HF does not allow you to select an index from your Indexer cluster is that it is not aware of what indexes exist on the cluster. 

To get around this problem you can create the index definition on the HF, not to index the data into, but so that it displays in the available list of indexes that you can select from in the UI.

I assume you are not using a Deployment Server here, which is why you are making changes to create the input in the UI? 

If you're able to create the inputs.conf directly on the server or via a deployment server, then you shouldn't need to create the index.

You can either create the index in the UI or via a custom app with an indexes.conf file.
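
For example, a minimal sketch of such an app on the HF, assuming a hypothetical app name hf_index_defs and an example index name net_syslog (use the index names that actually exist on your cluster):

$SPLUNK_HOME/etc/apps/hf_index_defs/local/indexes.conf

# Index definition only so the index shows up in the HF's UI index picker;
# nothing is stored locally as long as indexAndForward stays disabled (the default when forwarding).
[net_syslog]
homePath   = $SPLUNK_DB/net_syslog/db
coldPath   = $SPLUNK_DB/net_syslog/colddb
thawedPath = $SPLUNK_DB/net_syslog/thaweddb

After a restart of the HF, the index should appear in the Data Inputs index dropdown.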

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing


BoscoBaracus
Engager

Good morning livehybrid,

Thank you very much for your suggestion. This sounds exactly like what I was looking for.

Could you please point me in the right direction on how to create such an index definition?

I will also try to research the subject myself.

Again, much appreciated.

Kind Regards,

Mike.


isoutamo
SplunkTrust
The easiest way is to copy your indexes.conf from your cluster manager. Then just ensure that it doesn't contain any SmartStore or other unknown targets, etc. Then create a new app which contains only this indexes.conf plus the other files needed by any app, and install this app on your HF.

But as said, you should use a real syslog server instead of using Splunk TCP/UDP inputs to get a syslog feed into Splunk. Even though Splunk can do it, there are some side effects. Probably the biggest is that you will lose all syslog events while you are restarting the HF, and a restart can take several minutes; a syslog server or a clustered syslog implementation avoids that.

You can easily find some old posts where we have discussed this and given some hints on how to do it.
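
As a rough sketch, such an app (the name here is just an example) could look like:

$SPLUNK_HOME/etc/apps/cluster_index_defs/
    default/app.conf
    default/indexes.conf    (copied from the cluster manager, with any SmartStore remotePath and [volume:...] stanzas removed)
    metadata/default.meta

You can then verify what the HF actually sees with: splunk btool indexes list --debug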

gcusello
SplunkTrust

Hi @BoscoBaracus ,

completing this answer: HFs are usually managed by a Deployment Server, so you don't need to use the GUI to configure inputs, and you can set the correct index directly in the conf file (see the sketch below).
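
For instance, a minimal inputs.conf pushed from the DS to the HF might look like this (port 1514 is the one from your example; the index name net_syslog is only a placeholder for whatever index you created on the cluster):

[udp://1514]
index = net_syslog
sourcetype = syslog
connection_host = ip

[tcp://1514]
index = net_syslog
sourcetype = syslog

The index = attribute works the same way in monitor:// stanzas if you later switch to file-based syslog collection.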

Ciao.

Giuseppe


BoscoBaracus
Engager

Good morning gcusello,

Aha, makes sense.

I will try to put the HF under a Deployment Server then :-)

Much appreciated.

Kind Regards,

Mike.


gcusello
SplunkTrust

Hi @BoscoBaracus ,

good for you, see you next time!

let us know if we can help you more, or please accept one answer for the other people of the Community.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated by all the contributors 😉



BoscoBaracus
Engager

Good morning gcusello,

Many thanks for your prompt response.

I'm not sure if I was unclear, but I don't quite follow your suggestion. We do not have any combined roles. Our indexer cluster (three indexers) is managed by a dedicated, separate Splunk management node. The heavy forwarder is a separate, standalone Splunk HF managed by the management console. We also have separate search heads. All according to good practice, as far as I'm concerned.

I already have a few applications installed on the HF which correctly forward data to the indexer group, to specific indexes. I know how to configure inputs.conf for a particular application to forward to the indexer group and a specific index.

My question is: how can I configure a receiving port under Data Inputs (TCP or UDP) to forward to the indexer group, to a specific index? I may have several different sources (syslog etc.) which I want to forward to the indexer group into separate, dedicated indexes. I don't want to mix data from different sources in the same index.

I hope that clarifies things a bit.

Kind Regards,

Mike.


gcusello
SplunkTrust

Hi @BoscoBaracus ,

only to be clear:

  • Indexers are managed by a management node called the Cluster Manager,
  • Heavy Forwarders are managed by a management console called the Deployment Server,
  • these two roles must be located on two different Splunk servers.

About syslog ingestion, you could use Splunk HF Network inputs, but it isn't a best practice.

The best approach is to configure, on your HFs, one or more rsyslog inputs that receive the syslogs and write them to different text files.

Then you can read these text files using one or more file monitoring inputs and ingest them into Splunk.

You can configure the destination index in these Splunk input files.

To configure rsyslog inputs, you can read at https://www.rsyslog.com/doc/index.html

To configure Splunk file monitoring inputs, you can read at https://docs.splunk.com/Documentation/Splunk/9.4.1/Data/MonitorfilesanddirectorieswithSplunkWeb
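
A minimal sketch of that setup, assuming rsyslog 8.x syntax, an example source IP, and the hypothetical index name net_syslog:

/etc/rsyslog.d/10-netdevices.conf

# receive syslog on UDP 1514 and write events from one host to a dedicated file
module(load="imudp")
input(type="imudp" port="1514")
if $fromhost-ip == '192.0.2.10' then {
    action(type="omfile" file="/var/log/remote/netdevice.log")
    stop
}

inputs.conf on the HF

[monitor:///var/log/remote/netdevice.log]
index = net_syslog
sourcetype = syslog

Remember to add log rotation for /var/log/remote so the files don't grow unbounded.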

Ciao.

Giuseppe


BoscoBaracus
Engager

Good morning gcusello,

Again, many thanks for your response and suggestion.

Not sure why we would have to install rsyslog to receive data locally on the HF and then monitor the resulting file just to forward it to the indexer group. Splunk has tons of built-in data input options which should easily accommodate this. I can do this on a Windoze server in about 5 minutes using a UF. Not sure why it is so difficult to forward data to a dedicated index residing on a remote indexer directly from the HF.

Will keep on digging.

Again, many thanks for your time and suggestions.

Kind Regards,

Mike.


gcusello
SplunkTrust

Hi @BoscoBaracus ,

as I said, you can do it, and it's surely easier than editing the conf file, but it isn't a best practice, because you must configure it manually; using a conf file managed by the DS instead gives you centralized management.

About configuration on a Windows UF: you cannot use the GUI, because the UF doesn't have one; you must configure inputs using the conf files or a CLI command.

In addition, with Splunk network inputs, when you restart Splunk for maintenance or anything else you lose syslogs, whereas with rsyslog, which is a standard Linux component (you don't need to install it!), you can keep receiving logs even while Splunk is down.

So, based on Splunk best practices, I suggest using rsyslog, but you're free to use a different solution.

Ciao.

Giuseppe


gcusello
SplunkTrust

Hi @BoscoBaracus ,

first of all, clustered Indexers are managed by the Cluster Manager and Heavy Forwarders by the Deployment Server, and it isn't a best practice to use the same server for both roles, especially if the DS must manage more than 50 clients.

Anyway, the situation is the same:

  • on the HF, you have to configure all log forwarding to the Indexers (see the outputs.conf sketch below),
  • on the HF, you have to create some inputs indicating the indexes in which to store the data,

in this way all your logs are forwarded to the correct indexes.
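
For reference, the forwarding part is just outputs.conf on the HF; a minimal sketch with example hostnames and ports (you appear to have this already, since your Data Forwarding page shows everything going to the cluster):

[tcpout]
defaultGroup = idx_cluster

[tcpout:idx_cluster]
server = idx1.example.local:9997, idx2.example.local:9997, idx3.example.local:9997

The per-input index = setting in inputs.conf then decides which index each source lands in on the cluster; indexer discovery through the Cluster Manager is an alternative to listing the indexers statically.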

Just some additional hints:

As anticipated, don't use the Cluster Manager as a Deployment Server; use a different server, possibly a dedicated one. If you have few clients to manage (fewer than 50) you can use another server of your Splunk infrastructure, but not the Cluster Manager, the Search Heads, or the Indexers.

Ciao.

Giuseppe
