Splunk Search

Can we clone the HF to another one?

Vnarunart
Explorer

I would like to seek advice from experienced professionals. I want to add another heavy forwarder (HF) to my environment as a backup in case the primary one fails (on a different network, and not necessarily active-active). My environment: Splunk Cloud, with 1 Heavy Forwarder and 1 Deployment Server on premises.

1. If I copy a heavy forwarder (VM) from one vCenter to another, change the IP, and generate new credentials from Splunk Cloud, will it work immediately? (I want to preserve my existing configurations.)
2. I have a deployment server. Can I use it to configure two heavy forwarders? If so, what would be the implications? (Would there be data duplication, or is there a way to prioritize data?)

Or is there a better way I should do this? Please advise.

1 Solution

gcusello
SplunkTrust

Hi @Vnarunart ,

yes, you can clone the old HF to a new one, but remember to also change the hostname in $SPLUNK_HOME/etc/system/local/server.conf and $SPLUNK_HOME/etc/system/local/inputs.conf.
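For example, on the cloned VM the entries to update would look something like this (the hostname hf02 is only illustrative):

in $SPLUNK_HOME/etc/system/local/server.conf:

[general]
serverName = hf02

in $SPLUNK_HOME/etc/system/local/inputs.conf:

[default]
host = hf02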

Anyway, since you have a Deployment Server, you could create a new Splunk installation and manage both HFs with the DS, deploying the same apps to both.
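As a minimal sketch, the DS side could map both HFs to the same server class, for example (the hostnames hf01/hf02 and the app name hf_outputs are only illustrative):

in $SPLUNK_HOME/etc/system/local/serverclass.conf on the Deployment Server:

[serverClass:heavy_forwarders]
whitelist.0 = hf01
whitelist.1 = hf02

[serverClass:heavy_forwarders:app:hf_outputs]
restartSplunkd = true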

Ciao.

Giuseppe


Vnarunart
Explorer

Thank you very much for your comprehensive response. I have a follow-up question. In a scenario where we have two HFs, is there a way to determine which HF the data originated from when searching in Splunk Cloud?

Thank you for your advice and time.


gcusello
SplunkTrust

Hi @Vnarunart ,

this is a request that I posted on Splunk Ideas (https://ideas.splunk.com/ideas/EID-I-1731) and it is currently in the "Under consideration" state; if you think it's useful, please vote for it!

Anyway, you could add to each of your Heavy Forwarders a custom indexed field containing the name of the HF: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Data/Configureindex-timefieldextraction

in props.conf

[default]
TRANSFORMS-hf_name = my_hf_1

in transforms.conf:

[my_hf_1]
REGEX = .
FORMAT = my_hf_1::my_hf_1
WRITE_META = true

and then in fields.conf

[my_hf_1]
INDEXED=true

one set of stanzas, with a different field name, for each HF.
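Once the fields are indexed, a sketch of a search that uses them (field names as defined above):

index=* my_hf_1::my_hf_1

A common variation is to use a single field name (e.g. hf_name) with a different value on each HF (FORMAT = hf_name::hf01), so you can group across forwarders with | tstats count where index=* by hf_name.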

Ciao.

Giuseppe



Vnarunart
Explorer

I appreciate your advice.


gcusello
SplunkTrust

Hi @Vnarunart ,

let me know if I can help you more; otherwise, please accept one of the answers for the benefit of other Community members.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉

