Splunk Search

Can we clone the HF to another one?

Vnarunart
Explorer

I would like to seek advice from experienced professionals. I want to add another heavy forwarder to my environment as a backup in case the primary one fails (on a different network and not necessarily active-active). I have Splunk Cloud, and on premises I have one Heavy Forwarder and one Deployment Server.

1. If I copy a heavy forwarder (VM) from one vCenter to another, change the IP, and generate new credentials from Splunk Cloud, will it work immediately? (I want to preserve my existing configurations.)
2. I have a deployment server. Can I use it to configure two heavy forwarders? If so, what would be the implications? (Would there be data duplication, or is there a way to prioritize data?)

Or is there a better way I should do this? Please advise.

1 Solution

gcusello
SplunkTrust

Hi @Vnarunart ,

yes, you can clone the old HF to a new one, but remember to also change the hostname in $SPLUNK_HOME/etc/system/local/server.conf and $SPLUNK_HOME/etc/system/local/inputs.conf.
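
For reference, this is a minimal sketch of the two settings to update after cloning (new-hf-01 is a placeholder hostname):

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
serverName = new-hf-01

# $SPLUNK_HOME/etc/system/local/inputs.conf
[default]
host = new-hf-01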

Anyway, since you have a Deployment Server, you could create a new Splunk installation and manage both HFs with the DS, deploying the same apps to both.
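
On the DS, a minimal serverclass.conf sketch for that could look like the following (the hostnames hf-01 and hf-02 and the app name cloud_outputs_app are placeholders for your own):

# $SPLUNK_HOME/etc/system/local/serverclass.conf on the Deployment Server
[serverClass:heavy_forwarders]
whitelist.0 = hf-01
whitelist.1 = hf-02

# deploy the same app (e.g. the one holding your Splunk Cloud outputs) to both HFs
[serverClass:heavy_forwarders:app:cloud_outputs_app]
restartSplunkd = true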

Ciao.

Giuseppe


Vnarunart
Explorer

Thank you very much for your comprehensive response. I have a follow-up question: in a scenario where we have two HFs, is there a way to determine which HF the data originated from when searching in Splunk Cloud?

Thank you for your advice and time.


gcusello
SplunkTrust

Hi @Vnarunart ,

this is a request I posted on Splunk Ideas (https://ideas.splunk.com/ideas/EID-I-1731), and it is currently in the "Under consideration" state; if you think it's useful, please vote for it!

Anyway, you could add a custom indexed field containing the name of the HF on each of your Heavy Forwarders: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Data/Configureindex-timefieldextraction

in props.conf:

# apply the transform to all sourcetypes on this HF
[default]
TRANSFORMS-hf_name = my_hf_1

in transforms.conf:

# write the index-time field my_hf_1=my_hf_1 into every event
[my_hf_1]
REGEX = .
FORMAT = my_hf_1::my_hf_1
WRITE_META = true

and then in fields.conf (on the search tier, i.e. in Splunk Cloud):

[my_hf_1]
INDEXED = true

Define one such set of stanzas on each HF, each with its own name and value (e.g. my_hf_2 on the second one).
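
Once the field is indexed and declared in fields.conf, you can use it at search time to tell the HFs apart; for example, to count events that came through the first HF (your_index is a placeholder index name):

index=your_index my_hf_1=my_hf_1
| stats count by sourcetype

Because the field is written at index time, it can also be used with tstats for fast per-HF counts.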

Ciao.

Giuseppe

 


Vnarunart
Explorer

I appreciate your advice.


gcusello
SplunkTrust

Hi @Vnarunart ,

let me know if I can help you more; otherwise, please accept one answer for the benefit of other Community members.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉

