Getting Data In

Is there a way to get the <hostname_HF> automatically assigned, with a token or extracted from the default fields?

dkeck
Influencer

Hi,

I am trying to figure out a way to create a new field on a heavy forwarder. I want to add a field "splunk_parser" to every event, similar to the field "splunk_server", so I can tell where an event was parsed and which HF it came from.

The best approach would be to get the hostname from the Linux machine directly. For a small number of HFs I could specify the hostname manually, but not for a larger number of them.

So what I came up with is this:

props.conf
[host::*]
TRANSFORMS-splunk_parser= splunk_parser_ext


transforms.conf
[splunk_parser_ext]
INGEST_EVAL = splunk_parser="<hostname_HF>"

fields.conf
[splunk_parser]
INDEXED=true
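
Once such an indexed field exists, it should show up in a tstats search like the sketch below ("index=*" is just a placeholder scope, narrow it as needed):

verification search (SPL)
| tstats count where index=* by splunk_parser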


Is there a way to get the <hostname_HF> assigned automatically, with a token or extracted from the default fields?

Any hint is highly appreciated.

Thank you

David

1 Solution

jotne
Builder

You are close 🙂

We also needed to get the name of the HF the data passed through, so here is the solution we found.

Make an app and send it to all HF servers:

transforms.conf
[set_hf_server_name]
INGEST_EVAL = splunk_hf_name := splunk_server

props.conf
[source::...]
TRANSFORMS-set_hf_server_name = set_hf_server_name

[source::...]
means the transform applies to any event passing through this server, regardless of source.

INGEST_EVAL = splunk_hf_name := splunk_server
gets the name of the current Splunk server and writes it to the indexed field splunk_hf_name.
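
If you want to search the field directly as splunk_hf_name=<value> from the search bar rather than via tstats, the same fields.conf approach from the question can be applied for this field on the search head. A sketch:

fields.conf (search head)
[splunk_hf_name]
INDEXED=true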

dkeck
Influencer

Hi,

Thank you for the solution, it works great 🙂


jotne
Builder

You are welcome.

We did this at two levels. Servers that collect data (syslog/HEC/Azure) get a field called splunk_collector_name. The data then passes a set of heavy forwarders (used to tag the customer name and do filtering), where we add the field splunk_hf_name. This way we can see which syslog server an event passed through and which heavy forwarder, and it is easy to see whether load balancing of the syslog servers/HFs etc. works fine.
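
A sketch of what that two-level setup could look like, reusing the INGEST_EVAL pattern from the answer above (the field names follow this post; the stanza names are assumptions, not taken from a real deployment):

On the collector tier (syslog/HEC/Azure inputs):

transforms.conf
[set_collector_name]
INGEST_EVAL = splunk_collector_name := splunk_server

props.conf
[source::...]
TRANSFORMS-set_collector_name = set_collector_name

On the heavy forwarder tier, the app from the answer above (set_hf_server_name) stays as-is. One thing to keep in mind: data that has already been parsed by an upstream Splunk instance is normally not re-parsed downstream, so where each transform actually fires depends on your forwarding topology.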

dkeck
Influencer

Did you get this working for UFs as well, i.e. when the UF is an intermediate forwarder or syslog forwarder?

I have the feeling they don't have a "splunk_server" field to use.


jotne
Builder

I have not tried it on a UF. All UFs do have a hostname that you will see. Syslog sources that send to a server do not have a hostname directly (if it is not set in the packet itself).
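
One idea for UFs, not from this thread and only a sketch: a UF does not run INGEST_EVAL, but inputs.conf has a static _meta setting that adds indexed fields at input time. The field name splunk_uf_name and the value uf01 below are hypothetical, and the value has to be set per forwarder (for example templated by a deployment tool), so it does not remove the per-host configuration the original question wanted to avoid:

inputs.conf (on the UF)
[default]
_meta = splunk_uf_name::uf01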


dkeck
Influencer

Hm, no one?

Is there any other way to get this done, with a different approach?
