Getting Data In

How can I control or force the hostname to be a specific value via inputs.conf?

AK_Splunk
Explorer

My inputs.conf stanzas:

[monitor:///var/log/*]
disabled = 0
index = test_data
host = hostname1,hostname2,hostname3,hostname4

[monitor:///var/adm/*]
disabled = 0
index = test_data
host = hostname1,hostname2,hostname3,hostname4

[monitor:///etc/*]
disabled = 0
index = test_data
host = hostname1,hostname2,hostname3,hostname4

I have tried multiple solutions:

Case 1 --> added host_regex = <regular expression>; this did not work.
Case 2 --> added host = hostname1,hostname2,hostname3,hostname4. This worked for some log file paths (e.g. /var/log/messages got host = hostname1), but for other paths (e.g. /var/log/dnf.log) the host field came through as the literal string hostname1,hostname2,hostname3,hostname4.
Case 3 --> I tried a [default] stanza, as in the inputs.conf below. I cannot use this as a solution because the app is pushed from the deployment server (DS) to all universal forwarders (UFs), so hardcoding a hostname in the default stanza is not an option in my case.
inputs.conf

[default]
host = <hostname1>

[monitor:///var/log/*]
disabled = 0
index = test_data

[monitor:///var/adm/*]
disabled = 0
index = test_data

[monitor:///etc/*]
disabled = 0
index = test_data


gcusello
SplunkTrust

Hi @AK_Splunk,

Let me understand: where would you take the hostname value to associate with an event from: a path segment, the UF hostname, a part of the log itself, or somewhere else?

Anyway, using the "host" option in inputs.conf you can assign one fixed value to the host field. Note that host takes a single static string; a comma-separated list is not expanded into multiple hostnames, it is stored literally, which is why some of your events show host = hostname1,hostname2,hostname3,hostname4.

You can also extract the host field from the path of the file or from a part of its name, using the host_segment or host_regex options.
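For example, a minimal sketch in inputs.conf, assuming the hostname appears as the fourth path segment (e.g. /var/log/hosts/<hostname>/messages); the paths here are illustrative, not from your environment:

[monitor:///var/log/hosts/*/messages]
disabled = 0
index = test_data
host_segment = 4

Alternatively, host_regex is matched against the full path of each file, and the first capture group becomes the host value:

[monitor:///var/log/*.log]
disabled = 0
index = test_data
host_regex = /var/log/(\w+)\.log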

Outside inputs.conf, using props.conf and transforms.conf on heavy forwarders or indexers, you can also extract the host from the content of the event, overriding the default value.
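As a sketch, assuming the events contain a literal "hostname=" key and that your sourcetype is named your_sourcetype (both are assumptions to adapt to your data):

props.conf

[your_sourcetype]
TRANSFORMS-sethost = set_host_from_event

transforms.conf

[set_host_from_event]
REGEX = hostname=(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

Because this is an index-time transform, it must be deployed on the first heavy forwarder or indexer that parses the data, not on the universal forwarders.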

Ciao.

Giuseppe
