I have a heavy forwarder capturing incoming logs from thousands of Linux hosts, which are sending their OS logs. As you may know, Linux logs do not identify themselves with an IP in their log sources.
Is there a way to capture their IP from the receiving port and parse it into a new field, such as src_ip?
I know we can add identifying information in the hosts' outputs.conf files, but we are unable to do that due to our circumstances.
The reason I am trying to accomplish this is that a lot of the hosts have a generic name such as "Linux", which is of no value as it does not help from an analytical perspective.
Are you collecting all of the Linux logs with a syslog server? If so, you could have your syslog server write the incoming data out to a directory structure with one of the parent directories named after the source IP. Then you could parse the source IP out of the source field in Splunk.
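For example, if the syslog server writes each sender's events under a directory named after its IP (the path and sourcetype name below are assumptions — adjust to your layout), a search-time extraction in props.conf on the search head could pull the IP out of the source path. A sketch:

```ini
# props.conf (search head)
# sourcetype name "linux_syslog" is an assumption
[linux_syslog]
# e.g. source = /data/syslog/10.1.2.3/messages
# capture the IPv4 address that appears as the parent directory
EXTRACT-src_ip = /(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})/[^/]+$ in source
```

The `in source` clause tells Splunk to run the regex against the source field rather than the raw event, so no change to the events themselves is needed.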
You can use connection_host = ip in inputs.conf to force the logs coming from that Linux host to have its IP in the 'host' field.
This way, you will be able to check the logs coming from each Linux server (using the IP, which is unique). Also, Splunk assigns a hostname upon install — check in $SPLUNK_HOME/etc/system/local/inputs.conf.
How would that work @lakshman239? The UF is on each Linux box itself, so it is either receiving syslog from localhost or using a file monitor input, where that setting is not even available as far as I know.
Yes @FrankVl, as far as I understand, the UF is deployed on each of the Linux data sources and uses a monitor stanza in inputs.conf to forward events. So the connection_host parameter should be able to help.
Except that connection_host is not available for monitor inputs, only for network inputs.
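Right — connection_host applies to network input stanzas ([tcp://...], [udp://...], [splunktcp://...]), where there is an inbound connection whose IP can be used. A sketch of the distinction (port and paths are just examples):

```ini
# inputs.conf on the receiving instance

# Works: network input -- the host field can be set to the sender's IP
[udp://514]
sourcetype = syslog
connection_host = ip

# Does NOT support connection_host: a monitor input reads local files,
# so there is no network connection to take an IP from
[monitor:///var/log/messages]
sourcetype = linux_messages_syslog
```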
I notice the sending IP of the UF is being logged under _internal as sourceHost... Any more ideas on how to capture that data and ensure it's available in index=os?
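One way to exploit that (a sketch, not a definitive answer): the receiver's metrics.log records forwarder connections under group=tcpin_connections, including the forwarder's hostname and sourceHost (its IP). You could schedule a search that writes a host-to-IP lookup table, then apply it to index=os via an automatic lookup. The lookup file name here is an assumption:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(sourceHost) AS src_ip BY hostname
| rename hostname AS host
| outputlookup host_to_src_ip.csv
```

With that CSV in place, an automatic lookup (props.conf LOOKUP- stanza matching on host) would populate src_ip at search time without touching the forwarders. Note this maps the forwarder's host name to its IP, so it only helps if the host field values in index=os match the hostnames the forwarders report.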