Hi all, I am new to Splunk and am facing the following scenario.
We have installed a forwarder on one of our production Solaris devices, intending to send its logs to Splunk for monitoring purposes. In Forwarder Management, we can see that this Solaris host is phoning home, which suggests that it is connecting to Splunk.
App: the host is deployed to the Sol-prodweb app, which has an inputs.conf listing the paths that are supposed to be monitored, e.g.:
[monitor:///var/adm/authlog]
index = sol-prodweb
sourcetype = linux_secure
disabled = 0
We have also verified that the Indexes page shows the sol-prodweb index and that the /var/adm/authlog path does contain logs.
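For reference, the effective monitor stanzas can also be confirmed on the UF itself with btool (the path below assumes a default /opt/splunkforwarder install):
/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug
The --debug flag shows which .conf file each setting actually comes from.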
Server class: we have also created a sol-prodweb server class, with this device's IP in the whitelist.
Are there any important settings or steps that we have missed, since we are still unable to see any logs or data from this particular host at all?
Did you check whether the UF service has permission to read the log file (i.e. the user that the UF runs as)?
As @gcusello suggested, if you are receiving internal logs from this UF, you should see permission errors in the logs.
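For example, assuming the forwarder is installed under /opt/splunkforwarder and runs as a dedicated non-root user, you could check with something like:
ps -ef | grep splunkd
ls -l /var/adm/authlog
su splunk -c "head -1 /var/adm/authlog"
The first command shows which user splunkd runs as, the second shows the file's owner and mode, and the last one (run as root, with "splunk" replaced by your actual service account) is a quick read test; if it fails with "Permission denied", the UF cannot read the file.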
Let me check the permissions and the user used to run the Splunk forwarder.
I have just checked, and we do have read permission for that path.
However, searching the _internal index for that particular host finds no results at all.
What should our next step be?
Hi @johnlzy0408,
let me understand:
some stupid questions:
- did you deploy the inputs.conf to this UF using the Deployment Server?
- or did you copy it onto the UF manually?
- are you receiving logs from other UFs?
- are you receiving Splunk internal logs from this UF?
If you're not receiving logs from other UFs, it means you have to enable receiving on the Indexer and disable (or open) the local firewall on it.
If you have no internal logs, it means there's a connection problem to analyze.
The first two questions are to understand how the inputs.conf arrived on the UF.
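For example, receiving can be checked or enabled on the Indexer either in the web UI (Settings > Forwarding and receiving > Configure receiving) or from the CLI; the path below assumes a default /opt/splunk install:
/opt/splunk/bin/splunk enable listen 9997
netstat -an | grep 9997
After enabling, the netstat output on the Indexer should show port 9997 in LISTEN state.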
Ciao.
Giuseppe
Hi Giuseppe,
Thanks for the reply.
Ans: we copied it manually.
Yes, we are receiving from other UFs; receiving has been enabled on the Indexer and it can receive other logs, e.g. from Windows hosts.
Does Solaris require any other special settings? It's just this particular index and host that isn't sending, even though the settings are the same.
Hi @johnlzy0408,
Solaris needs a special configuration for scripted inputs (e.g. if you have to deploy the Splunk TA for Unix and Linux), and you can find those issues solved in the Community; there shouldn't be any special problems with plain log reading. Are you receiving Splunk internal logs from that UF?
Ciao.
Giuseppe
I don't think we are receiving any logs from that UF at all.
But Splunk Forwarder Management does show it as connected. Is there any other way to check why not even the internal logs are coming through?
Hi @johnlzy0408,
you can check Splunk internal logs with a simple search:
index=_internal host=your_solaris_hostname
If this search returns results, it means that the connection is established and we have to debug the input phase.
If it returns no results, we have to debug the connection.
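Note that phoning home to the Deployment Server uses the management port (8089 by default), so a host can appear in Forwarder Management even when data forwarding on port 9997 is blocked. A complementary check on the Indexer side is to look for incoming forwarder connections in the metrics log, e.g.:
index=_internal source=*metrics.log* group=tcpin_connections hostname=your_solaris_hostname
If this also returns nothing, the UF is not opening a data connection to the Indexer at all.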
Ciao.
Giuseppe
I have done a search with that index and host name, but no results were found.
So how can we debug the connection? I can see this host in Forwarder Management, so I thought the connection was not the problem.
Do we check whether the firewall allows the traffic, and check whether the Splunk forwarding port is open, using the command below?
netstat -an | grep 9997
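We could also look at the forwarder's own splunkd.log for forwarding errors (assuming the default install path), e.g.:
grep TcpOutputProc /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20
Connection failures to the Indexer usually show up there as TcpOutputProc errors.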
anything else?
Hi @johnlzy0408,
the first test is to use telnet from the UF to the Indexer:
telnet ip_indexer 9997
If you cannot connect to the Indexer on port 9997, you could have one of the following issues:
- a firewall (network or local) blocking the connection,
- receiving not enabled on the Indexer on port 9997,
- a wrong outputs.conf on the UF.
I can help you check only the last issue: could you share your outputs.conf?
try to use the outputs.conf of another UF that's running.
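For reference, a minimal outputs.conf on a working UF usually looks something like this (the group name and the Indexer address below are placeholders):
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = ip_indexer:9997
Make sure the server line points to the same Indexer address and port (9997) that the other, working UFs use.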
Ciao.
Giuseppe