Data Onboarding
Hi all, I am new to Splunk and am facing the following scenario.
We have installed a forwarder on one of our production Solaris devices, intending to send logs to Splunk for monitoring purposes. On Forwarder Management, we can see that this Solaris host is phoning home, which suggests it is connecting to Splunk.
App: the host is deployed to the Sol-prodweb app, which has an inputs.conf. Its contents list the paths that are supposed to be monitored, e.g.:
[monitor:///var/adm/authlog]
index = sol-prodweb
sourcetype = linux_secure
disabled = 0
We have also verified that the Indexes page shows the sol-prodweb details and that the /var/adm/authlog path does have logs in it.
Server class: we have also created a sol-prodweb server class, with this device's IP in the whitelist.
Are there any important settings or steps we have missed, since we are still unable to see any logs or data from this particular host at all?
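One quick way to confirm that the stanza above actually reached the forwarder is btool, which prints the merged configuration the UF is running with. A minimal sketch, assuming the default UF install path /opt/splunkforwarder (the helper name and path are my assumptions, not from the thread):

```shell
# Hypothetical helper: run btool against a given Splunk home to show the
# merged inputs configuration; prints a notice if no UF is installed there.
uf_btool_inputs() {
  if [ -x "$1/bin/splunk" ]; then
    "$1/bin/splunk" btool inputs list --debug
  else
    echo "no Splunk UF found under $1"
  fi
}

# On the forwarder itself (install path is an assumption; adjust as needed):
uf_btool_inputs "${SPLUNK_HOME:-/opt/splunkforwarder}"
```

If the [monitor:///var/adm/authlog] stanza does not appear in the btool output, the deployed app was never applied on the forwarder and a restart or redeploy is the first thing to try.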
Did you check whether the UF service has permission to read the log file (i.e. the user that the UF runs as)?
As @gcusello suggested, if you are receiving internal logs from this UF, you should see permission errors in those logs.
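That permission check can be scripted; a small sketch (the file path comes from the thread, but the helper is hypothetical, and which user runs splunkd varies per install, so run it as that user, found via `ps -ef | grep splunkd`):

```shell
# Hypothetical helper: report whether the current user can read a file.
# Run it as the same user that runs the splunkd process on the forwarder.
check_uf_read() {
  if [ -r "$1" ]; then
    echo "readable: $1"
  else
    echo "not-readable: $1"
  fi
}

# Path from the thread's inputs.conf stanza:
check_uf_read /var/adm/authlog
```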
Let me check the permissions and the user used to run the Splunk forwarder.
I have just checked, and we do have read permission for that path.
However, searching the _internal index for that particular host finds no results at all.
What should our next step be?
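If the indexer shows nothing in _internal, the forwarder's own log is the next place to look; a sketch, assuming the default UF install path (the helper name is mine, not from the thread):

```shell
# Hypothetical helper: print recent ERROR/WARN lines from a splunkd.log
# path, or a notice if the file is absent.
uf_log_errors() {
  if [ -f "$1" ]; then
    grep -iE 'error|warn' "$1" | tail -20
  else
    echo "not found: $1"
  fi
}

# On the forwarder (default install path assumed; adjust as needed):
uf_log_errors "${SPLUNK_HOME:-/opt/splunkforwarder}/var/log/splunk/splunkd.log"
```

TcpOutputProc errors here usually mean the UF cannot reach the indexer at all.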
Hi @johnlzy0408,
let me understand:
- you installed a UF on Solaris,
- you deployed TAs on the UF,
- you're not receiving logs on the indexer from that UF.
A few basic questions:
- how did you deploy the TAs on the UF: manually or via the deployment server?
- if manually, did you restart Splunk on the UF?
- do you have Splunk internal logs from that UF on the indexer?
- are you receiving logs from other UFs on that indexer?
If you're not receiving logs from other UFs, it means you have to enable receiving on the indexer and disable the local firewall on it.
If you have no internal logs, it means there's a connection problem to analyze:
- intermediate firewalls,
- use "telnet ip_indexer 9997" to check this.
The first two questions are to understand how inputs.conf arrived on the UF.
Ciao.
Giuseppe
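The telnet test from the checklist above can also be scripted; a sketch using bash's /dev/tcp redirection (a bash-only feature, so on Solaris plain telnet or nc may be the better tool; "ip_indexer" is the thread's placeholder for your indexer's address, and 9997 is Splunk's default receiving port):

```shell
# Hypothetical helper: report whether a TCP port on a host is reachable.
check_port() {
  # The subshell opens fd 3 to host:port and closes it on exit.
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "open: $1:$2"
  else
    echo "closed: $1:$2"
  fi
}

# Run from the UF; replace the placeholder with the real indexer address.
check_port ip_indexer 9997
```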
Hi Giuseppe,
thanks for the reply.
- how did you deploy the TAs on the UF: manually or via the deployment server?
Ans: we did it manually.
- are you receiving logs from other UFs on that indexer?
Yes, we are receiving from other UFs; receiving has been enabled on the indexer, and it can receive other logs, such as Windows.
Does Solaris require any other special settings? It is only this particular index and host that is not sending, even though the settings are the same.
Hi @johnlzy0408,
Solaris needs a special configuration for scripts (e.g. if you have to deploy the Splunk TA for Unix and Linux), which you can find solved in the Community; there shouldn't be any special problems with log reading. Are you receiving Splunk internal logs from that UF?
Ciao.
Giuseppe
I don't think we are receiving any logs from that UF at all.
But Splunk forwarder management does show a connection. Is there any other way to check why not even the internal logs are going through?
Hi @johnlzy0408,
you can check Splunk internal logs with a simple search:
index=_internal host=your_solaris_hostname
If you have results for this search, it means the connection is established and we have to debug the input phase.
If you have no results, we have to debug the connection.
Ciao.
Giuseppe
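A related check on the indexer side (my suggestion, not from the thread) is to look at the indexer's own metrics for incoming forwarder connections; if the Solaris host never appears here, no TCP session from it has been accepted:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen BY hostname sourceIp
```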
I did a search with the index and host name, but no results were found.
So how can we debug the connection? I can see this host in Forwarder Management, so I thought the connection was not a problem.
Do we check whether the firewall allows the traffic? We can check if the Splunk forwarding port is open by using the command below:
netstat -an | grep 9997
Anything else?
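Besides netstat, the UF's own CLI can report its forwarding state directly; `splunk list forward-server` distinguishes active targets from configured-but-inactive ones. A sketch, assuming the default install path (the helper wrapper is hypothetical):

```shell
# Hypothetical helper: print forward-server status via the splunk CLI,
# or a notice if the CLI is not present at the given Splunk home.
uf_forward_status() {
  if [ -x "$1/bin/splunk" ]; then
    "$1/bin/splunk" list forward-server
  else
    echo "splunk CLI not found under $1"
  fi
}

# On the forwarder (default install path assumed; adjust as needed):
uf_forward_status "${SPLUNK_HOME:-/opt/splunkforwarder}"
```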
Hi @johnlzy0408,
the first test is using telnet from the UF:
telnet ip_indexer 9997
If you cannot connect to the indexer on port 9997, you could have one of the following issues:
- you didn't enable receiving on the indexer; but you said you're receiving from other UFs, so this issue should be excluded;
- there's a local firewall on the indexer; but again, since you're receiving from other UFs, this should be excluded;
- there's a firewall between the UF and the indexer;
- there's a local firewall on the UF;
- outputs.conf on the UF isn't correct.
I can help you check only the last issue: could you share your outputs.conf?
Try using the outputs.conf of another UF that's working.
Ciao.
Giuseppe
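For comparison, a minimal outputs.conf for a UF sending to a single indexer usually looks like this (the group name and address are placeholders, not values from the thread; restart the UF after editing):

```
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = <indexer_ip>:9997
```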
