I am taking over (temporarily) as the Splunk admin at my work, and I am trying to figure out how the Splunk environment was set up. There is a Splunk server that I can access by console and by web portal. No forwarding or receiving is configured on it. There is a second server that all logs are being sent to; I'm told it is just a syslog server and storage (data in raw form). I am told to work off of serverclass.conf and inputs.conf for all my app/log needs, which I know how to use. What I want to know is: where or how do I go about figuring out how these two servers are working and interacting? There's a chance we will want to remove that second server and have logs go directly to Splunk.
As an admin, you can go to the setup pages to see whether forwarding or receiving is turned on. Also, from the _internal index, you should be able to see entries showing which servers are forwarding data. You should find entries for this in splunkd.log, although I don't recall the exact text. The metrics.log will also tell you which server is doing the indexing and forwarding.
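To see which hosts are sending data in, a search along these lines over the _internal index usually works (the group=tcpin_connections events come from metrics.log on the receiving side; the field names here are from memory, so verify them against your own events):

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen, sum(kb) AS total_kb BY hostname, sourceIp
```

Any host that shows up here is actively forwarding to this instance.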
Splunk Enterprise 6.2.x would also give you details in the "Distributed Management Console". I call it a mini-S.o.S. app embedded as part of the default package (though the actual S.o.S. app at https://apps.splunk.com/app/748/ does much more).
All the best for the new role and Happy Splunking !!
PS: Let us know how your experience as a Splunk Admin has been.
The fastest way I can think of to check is to run the btool command on any Splunk Enterprise instance you find, against the inputs.conf and outputs.conf files, to confirm basic forwarding and receiving.
For example, on the Splunk instance with the UI:
1. At the CLI, run ./splunk btool --debug inputs list | grep splunktcp and review the output to see which inputs.conf files under /local have a [splunktcp://port] stanza defined for inbound forwarder communications. You might also grep again for udp port definitions if you believe the syslog server may not use a Splunk Forwarder to communicate.
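For reference, the stanzas that grep would turn up look something like this (the ports here are common defaults used as examples, not necessarily what your instance uses):

```
# inputs.conf - receive data from Splunk forwarders
[splunktcp://9997]
disabled = 0

# inputs.conf - receive raw syslog directly over UDP
# (only present if the syslog source bypasses a forwarder)
[udp://514]
sourcetype = syslog
```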
2. If there is a [splunktcp://port] stanza, you can check the details of the forwarders by searching the _internal index. Details are in the post here.
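As a sketch of such a search (this assumes the forwarders are also sending their own _internal data to this instance; fwdType and version are fields I would expect in the tcpin_connections metrics, but confirm against your events):

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats values(fwdType) AS forwarder_type, latest(version) AS splunk_version BY hostname
```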
3. If there are only [udp://port] stanzas defined, implying the syslog server may forward a copy of the data to the Splunk instance, you can search Splunk for that data using the UDP port as the source.
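For example, assuming the syslog data arrives on UDP port 514 (substitute whatever port your [udp://port] stanza actually defines):

```
index=* source=udp:514
| stats count BY host, sourcetype
```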
The same goes for outputs.conf. At the CLI, run ./splunk btool --debug outputs list | grep -C1 server and review the output to see which outputs.conf files under /local have a server = splunkhost:port setting defined for outbound communications.
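For reference, a typical forwarding configuration in outputs.conf looks like this (the group name and host:port are placeholders, not values from your environment):

```
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunkhost:9997
```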
Thanks much for the step-by-step. I guess we may not really know what the heck this second server is. It exists, it has a univ.forwarder installed and is sending logs like the other 200 servers that I found in the forwarder management as well as the _internal index.
I'm being told that it took the spot of an old RSA enVision server and is storing syslogs without Splunk touching the log format. I found an old instruction saying there is a directory on that server that Splunk was told to monitor; this might be outdated. It seems to me all the servers in our environment are sending directly to Splunk, since a) they are found in forwarder management, b) they are configured to send to 'splunk:9997', and c) I haven't had to do anything yet on this mystery syslog server. Does this sound about right?
Absolutely. Having a syslog server listen and throw logs down onto disk independent of a Splunk Forwarder is already a Splunk architecture best practice. The second piece is having a Splunk Forwarder tail those logs on disk and send them off. This syslog/Splunk combo is rock solid, and is the standard when architecting HA options for data delivered via syslog.
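On the syslog server itself, that second piece is usually just a monitor stanza in the forwarder's inputs.conf, something like the following (the path and host_segment value are illustrative assumptions; point it at the directory your syslog daemon actually writes to):

```
# inputs.conf on the syslog server's universal forwarder
# assumes syslog writes per-host files under /data/syslog/<host>/
[monitor:///data/syslog/*/messages.log]
sourcetype = syslog
host_segment = 3
```

Here host_segment = 3 tells Splunk to take the third path segment (the originating hostname directory) as the host field, rather than the syslog server's own name.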