I have successfully installed my universal forwarder and it has a connection to Splunk. I am getting data (not sure if it's my Snort logs) in source=_internal with host=bss (the hostname of my Splunk forwarder), but Splunk for Snort is not indexing the data. Any help on how to properly configure a universal forwarder to send data to the correct index for Splunk for Snort would be appreciated!
I configured my forwarder inputs.conf to the following:
[default]
host = bss
disabled = false
sourcetype = snortalertfull
source = snort
I configured my forwarder outputs.conf to the following:
[tcpout]
defaultGroup =

[tcpout:default-auto1b-group]
server = 10.10.20.103:997
Then I configured my Splunk server's inputs.conf to the following:
[default]
host = Splunk
connectionhost = bss # hostname for my forwarder
sourcetype = snortalertfull
source =
disabled = 0
Splunk Web GUI:
--I have set snort's index to: snort_alert
--I have set snort's source type to: snort
And my forwarder is monitoring the correct Snort files, based on the command ./splunk list monitor.
Not sure what I am doing wrong. Let me know if you need any more information to figure out how I can configure my universal forwarder to send to the correct index so the Splunk for Snort app can index it.
In your forwarder's outputs.conf, is there a typo in the port (9997 instead of 997)?
By "Splunk's inputs.conf" I hope you mean the one on the receiver (indexer). If that's the case, you don't need to set sourcetype and source there. Please see below for reference.
At forwarder side
[default]
host = bss

[monitor:///var/log/snort/snort.log.*]
index = snort_alert
disabled = false
sourcetype = snort_alert_full
source = snort
[tcpout]
defaultGroup = default-auto1b-group

[tcpout:default-auto1b-group]
server = 10.10.20.103:9997
At receiver / indexer side
[default]
host = Splunk # it should be your receiver host name

[splunktcp://:9997]
connection_host = bss # hostname of my forwarder (this setting is optional)
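One more thing worth checking: the snort_alert index has to exist on the indexer, or the forwarded events won't be searchable there. If the Splunk for Snort app created it for you it should already be present, but a minimal indexes.conf stanza on the indexer would look roughly like this (a sketch with default-style paths; the GUI under Settings > Indexes writes the equivalent):

```ini
# indexes.conf on the indexer -- defines the snort_alert index
[snort_alert]
homePath   = $SPLUNK_DB/snort_alert/db
coldPath   = $SPLUNK_DB/snort_alert/colddb
thawedPath = $SPLUNK_DB/snort_alert/thaweddb
```

You can confirm the index exists with a quick search like index=snort_alert over All Time, or via Settings > Indexes in the GUI.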
That was a typo. And yes, Splunk's inputs.conf here is the one on the receiver (indexer). Still unsuccessful after restarting both. I still get data indexed from _internal, but it's only from the metrics.log file on the forwarder:
host = bss
source = /opt/splunkforwarder/var/log/splunk/metrics.log
sourcetype = splunkd
My Forwarder Health app also sees the forwarder client, but no "data coming into Splunk (Not Internal)". I have restarted my Splunk and re-downloaded Splunk for Snort, but I am getting this _internal error:
index=_internal NOT CASE(TcpOutputProc) source!=*metrics.log NOT (INFO DeployedServerclass) NOT (INFO DC:UpdateServerclassHandler) host=bss _raw="01-29-2016 05:43:48.949 -0600 ERROR TcpOutputFd - Connection to host=10.10.20.103:9997 failed" | cluster showcount=t | search cluster_count=239
My interpretation: the forwarder failed to connect to the indexer. But when I run netstat -an | grep 10.10.20.103 (the IP address of the indexer), I do get an established connection, and nmap shows the port is open on the indexer. There is also no router or firewall between them. Maybe the information above from the _internal index will help.
Is your Splunk enabled with SSL? Try
telnet 10.10.20.103 9997 from the forwarder and see if the connection is established.
Can someone please explain why inputs.conf needs to be configured on the receiver side (indexer) if the receiving port (9997) has already been configured (perhaps through the GUI)?
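As far as I understand it, enabling receiving in the GUI (Settings > Forwarding and receiving > Receive data) is not a separate mechanism from inputs.conf: the GUI simply writes a splunktcp stanza into an inputs.conf under an app's local directory. So the GUI and hand-editing are two routes to the same setting, and you only need one of them. The stanza it writes looks like this (the exact file path is an assumption and may vary by version):

```ini
# e.g. $SPLUNK_HOME/etc/apps/search/local/inputs.conf (typical location)
[splunktcp://9997]
```

If you have already enabled receiving through the GUI, adding the same stanza by hand is redundant.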