
Splunk for Snort: How to configure a universal forwarder to monitor Snort data for indexing?

w0lverineNOP
Path Finder

I have successfully installed my universal forwarder and it has a connection to Splunk. I am getting data (not sure if it's my Snort logs) in source=_internal with host=bss (the host name of my Splunk forwarder), but Splunk for Snort is not indexing the data. Any help on how to properly configure a universal forwarder to send data to the correct index for Splunk for Snort would be appreciated!

I configured my forwarder inputs.conf to the following:

[default]
host = bss

[monitor:///var/log/snort/snort.log.*]
disabled = false
sourcetype = snort_alert_full
source = snort
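
For reference: this monitor stanza sets no index, so anything it picks up would go to the forwarder's default index rather than the one the Splunk for Snort app searches. A quick way to check the settings the forwarder actually resolves for this input (assuming the default /opt/splunkforwarder install path) is btool:

/opt/splunkforwarder/bin/splunk btool inputs list "monitor:///var/log/snort/snort.log.*" --debug
# prints the effective monitor stanza merged from all inputs.conf files,
# including which index (if any) the events will be written to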

I configured my forwarder outputs.conf to the following:

[tcpout]
defaultGroup = default-auto1b-group

[tcpout:default-auto1b-group]
server = 10.10.20.103:997

[tcpout-server://10.10.20.103:997]
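
A quick way to confirm which receiver host and port the forwarder is actually trying to use (assuming the same /opt/splunkforwarder install path):

/opt/splunkforwarder/bin/splunk list forward-server
# lists the configured receivers and whether a connection to them is currently active;
# an entry of 10.10.20.103:997 here would point to a port mismatch on the indexer side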

Then I have configured my Splunk instance's inputs.conf to the following:

[default]
host = Splunk

[splunktcp://:9997]
connection_host = bss # host_name for my forwarder
sourcetype = snort_alert_full
source = tcp:9997
disabled = 0
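
To double-check that the indexer is really listening on 9997, independent of what is in this file, a couple of checks on the indexer side (assuming a default /opt/splunk install path):

/opt/splunk/bin/splunk btool inputs list splunktcp --debug   # effective splunktcp stanzas and which file each setting comes from
netstat -an | grep 9997                                      # should show a socket in LISTEN state on port 9997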

Splunk Web GUI:
--I have set Snort's index to: snort_alert
--I have set Snort's source type to: snort
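
To see whether any Snort events are arriving at all, and under which index, sourcetype, and source they land, a broad search on the indexer along these lines can help (host and index names as used above):

index=* host=bss | stats count by index, sourcetype, source

If Snort events show up here under a different index or sourcetype than the app expects, the forwarder's inputs.conf is the place to adjust.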

And my forwarder is monitoring the correct Snort files, based on the command ./splunk list monitor:

Monitored Files:
$SPLUNK_HOME/etc/splunk.version
/var/log/snort/snort.log.*
/var/log/snort/snort.log.1453951439
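
Listing the monitors only shows what is configured; to see whether the forwarder has actually read the files, the tailing status can be checked as well (same forwarder install path assumed):

/opt/splunkforwarder/bin/splunk list inputstatus
# reports, for each monitored file, how far the tailing processor has read it,
# e.g. whether /var/log/snort/snort.log.1453951439 has been read to the end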

I'm not sure what I am doing wrong. Let me know if you need any more information to figure out how I can configure my universal forwarder to send to the correct index so the Splunk for Snort app can index the data.


renjith_nair
SplunkTrust

In your forwarder outputs.conf, is there a typo in the port (997 where it should be 9997)?

Since you mention Splunk's inputs.conf, I assume that is on the receiver (indexer). If that's the case, you don't need to set sourcetype and source there. Please see below for reference.

At the forwarder side

inputs.conf

[default]
host = bss

[monitor:///var/log/snort/snort.log.*]
index = snort_alert
disabled = false
sourcetype = snort_alert_full
source = snort

outputs.conf

[tcpout]
defaultGroup = default-auto1b-group

[tcpout:default-auto1b-group]
server = 10.10.20.103:9997

At the receiver / indexer side

inputs.conf

[default]
host = Splunk # it should be your receiver host name

[splunktcp://:9997]
connection_host = bss # host name of the forwarder [this is optional]
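
One thing the index=snort_alert setting on the forwarder assumes is that the snort_alert index actually exists on the indexer (it may already have been created through the app or the GUI); events sent to a non-existent index are not indexed. If it still needs to be created, a sketch of doing so on the indexer (default paths assumed):

/opt/splunk/bin/splunk add index snort_alert
# or equivalently in $SPLUNK_HOME/etc/system/local/indexes.conf:
# [snort_alert]
# homePath   = $SPLUNK_DB/snort_alert/db
# coldPath   = $SPLUNK_DB/snort_alert/colddb
# thawedPath = $SPLUNK_DB/snort_alert/thaweddb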
Happy Splunking!

umang_solanki
New Member

Can someone please explain why inputs.conf needs to be configured on the receiver side (indexer) if the receiving port (9997) is already configured (perhaps through the GUI)?


w0lverineNOP
Path Finder

That was a typo. And yes, Splunk's inputs.conf is on the receiver (indexer). Still unsuccessful after restarting both instances. I still get data indexed from _internal, but it's only from the metrics.log file on the forwarder:

host=bss    source=/opt/splunkforwarder/var/log/splunk/metrics.log    sourcetype=splunkd
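
The forwarder's metrics.log (the file being indexed here) also records its tcpout connection attempts, so a search along these lines over the same _internal data shows whether the forwarder ever establishes a connection to the indexer, and to which address and port:

index=_internal host=bss source=*metrics.log group=tcpout_connections | stats count by name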

My Forwarder Health app also sees the forwarder client, but no data is coming into Splunk (other than _internal). I have restarted Splunk and re-downloaded Splunk for Snort, but I am getting this error in _internal:

index=_internal NOT CASE(TcpOutputProc) source!=*metrics.log NOT (INFO DeployedServerclass) NOT (INFO DC:UpdateServerclassHandler) host=bss  _raw="01-29-2016 05:43:48.949 -0600 ERROR TcpOutputFd - Connection to host=10.10.20.103:9997 failed" | cluster showcount=t | search cluster_count=239

My interpretation: the forwarder failed to connect to the indexer. But when I run netstat -an | grep 10.10.20.103 (the IP address of the indexer) I do get an established connection, and nmap shows the port is open on the indexer. I also have no router or firewall between them. Maybe the information above from the _internal index will help.
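
Since the TcpOutputFd error is written by the forwarder itself, its local splunkd.log is another place to check whether the connection failures are still happening after the port fix (forwarder install path taken from the metrics.log source above):

grep -E "TcpOutputFd|TcpOutputProc" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20
# repeated "Connection to host=10.10.20.103:9997 failed" lines mean the forwarder
# still cannot reach the indexer on that port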


renjith_nair
SplunkTrust

Is your Splunk enabled with SSL? Try telnet 10.10.20.103 9997 from the forwarder and see if the connection is established.
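
For reference, those checks from the forwarder host, plus an SSL variant in case the receiving port has SSL enabled (IP and port as used in this thread):

telnet 10.10.20.103 9997                      # plain TCP: should connect rather than being refused or timing out
openssl s_client -connect 10.10.20.103:9997   # if the receiving port uses SSL, this should complete a TLS handshake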

Happy Splunking!