Hello!
I am preparing for the architect exam and I have set up the following lab:
10.37.129.10 spl-search-head
10.37.129.11 spl-deployment-server
10.37.129.12 spl-indexer1
10.37.129.13 spl-indexer2
10.37.129.14 spl-forwarder1
10.37.129.15 spl-forwarder2
10.37.129.16 spl-forwarder3
10.37.129.17 spl-forwarder4
10.37.129.18 Checkpoint GAIA R77.30
All forwarders talk to the deployment server, and I have pushed an app named "sendtoindexer" to the forwarders with the following /opt/splunk/etc/deployment-apps/sendtoindexer/default/outputs.conf:
[tcpout: my_LB_indexers]
server=10.37.129.12:9997,10.37.129.13:9997
compressed=true
forceTimebasedAutoLB=true
autoLBFrequency=40
useACK=true
Then I configured the Checkpoint to send syslog over UDP 514 to forwarder1 and pushed an app named "syslogcheckpoint" through the deployment server to forwarder1 with the following /opt/splunk/etc/deployment-apps/syslogcheckpoint/default/inputs.conf:
[udp://10.37.129.18:514]
host=10.37.129.18
connection_host = ip
sourcetype=syslog
queueSize=900MB
persistentQueueSize=5GB
On forwarder1 I ran tcpdump and I can see the logs being delivered to the forwarder. Moreover, both indexer1 and indexer2 are listening on port 9997. If I run a search on the indexers (e.g. indexer1), it seems that logs are being delivered to indexer1:
Search: index="_internal" host="spl-forwarder1" syslog
11-06-2016 13:35:33.053 +0200 INFO Metrics - group=per_sourcetype_thruput, series="syslog", kbps=0.042025, eps=0.451624, kb=1.302734, ev=14, avg_age=0.000000, max_age=0
What is wrong in my configuration? Do I have to configure the indexers with a props.conf? Why are the logs not indexed, even though they are sent to the indexers over port 9997?
Thank you in advance for your help!
I changed outputs.conf from
[tcpout: my_LB_indexers]
server=10.37.129.12:9997,10.37.129.13:9997
forceTimebasedAutoLB=true
autoLBFrequency=40
compressed=true
to:
[tcpout:my_LB_indexers]
server=10.37.129.12:9997,10.37.129.13:9997
forceTimebasedAutoLB=true
autoLBFrequency=40
compressed=true
(I deleted the space after "tcpout:") and it seems that this typo was the issue!...
I still can't believe it; I keep refreshing the page because the indexers are finally growing!...
Yep, behaving as per your outputs.conf.
All is looking well there, my friend.
So, let's take a step back.
If you go directly to one of your indexers and search for these logs... anything???
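For example, directly on indexer1, something along these lines over All Time should surface anything that actually made it in (the host value is just your Checkpoint's IP from the lab):
index=* (host="10.37.129.18" OR sourcetype=syslog)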
No, nothing. I even tried "*", but nothing seems to have been indexed...
What is the output of this search, obviously changing your values accordingly:
index=_internal source=*metrics.log host="<yourindexers>" group=per_sourcetype_thruput series="<yourSourcetype>"
Also, if you go to Settings > Indexes... is your main index growing???
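Or, from the search head, a quick alternative to clicking through the UI (eventcount reports the event totals per indexer):
| eventcount summarize=false index=main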
Main index is 1MB for both indexers. 😞
Search output is for indexer1:
11/7/16
7:31:36.380 PM
11-07-2016 19:31:36.380 +0200 INFO Metrics - group=per_sourcetype_thruput, series="syslog", kbps=0.048009, eps=0.483869, kb=1.488281, ev=15, avg_age=0.000000, max_age=0
host = spl-forwarder1
source = /opt/splunkforwarder/var/log/splunk/metrics.log
sourcetype = splunkd
11/7/16
7:31:05.380 PM
11-07-2016 19:31:05.380 +0200 INFO Metrics - group=per_sourcetype_thruput, series="syslog", kbps=0.078911, eps=0.838695, kb=2.446289, ev=26, avg_age=0.000000, max_age=0
host = spl-forwarder1
source = /opt/splunkforwarder/var/log/splunk/metrics.log
sourcetype = splunkd
11/7/16
7:30:34.379 PM
11-07-2016 19:30:34.379 +0200 INFO Metrics - group=per_sourcetype_thruput, series="syslog", kbps=0.065022, eps=0.677440, kb=2.015625, ev=21, avg_age=0.000000, max_age=0
host = spl-forwarder1
source = /opt/splunkforwarder/var/log/splunk/metrics.log
sourcetype = splunkd
11/7/16
7:29:32.380 PM
11-07-2016 19:29:32.380 +0200 INFO Metrics - group=per_sourcetype_thruput, series="syslog", kbps=0.060460, eps=0.624364, kb=1.839844, ev=19, avg_age=0.000000, max_age=0
For indexer2:
11/7/16
7:32:38.380 PM
11-07-2016 19:32:38.380 +0200 INFO Metrics - group=per_sourcetype_thruput, series="syslog", kbps=0.058120, eps=0.580630, kb=1.801758, ev=18, avg_age=0.000000, max_age=0
host = spl-forwarder1
source = /opt/splunkforwarder/var/log/splunk/metrics.log
sourcetype = splunkd
11/7/16
7:32:07.380 PM
11-07-2016 19:32:07.380 +0200 INFO Metrics - group=per_sourcetype_thruput, series="syslog", kbps=0.048010, eps=0.483882, kb=1.488281, ev=15, avg_age=0.000000, max_age=0
host = spl-forwarder1
source = /opt/splunkforwarder/var/log/splunk/metrics.log
sourcetype = splunkd
11/7/16
7:30:03.380 PM
11-07-2016 19:30:03.380 +0200 INFO Metrics - group=per_sourcetype_thruput, series="syslog", kbps=0.050372, eps=0.516128, kb=1.561523, ev=16, avg_age=0.000000, max_age=0
It's starting to get a little bit crazy... :S
Hmmm.....back to the drawing board...
How about this...
Rebuild your inputs.conf to accept from any host, rather than specifying the sender. Let's see if wide open UDP changes anything....
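Something like this minimal stanza, just as a sketch (no source restriction, everything else left at defaults):
[udp://514]
connection_host = ip
sourcetype = syslog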
I changed the recipient of the logs to forwarder2 and explicitly edited "/opt/splunkforwarder/etc/system/local/inputs.conf" to include:
[udp:514]
sourcetype = syslog
but still no luck on the indexers....
Still working on this one?
Let's look at btool before I go set it up in my lab. I have a very similar setup, but I catch the logs with rsyslog and then tail the file. Try:
splunker@n00b-splkufwd-01:/opt/splunkforwarder/bin$ ./splunk btool inputs list udp --debug
/opt/splunkforwarder/etc/system/default/inputs.conf [udp]
/opt/splunkforwarder/etc/system/default/inputs.conf _rcvbuf = 1572864
/opt/splunkforwarder/etc/system/default/inputs.conf connection_host = ip
/opt/splunkforwarder/etc/system/local/inputs.conf host = n00b-splkufwd-01
/opt/splunkforwarder/etc/system/default/inputs.conf index = default
Wow... can't give up now! MUST. KNOW. WHY.
Let's check btool:
./splunk btool inputs list --debug
I'm going to set it up in my lab. I use rsyslog to write to disk, but let me set up UDP as well.
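For reference, the rsyslog-to-disk approach is roughly this (file names, paths, and the filter value here are illustrative only):
# /etc/rsyslog.d/10-checkpoint.conf  (hypothetical file)
$ModLoad imudp
$UDPServerRun 514
:fromhost-ip, isequal, "10.37.129.18"    /var/log/checkpoint/checkpoint.log
& stop
...and then the forwarder just monitors that file, e.g.:
[monitor:///var/log/checkpoint/checkpoint.log]
sourcetype = syslog
host = 10.37.129.18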
"Connection to host=10.37.129.12:9997 failed"
"Connect to 10.37.129.12:9997 failed. Connection refused"
"Connection to host=10.37.129.13:9997 failed"
"Connect to 10.37.129.13:9997 failed. Connection refused"
"Applying quarantine to ip=10.37.129.13 port=9997 _numberOfFailures=2"
"Applying quarantine to ip=10.37.129.12 port=9997 _numberOfFailures=2"
Can you telnet to your indexers from the forwarders on 9997? What's the timestamp on those log entries?
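For example, from forwarder1 (nc -vz works too if telnet isn't installed):
telnet 10.37.129.12 9997
telnet 10.37.129.13 9997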
splunker@n00b-splkufwd-01:/opt/splunkforwarder/var/log/splunk$ cat splunkd.log | grep TcpOutputProc
Hey Andresito123!
Nice Lab setup!
The experience of working through these items will serve you well in the exam and beyond!
Your config looks good, and the fact that _internal logs are making it to the indexers means your forwarding/receiving setup is working! Searching index=_internal
and making sure all your hosts are present is a great place to start all your forwarder troubleshooting. How about searching index=_internal host=<yourforwarder> error OR warn
Anything interesting?
Splunk indexers have syslog as a default props, so you should be good there.
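If you want to confirm that, you can check with btool on an indexer, e.g.:
/opt/splunk/bin/splunk btool props list syslog --debug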
Now, let's work from the forwarder and see what we can discover:
I noticed that your input lacks an index. Are you just trying to send to the default index? Out of the box, that would be the 'main' index. If you search index=main
over All Time... you definitely aren't receiving anything?
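For example, run something like this over All Time in the time picker (the stats split is just to make any stray events obvious):
index=main | stats count by host, sourcetype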
The forwarder has some really great debug commands. Try these from /opt/splunkforwarder/bin:
./splunk list forward-server
This will confirm your active forwards (you already did this by checking _internal, but I figured I'd share it anyhow, as it is very useful).
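The output should list both of your indexers as active, roughly like this (exact formatting can vary by version):
Active forwards:
    10.37.129.12:9997
    10.37.129.13:9997
Configured but inactive forwards:
    None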
./splunk list inputstatus
You should see your UDP input there...how does it look?
Can I assume you are running the forwarder as root? You would need root to listen on ports lower than 1024.
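A quick way to check which user splunkd is running as:
ps -ef | grep splunkd | grep -v grep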
How about the output of netstat -tulpn
on your forwarder (I assume you are on *nix)? Is splunkd listening on 514?
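For example (run it as root so -p can resolve the owning process):
sudo netstat -tulpn | grep :514
You'd want to see splunkd in the last column for the UDP 514 line.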
When you pushed the app, did you configure it to restart the forwarder? In the Deployment Server it should show "after installation - Enable app, restart splunkd". Have you already tried restarting the forwarder manually?
If you check all these and still don't see anything, let me know and we'll move along...