Getting Data In

Forwarder failing to forward SNMP traps captured on UDP input

mloven
Path Finder

Hi all!

Ok, so here's my situation.

All Splunk software listed below is v4.3.

I've installed a forwarder on a Linux server using the instructions found here. On the forwarder, I'm listening for SNMP traps on port 163/udp. If I run a tcpdump on the forwarder, I can see the traps coming in just fine.

On the receiver (also a Linux server), I can run a tcpdump on port 9997 and see some messages coming in every 20 seconds or so; I assume these are heartbeats. None of the trap messages are being forwarded, and searching in the Splunk UI shows me no messages and no data sources.

My /opt/splunkforwarder/etc/apps/search/default/inputs.conf has only this stanza:

[udp://localhost:163]
index = main
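
For what it's worth, the host portion of a UDP stanza in inputs.conf restricts which sending host the input accepts packets from, while a stanza with only a port accepts packets from any sender. A minimal sketch of that alternative (the sourcetype name is just an illustrative choice):

[udp://163]
index = main
sourcetype = snmp_traps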

Searching for

index=_internal host=myforwarderhostname

doesn't show me anything that looks like an error.

I've checked all of the logs in /opt/splunkforwarder/var/log/splunk/ and don't see anything in there that looks like an error either.
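
In case it's useful to anyone retracing these steps, a quick way to scan those logs for problems (the grep pattern is just an example):

# look for recent warnings or errors in the forwarder's main log
grep -iE "warn|error" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -50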

I'm really just not sure where to look at this point.

Any suggestions?

1 Solution

dwaddle
SplunkTrust

I would never expect this to work AT ALL. Splunk does not speak SNMP and does not understand the ASN.1 format of an SNMP trap. You need an intermediary like NET-SNMP's snmptrapd to help with this: let snmptrapd listen on udp/163 and write the traps (formatted as text) to a file, which Splunk does understand.

This approach is well-covered in http://docs.splunk.com/Documentation/Splunk/latest/Data/SendSNMPeventstoSplunk
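
For anyone following along, a minimal sketch of that setup, assuming net-snmp is installed (the file paths, community string, and sourcetype name are just examples):

# /etc/snmp/snmptrapd.conf -- accept and log traps sent with the "public" community
authCommunity log public

# run snmptrapd listening on udp/163, writing formatted traps to a text file
snmptrapd -Lf /var/log/snmptrapd.log udp:163

# inputs.conf on the forwarder -- monitor the text file instead of the raw UDP stream
[monitor:///var/log/snmptrapd.log]
index = main
sourcetype = snmptrapd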

mloven
Path Finder

dwaddle - thanks for pointing me in the right direction with the snmptrap doc. While it didn't resolve the underlying issue that I had, it was a crucial first step that I had missed.

mloven
Path Finder

Ok... so I'm pretty sure this is solved.

I ended up recreating the VM that the indexer was installed on (for a different reason, not because of the Splunk issue). After reinstalling Splunk, I set it to receive on port 9997 and restarted everything, and it worked fine the first time.

Not sure what the issue was on the other instance, but whatever it was, it seems to be resolved now.

Thanks everyone for your help!

Mike

mloven
Path Finder

Sorry to blatantly bump this, but...

bump.

If I'm missing any vital info that would help you guys determine what the problem is, don't hesitate to tell me...

Thanks.

mloven
Path Finder

And it should be noted that traps are coming in to the device and being written to the log file at a rate of ~30 a minute.

Anyone have any idea what I'm missing?

mloven
Path Finder

Ok, so I'm thoroughly confused now...

On the forwarder, I can tail the log file that Splunk is supposed to monitor and see the traps coming in. But if I do a tcpdump on the forwarder, the events being sent out don't match what's in the log file. They look like heartbeats or something, not actual traps: they contain the name of the forwarder and the log file that is supposed to be monitored, but none of the traps from the file.

That said, there are some traps that have come through. If I look back over the last 24 hours in search, I can see several hundred traps.
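
For what it's worth, a universal forwarder sends Splunk's own "cooked" data format to the indexer, so the raw trap text won't necessarily be readable in a capture on port 9997. A couple of example captures to compare the two sides (the interface names are assumptions):

# on the forwarder: confirm traps arriving on the loopback interface
tcpdump -i lo -A udp port 163

# on the forwarder: confirm traffic actually leaving for the indexer
tcpdump -i eth0 -A host 10.43.29.212 and tcp port 9997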

mloven
Path Finder

Sorry for how long it took me to reply... I got tied up with other things.

Here's where the issue stands now:

I've followed the information in the link that dwaddle provided. Extremely helpful. I've now got snmptrapd intercepting the traps and writing them to a file, and the Splunk forwarder watching that file.

Unfortunately, I'm still having a weird issue. The traps are constantly coming in, but they all show up as arriving at the same time. So if I look back at the last hour, I see no traps, but if I look at 2pm-3pm, I see, say, 112 events. Then, if I refresh, that number increases, but the new events still land at that same time.

Thoughts?
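
One possible explanation (an assumption on my part, not something confirmed in this thread) is that the snmptrapd log lines don't carry a timestamp Splunk can parse, so every event gets stamped with a guessed time. A sketch of a workaround in props.conf on the indexer, keyed to the example sourcetype above, that simply stamps each event with the time it is indexed:

[snmptrapd]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false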

hexx
Splunk Employee

Ok, so the problem is definitely with getting the SNMP traps into Splunk using the UDP input on port 163.

Has this ever worked? Do you see any errors from the UDPInputProcessor channel in splunkd.log there? It might be interesting to set that channel to DEBUG in $SPLUNK_HOME/etc/log.cfg and restart Splunk to see if anything shows up.
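
If it helps, a sketch of what that change might look like (the category name here just mirrors the channel name mentioned above):

# in $SPLUNK_HOME/etc/log.cfg on the forwarder
category.UDPInputProcessor=DEBUG

# after restarting, watch for UDP input activity in splunkd.log
grep UDPInputProcessor /opt/splunkforwarder/var/log/splunk/splunkd.log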

mloven
Path Finder

Oh, I forgot about adding a local file. I did a splunk add oneshot /var/log/messages and that seemed to add fine. I can see that data in the UI.

mloven
Path Finder

My outputs.conf (at /opt/splunkforwarder/etc/local/) is:

[tcpout]
disabled=false
defaultGroup = 10.43.29.212_9997

[tcpout:10.43.29.212_9997]
server = 10.43.29.212:9997

[tcpout-server://10.43.29.212:9997]

And that is the correct ip and port.
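
As a quick cross-check of that configuration, the forwarder CLI can report whether the output is actually active (paths per the install described above):

# show active vs. configured-but-inactive forward targets
/opt/splunkforwarder/bin/splunk list forward-server

# show the merged outputs.conf settings and which file each one comes from
/opt/splunkforwarder/bin/splunk btool outputs list --debug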

The SNMP traps are being sent by another app on the same server. I've got it configured to send traps to port 163 (because the app itself is already listening for traps on port 162). And again, I can see those traps coming in if I do a tcpdump on the loopback interface on port 163, so I'm pretty sure the traps are arriving fine.

That outputs.conf is the stock file that was created after I added the forward-server. I did add the "disabled=false" line in a fit of troubleshooting based on an answer from another question on this board. It didn't seem to change anything.

hexx
Splunk Employee

I would advise splitting this down the middle: figure out whether this is a forwarder-to-indexer issue or an inputs issue. To do this, you could simply use "splunk add oneshot" on the forwarder to index a local file and check that it makes it to the indexer. If it does, then you know your problem is with your SNMP trap input. What exactly are you using to get those traps indexed as events? What sends the traps as a UDP stream to port 163?
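
To make that concrete, a rough sketch of the oneshot test (the local file and index are just examples), followed by the search to run on the indexer:

# on the forwarder: index a local file once, through the normal output pipeline
/opt/splunkforwarder/bin/splunk add oneshot /var/log/messages -index main

# on the indexer: confirm the events arrived from the forwarder
index=main host=<forwarder hostname> source="/var/log/messages"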

sfleming
Splunk Employee

Have you defined outputs.conf on the forwarder?
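
For reference, the usual way to create that file on a forwarder is with the CLI, which writes the tcpout stanzas for you (the address here matches the indexer shown elsewhere in this thread):

# on the forwarder: point output at the indexer (creates/updates outputs.conf)
/opt/splunkforwarder/bin/splunk add forward-server 10.43.29.212:9997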
