Getting Data In

Why is sending syslog directly into Splunk so bad/wrong?

dshpritz
SplunkTrust

A lot of people just starting with Splunk will send data straight to a Splunk network input on a UDP port (udp:514). Splunk "Best Practices" dictate that there should be a separate syslog receiver in front of Splunk, and that Splunk should then tail the log files generated by that syslog application. But why?
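
For reference, the pattern in question is a plain network input defined in inputs.conf on the indexer. A minimal sketch (the sourcetype and connection_host values are just illustrative) looks like this:

  # inputs.conf on the indexer -- a direct UDP syslog listener
  [udp://514]
  sourcetype = syslog
  connection_host = ip

Binding to a port below 1024 also requires Splunk to run as root, which is one more reason people often put a syslog daemon in front.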

1 Solution

dshpritz
SplunkTrust

One reason is that UDP is stateless. That is, it is fire-and-forget, so the system sending the syslog message will just send off the message, and doesn't care if the message is received or not. This means that if the UDP listener isn't there, events get dropped, which is no good. Splunk can be something of a lumbering giant during startup and shutdown, so if the Splunk system receiving the syslog directly is being restarted, there could be a long period where syslog messages are dropped. Most syslog daemons, such as syslog-ng or rsyslogd, restart very quickly, and will usually not miss a beat.

By having Splunk follow files (one of the things it does best on inputs) you have the disk as a buffer to receive events that might otherwise get dropped. This means that you can restart Splunk, and it will pick up with the events right where it left off.
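
As a concrete sketch of that pattern: a syslog daemon such as rsyslog can listen on UDP 514 and write one file per sending host, and Splunk then monitors those files. The filename and directory layout below are assumptions, not a required convention:

  # /etc/rsyslog.d/10-splunk.conf (hypothetical file)
  $ModLoad imudp
  $UDPServerRun 514

  # write one file per sending host
  $template PerHostFile,"/var/log/remote-syslog/%HOSTNAME%/syslog.log"
  *.* ?PerHostFile
  # rsyslog 7+: stop processing so these events don't also land in the default log files
  & stop

Splunk then tails /var/log/remote-syslog/ with a monitor input and can pick the host name out of the path (see the inputs.conf sketch later in the thread).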

starcher
SplunkTrust

I posted my thoughts on this whole topic of syslog and Splunk here: http://www.georgestarcher.com/splunk-success-with-syslog/

mbreitbach
New Member

I have approached this slightly differently. I manage a network spread across a large geographic area, serviced via satellite links that can be down for long periods of time. I use the universal forwarder at each remote site to collect syslog via UDP and forward it over a TCP connection. This gives me several advantages:

  • Compression. This cuts my bandwidth usage for reporting in half or better, which matters when bandwidth costs run to thousands of dollars per Mb per month.
  • Encryption. Keeps my data secure in transit.
  • Queueing. The universal forwarder will queue days' worth of syslog data.

The universal forwarder is easy to set up, and once it is running we never have to restart it for any reason. This worked so well for our remote sites that we also set up a forwarder in our data centres just to buffer syslog data and forward it to our main Splunk server.
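
A rough sketch of that forwarder setup, with hypothetical server names and queue sizes (the right values depend on your link and outage windows):

  # inputs.conf on the universal forwarder -- local UDP syslog listener
  [udp://514]
  sourcetype = syslog
  connection_host = ip
  queueSize = 10MB
  # spill to disk while the satellite link is down
  persistentQueueSize = 10GB

  # outputs.conf on the same forwarder -- compressed TCP back to the indexer
  [tcpout]
  defaultGroup = primary_indexers

  [tcpout:primary_indexers]
  server = splunk-idx.example.com:9997
  # compression over the WAN; the receiving splunktcp input must also set compressed = true
  compressed = true
  # indexer acknowledgement, so queued events aren't discarded before they are safely received
  useACK = true
  # encryption in transit is configured with the SSL settings in outputs.conf
  # (sslCertPath, sslRootCAPath, sslPassword), omitted here for brevity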

donald_mccarthy
Explorer

I have done it both ways in the past, and I use a syslog-ng collector as my default for several reasons. When there is a legal obligation to keep syslogs for a certain amount of time, it is cheaper for me to gzip them on the syslog collector and move them off to tape in batch operations. This also makes legal feel warm and fuzzy, because the raw files are "more pure" in some legal opinions.

I have also had better experiences when using a syslog collector for areas that are geographically remote and do not have an indexer on-site.

Chubbybunny
Splunk Employee

Thank you!

dshpritz
SplunkTrust

Yes, you should have a syslog system in front of Splunk. The syslog daemon writes the events to files, and you have Splunk tail those files.
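
For example, with syslog-ng the receiving side might look roughly like this (paths and names are assumptions), and Splunk then monitors the resulting files:

  # syslog-ng.conf fragment -- listen on UDP 514, write one file per sending host
  source s_net {
      udp(ip(0.0.0.0) port(514));
  };

  destination d_per_host {
      file("/var/log/remote-syslog/${HOST}/syslog.log" create_dirs(yes));
  };

  log { source(s_net); destination(d_per_host); };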

molinarf
Communicator

I am just starting to use Splunk and have my network devices logging to UDP port 514. Based on what I am reading, this is not a good practice because of the nature of UDP and the risk of dropping data if and when Splunk is restarted. Is the best practice, then, to have what is essentially a front-end server that collects the syslog and then sends it on to Splunk?

alacercogitatus
SplunkTrust

Listen, and I shall bring you much merriment and wonder! 25 score and 14 ports ago, an unreliable protocol of the User Datagram realm visited upon the land of Splunktonia. Now, while Splunktonia did listen to the User Datagram visitor, it all sounded the same, for their languages twain shouldn't meet. All from the same place and type, the information being sent was. And then! Oh, the horror, when Splunktonia stopped listening! So then the information critical to war was lost! "Never again!" shouted the King. Upon these words a collector of sorts sprang from the bloody mud, and started to translate the incoming messages, for he spake many languages. He organized them based on their source, and type, and thusly presented them to the King of Splunktonia. For now it made sense! Oh, Bravo! So easy to disseminate information! and oh the categories! And now, when the King no longer listens, the collector shall, and nary a piece of information shall be lost.

Original answer:
It's a matter of availability. If Splunk is listening on 514 and you have to restart Splunk, what happens to the listener? It goes away. So for the entire time Splunk is down, you will lose events, especially UDP events. TCP may be able to recover if the sending application notices that its ACKs are failing. A syslog collector also lets you index the data with more granularity: instead of everything coming in as sourcetype=udp:514 host=MySplunkIndexer, you can dump the syslogs to files, parse the hosts from there, and set sourcetypes as needed.
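
To make that concrete: if the syslog collector writes files into a per-device-class, per-host directory tree, the monitor inputs might look like the sketch below. The directory layout and sourcetype names are only examples:

  # inputs.conf -- tail the files the syslog collector writes
  [monitor:///var/log/remote-syslog/firewalls/*/syslog.log]
  sourcetype = cisco:asa
  # host name comes from the 5th path segment: var/log/remote-syslog/firewalls/<host>
  host_segment = 5

  [monitor:///var/log/remote-syslog/linux/*/syslog.log]
  sourcetype = syslog
  host_segment = 5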
