Getting Data In

How do I use syslog-ng to replace Splunk TCP or UDP inputs?

gjanders
SplunkTrust

This is a question I already have the answer to; I'm posting it because I spent a number of hours working out how to use syslog-ng to replace the standard TCP/UDP listeners in Splunk.

Furthermore, I found the syslog-ng manual less intuitive than I would have liked, and in my opinion the Splunk blog posts did not provide enough examples.

Please see the answer below for an example configuration.

1 Solution

gjanders
SplunkTrust

After reading a few Splunk blog posts, such as High Performance Syslogging For Splunk Using syslog-ng (part 2 is here), I found that I was still confused about the process of getting syslog-ng running.

Here are some notes I have from getting syslog-ng running on a Red Hat Linux server:

Configuring the main syslog-ng configuration file (syslog-ng.conf)

The file /etc/syslog-ng/syslog-ng.conf should contain:

@include "/etc/syslog-ng/buckets.d"

I believe Red Hat installations have this by default; there is also a line mentioning:

# Source additional configuration files (.conf extension only)
@include "/etc/syslog-ng/conf.d/*.conf"

Additional custom configuration to ensure the Splunk user can read the created log files/directories

My first configuration change was to create a common configuration file in the conf.d directory:
/etc/syslog-ng/conf.d/splunk.conf (or similar)

options {
    create-dirs(yes);
    dir-group(splunk);
    dir-perm(0770);
    group(splunk);
    perm(0770);
};

I was using the user "splunk" to run the heavy forwarder; a universal forwarder would also work, as its primary purpose is to read the files created by the syslog-ng listeners.
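
To double check the permissions are behaving as expected, something like the following can be run as a quick sanity check (the path here is just the example directory used by the listeners below):

sudo -u splunk ls -lR /var/log/syslogngfeeds

If the Splunk user can list and read the files, the forwarder will be able to monitor them.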

Example TCP, TCP + SSL and UDP listeners

Here's an example TCP listener; this file was created as:
/etc/syslog-ng/buckets.d/tcp8100

source s_tcp8100 { tcp(port(8100) max-connections(100)); };
destination d_tcp8100 { file("/var/log/syslogngfeeds/datapower/datapower_${HOUR}.log"); };
log { source(s_tcp8100); destination(d_tcp8100); };

Due to the size of the logs I chose to create a new log file every hour; you can also use other macros for a per-day log file or similar, as shown in the sketch below. Further information is available in the syslog-ng manual.
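
As a sketch only (using the same example path as above), a per-day file would swap the ${HOUR} macro for the date macros:

destination d_tcp8100_daily { file("/var/log/syslogngfeeds/datapower/datapower_${YEAR}${MONTH}${DAY}.log"); };

The d_tcp8100_daily name is just an illustration; any destination name works as long as the log {} statement references it.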

Some servers supported TCP + SSL; in syslog-ng you can configure the SSL settings per listener port, which is quite useful.

I created the file (again under buckets.d):
tcpssl8310

source s_tcpssl8310 {
    tcp(port(8310) max-connections(400)
        tls(key_file("/etc/syslog-ng/SSL/splunk-indexer.key")
            cert_file("/etc/syslog-ng/SSL/splunk-indexer.cer")
            ca_dir("/etc/syslog-ng/SSL/CACertificate.pem")
            peer-verify(optional-untrusted)));
};
destination d_tcpssl8310 { file("/var/log/syslogngfeeds/prd_X/X_${HOUR}.log"); };
log { source(s_tcpssl8310); destination(d_tcpssl8310); };

Note that I could only get the above configuration working on RHEL7; RHEL6 did not work with SSL at all.
Furthermore, I found that if syslog-ng did not have read permissions on the certificate/key files, it would crash as soon as a client attempted to connect to the SSL port (it worked fine until then).

Finally, if you want SSLv3 disabled: newer syslog-ng releases can disable SSLv3 directly, but because RHEL7 ships syslog-ng 3.5.x you must use the cipher suite to control SSL versions:

cipher_suite("ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:+HIGH:!SSLv2:!SSLv3")

This disables everything except TLS 1.2, so use it carefully!
ssl-options() is available in syslog-ng 3.7 or newer to disable SSLv3 or other chosen protocols.
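
As a sketch, assuming syslog-ng 3.7 or newer is in use, the TLS listener above could use ssl-options() instead of the cipher suite trick:

source s_tcpssl8310 {
    tcp(port(8310) max-connections(400)
        tls(key_file("/etc/syslog-ng/SSL/splunk-indexer.key")
            cert_file("/etc/syslog-ng/SSL/splunk-indexer.cer")
            ca_dir("/etc/syslog-ng/SSL/CACertificate.pem")
            peer-verify(optional-untrusted)
            # ssl-options() requires syslog-ng 3.7+, not the 3.5.x shipped with RHEL7
            ssl-options(no-sslv2, no-sslv3)));
};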

Finally, a number of devices/servers only support UDP traffic; here is an example UDP file:
udp9100

source s_udp9100 { udp(port(9100)); };
destination d_udp9100 { file("/var/log/syslogngfeeds/solarwinds/solarwinds_${HOUR}.log"); };
log { source(s_udp9100); destination(d_udp9100); };

Customising the file output from syslog-ng

The ${HOUR} macro can be swapped for, or combined with, other macros; it adds an hour number such as 00, 01 or 20 to each file name. Combined with a find -type f -mmin ... -exec rm command (see the crontab example under Cleaning up the log files below), this lets you rotate the logs every X hours.

Note that the default output looks like this:

Sep  7 18:34:25 10.139.130.134 Wed Sep 7 18:34:25 AEST 2016

In the above case the "Wed Sep 7 18:34:25" portion came from the source system, not from syslog-ng.

I had some cases where I had to disable the parsing of log data because syslog-ng was rejecting the data; for this you can use flags(no-parse). Splunk TCP ports never rejected data in my testing:

source s_tcpssl8310 {
    tcp(port(8310) max-connections(600)
        tls(key_file("/etc/syslog-ng/SSL/x.key")
            cert_file("/etc/syslog-ng/SSL/x.cer")
            ca_dir("/etc/syslog-ng/SSL/CACertificate.pem")
            peer-verify(optional-untrusted))
        flags(no-parse));
};
destination d_tcpssl8310 { file("/var/log/syslogngfeeds/x_${HOUR}.log" template("$MSGONLY\n")); };
log { source(s_tcpssl8310); destination(d_tcpssl8310); };

In the above example I've also changed the template settings to remove the date/time data from the output file; the template string $MSGONLY is just the message, and $HOST can also be used. The default setting is:

template("$ISODATE $HOST $MSGHDR$MSG\n");

Testing the syslog-ng inputs

For UDP testing purposes I did this:

echo -n `date` | nc -u -w1 serverName udpPortNumber

Note that netcat has to be installed; even the nmap-compatible version available on RHEL 7 worked fine.

TCP ports can be tested via telnet:

telnet <syslogNG server> <TCP port>

Typing anything in whilst connected should end up in the syslog-ng destination file.

SSL/TLS connections can be tested via a web browser: if you point Firefox, Chrome or IE at the syslog-ng server:port, it should hang while loading a blank page, but at the same time you should see some output in the syslog-ng destination log file.
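
If the openssl command line client is installed, openssl s_client is another way to test the TLS listener, and it also shows the certificate that syslog-ng presents (the server name here is a placeholder):

openssl s_client -connect syslogngServer:8310

Once the handshake completes, anything typed into the session should appear in the destination log file.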

Reading the log files back into Splunk

I hard-coded the source as syslog: and the sourcetype as the appropriate type I was expecting on that syslog-ng port, and used wildcards in the monitor path to pick up the per-hour log files.
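
As a rough sketch only (the index, source and sourcetype values here are placeholders, not the ones I actually used), the monitor stanza in inputs.conf on the forwarder looked something like:

[monitor:///var/log/syslogngfeeds/datapower/datapower_*.log]
source = syslog:datapower
sourcetype = syslog
index = main
disabled = false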

Cleaning up the log files

I used crontab:
10 * * * * find /var/log/syslogngfeeds -type f -mmin +241 -exec rm '{}' \;

This can be customised to your individual needs. Since the previous solution streamed directly via Splunk, I decided that keeping even a few hours of log files on disk provides enough buffer to cover any outage of the Splunk heavy forwarders running on the same server.

You could choose to keep the log files for days if you have appropriate disk space, or only roll them weekly if you prefer!
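
For example, to keep roughly a week of files instead, the same approach could switch from -mmin to -mtime (the schedule and retention here are arbitrary):

10 0 * * * find /var/log/syslogngfeeds -type f -mtime +7 -exec rm '{}' \;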

Any feedback is welcome...

Monitoring

SA-syslog_collection provides a script to gather stats on the UDP interface.

Furthermore, syslog-ng on RHEL servers dumps log statistics into /var/log/messages by default, showing how many events were processed for each source/destination; either can be used to monitor syslog-ng via Splunk...
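
If the syslog-ng-ctl utility is present on the server, the same counters can also be pulled on demand (a simple check I would expect to work, rather than something from my original setup):

syslog-ng-ctl stats

The output includes processed/dropped counts per source and destination, which can be indexed or checked by a script as an alternative to parsing /var/log/messages.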


frobert
New Member

Hi garethatiag, I've seen that you already solved the problem, but I would be interested in what information was missing from the syslog-ng manual, or what problems/difficulties you had with it. Were you missing Splunk-specific information, or something more general? Your feedback would be greatly appreciated, so we know what we could improve in the syslog-ng documentation.
Kind Regards,

Robert


gjanders
SplunkTrust

So I found a few different sources of information for syslog-ng; there is of course the official documentation on the balabit.com website (here is version 3.8).

However I found this documentation challenging to read, and I made quite a few errors while trying to simply create a TCP listener that would replace my Splunk TCP listener.

For example if you refer to "Using Syslog-ng with Splunk" it advises that "A Splunk instance can listen on any port for incoming syslog messages. While this is easy to configure, it’s not considered best practice for getting syslog messages into Splunk."

And it provides a nice UDP example which is very simple, but no TCP or TCP SSL examples; otherwise it is quite a good guide. I initially used the high performance syslog guide, which has much less information than the other blog post I just found...

Also, since I did not have root access, I was creating the files under buckets.d and using individual configuration files, which keeps the main config file very simple...


frobert
New Member

Thanks a lot!

MuS
SplunkTrust

Hi garethatiag,

Read this great post by my fellow SplunkTrust member starcher: http://www.georgestarcher.com/splunk-success-with-syslog/
It will show you exactly how it can be done and why it's better to use a syslog server instead of Splunk.

Hope this helps ...

cheers, MuS


gjanders
SplunkTrust

Thanks, I just answered the question and accepted it, but that post is also useful! I would consider my answer more comprehensive...
