Getting Data In

How to filter HAProxy 200 status logs from syslog?

jkostovich
Explorer

Hello

I need to filter out some logs from our HAProxy server. We have a Universal Forwarder installed on the HAProxy box, and it currently logs everything locally through syslog to /var/log/haproxy.log.

It logs every 200 response, which generates thousands of events an hour that we don't need. I have tried creating a props and transforms entry to drop them, as follows. Am I possibly not placing them in the right location?

props.conf 

[sourcetype::/var/log/haproxy.log]
TRANSFORM-null = setnull

transform.conf

[setnull]
REGEX =  200 #Looking for the string 200 in the log
DEST_KEY = queue
FORMAT = nullQueue

I thought putting this on the indexers would filter out all logs containing "200", but it did nothing.

Next I tried editing the haproxy syslog config itself.

When I added the following, it completely killed all log flow, not just the 200s.

if ( \
      $programname contains 'haproxy' and \
      not ($msg contains ' 200 ' ) \
   )
then -/var/log/haproxy.log

These are the two approaches most often recommended in my searches (editing the rsyslog config, and adding props and transforms entries), but I cannot get either to work.

Any help would be greatly appreciated!

0 Karma
1 Solution

jkostovich
Explorer

We have figured out the solution to this. The props and transforms approach was not working for us, so we went to the syslog config and filtered at the source.

$ModLoad imudp

#Opens Port 514 to listen for haproxy messages
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
#$template Haproxy, "msg%n"

:msg, regex, " 200 " ~
local0.=info -/var/log/haproxy.log

The regex was the answer for us. It matches lines containing a 200 status code and discards them, letting everything else through to the log file that Splunk is monitoring.
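For anyone who wants to sanity-check the discard pattern outside rsyslog, here is a minimal Python sketch. The sample lines are illustrative only, not real HAProxy output:

```python
import re

# The legacy rsyslog filter above (:msg, regex, " 200 " ~) discards any
# message whose text contains " 200 " with surrounding spaces.
discard = re.compile(r" 200 ")

samples = {
    "GET /index HTTP/1.1 200 1270": True,   # HTTP 200 -> dropped
    "GET /index HTTP/1.1 404 1270": False,  # kept
    "GET /index HTTP/1.1 503 2200": False,  # "2200" does not contain " 200 "
}
for line, should_drop in samples.items():
    assert bool(discard.search(line)) == should_drop
print("filter behaves as expected")
```

One caveat: the pattern matches " 200 " anywhere in the message, so a line whose byte count happens to be exactly 200 would also be dropped; anchoring on the status-code field position would be stricter.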

View solution in original post

0 Karma

ethan1el
New Member

Here is how we removed all 200 entries from haproxy.log. We simply added this line to /etc/rsyslog.conf:

 

if ($programname == "haproxy" and not ($msg contains " 200 ")) then /var/log/haproxy.log
0 Karma

woodcock
Esteemed Legend

The problem was your inline comment here: REGEX = 200 #Looking for the string 200 in the log
You must never do that; the comment was being treated as part of your REGEX.
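To see why the inline comment broke the filter, compare the pattern Splunk actually received with the intended one. A quick Python sketch (the log line is hypothetical, for illustration):

```python
import re

# Hypothetical HAProxy-style message; the real format may differ.
line = 'haproxy[123]: 10.0.0.5:4242 http-in web/s1 200 1270 "GET / HTTP/1.1"'

# What Splunk saw: the inline comment became part of the REGEX value.
broken = r"200 #Looking for the string 200 in the log"
# The intended pattern.
intended = r" 200 "

assert re.search(broken, line) is None  # never matches, so nothing is filtered
assert re.search(intended, line)        # matches the status code
```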


jkostovich
Explorer

I attempted the props.conf and transforms.conf fix on my indexers to no avail. All logs are still coming through, unless my REGEX of 200 isn't actually matching the ' 200 ' string that is in the haproxy log.

My rsyslog haproxy config looks like this.

$ModLoad imudp

#Opens Port 514 to listen for haproxy messages
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
$template Haproxy, "msg%n"

if $programname startswith 'haproxy' then /var/log/haproxy.log

#Defines the http log will be saved in haproxy.log
#Logs Everything
local0.=info -/var/log/haproxy.log;HaProxy
#Keeps logs in local host
local0.* ~

I have attempted to filter with $msg contains ' 200 ' in RainerScript, but it seems to filter out everything when I do.

0 Karma

jkat54
SplunkTrust

The props and transforms can go on the indexer or a heavy forwarder because both support sending to nullQueue.

See this: http://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Filter_event_data_...

However, the major issue I see is that your props.conf appears incorrect; try these settings instead:

props.conf

 [source::/var/log/haproxy.log]
 TRANSFORMS-null = setnull

Note there are two changes: sourcetype becomes source, and TRANSFORM becomes TRANSFORMS.

transforms.conf

 [setnull]
 REGEX =  200 
 DEST_KEY = queue
 FORMAT = nullQueue

No difference from what you had.
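One caveat worth noting about that REGEX: with a bare 200 and no surrounding spaces, any occurrence of the digits 200 routes the event to nullQueue, not just a 200 status. A Python sketch with illustrative (hypothetical) log lines:

```python
import re

# A bare "200" matches the digits anywhere in the event, not only the status.
loose = re.compile(r"200")

assert loose.search("http-in web/s1 200 1270")        # intended: 200 status
assert loose.search("http-in web/s1 404 1200")        # unintended: byte count
assert loose.search("10.0.0.5:42005 http-in 301 55")  # unintended: client port

# Surrounding the code with spaces, as in the accepted answer, is tighter:
strict = re.compile(r" 200 ")
assert strict.search("http-in web/s1 200 1270")
assert not strict.search("http-in web/s1 404 1200")
```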

jkat54
SplunkTrust

Note according to this: https://wiki.splunk.com/Community:HowIndexingWorks

UFs support nullQueue as well, but most of the other documentation says queue routing can only be done on a HF or IDX... your mileage may vary.

0 Karma

jkostovich
Explorer

Regarding placement of the props and transforms: do they go into the HAProxy add-on app folder, or into the indexer's etc/system/local folder?

0 Karma

alemarzu
Motivator

Beware of the spelling: the configuration file name is transforms.conf, not transform.conf. You could place it inside the add-on you created and deploy/install that to the indexer.

0 Karma

hgrow
Communicator

"I am possibly not placing them in the right location?"

From http://docs.splunk.com/Documentation/SplunkCloud/6.6.1/Forwarding/Routeandfilterdatad

"You can configure routing only on a heavy forwarder. "

If you placed the filtering configuration on your Universal Forwarder, that is the wrong place, since the UF does not parse the data. In other words, it's not looking into your data and applying your transforms.

You have to place the configuration on the first Splunk system that actually parses the data, which is most likely your indexer.

Sincerely

0 Karma