Hello
I currently need to filter out some logs from our HAProxy server. I have a Universal Forwarder installed on the HAProxy box, and it is currently logging everything locally through syslog to /var/log/haproxy.log.
It is logging all 200 responses, which generates thousands of log entries an hour that we don't need. I have tried creating a props and transforms configuration to do this, as follows. Am I possibly not placing them in the right location?
props.conf
[sourcetype::/var/log/haproxy.log]
TRANSFORM-null = setnull
transform.conf
[setnull]
REGEX = 200 #Looking for the string 200 in the log
DEST_KEY = queue
FORMAT = nullQueue
I thought putting this on the indexers would then filter out all logs with "200" in them, but it did nothing.
Next I tried editing the haproxy syslog config itself.
When I added the following code, it completely killed all log flow, not just the 200s.
if ( \
$programname contains 'haproxy' and \
not ($msg contains ' 200 ' ) \
)
then -/var/log/haproxy.log
These are the two solutions most often recommended in my searches (editing the rsyslog config, and creating props and transforms edits), but I cannot seem to get either of them to work.
Any help would be greatly appreciated!
We have figured out the solution to this. The props and transforms were not working for us, so we went to the syslog config and tried to filter at the source.
$ModLoad imudp
#Opens Port 514 to listen for haproxy messages
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
#$template Haproxy, "msg%n"
:msg, regex, " 200 " ~
local0.=info -/var/log/haproxy.log
The regex was the answer for us. It matches the 200 status code and filters those messages out, then lets what's left through to the log file that Splunk is monitoring.
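To sanity-check what that `:msg, regex, " 200 " ~` rule discards, here is a small Python sketch with invented haproxy-style lines (not from our actual log):

```python
import re

# The pattern from the rsyslog filter above: a 200 status code
# surrounded by spaces.
pattern = re.compile(r" 200 ")

samples = [
    "haproxy[123]: 10.0.0.1:4242 front back/srv1 0/0/1/2/3 200 1024 GET /ok",
    "haproxy[123]: 10.0.0.2:4243 front back/srv2 0/0/1/2/3 503 512 GET /err",
]

# The rsyslog rule discards matching lines, so only non-200s remain
# to be written to /var/log/haproxy.log.
kept = [line for line in samples if not pattern.search(line)]
for line in kept:
    print(line)
```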
Here is how we removed all 200 entries from haproxy.log. We simply added this line to /etc/rsyslog.conf:
if ($programname == "haproxy" and not ($msg contains " 200 ")) then /var/log/haproxy.log
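A slightly fuller sketch of the same idea in RainerScript (rsyslog v7+), with a `stop` so haproxy messages are not also written by later rules. This is an assumption-laden example, not a drop-in config — adjust it to your existing rules:

```
# Keep non-200 haproxy messages in the file Splunk monitors,
# then stop so haproxy messages do not hit any later rules.
if ($programname == "haproxy") then {
    if not ($msg contains " 200 ") then {
        action(type="omfile" file="/var/log/haproxy.log")
    }
    stop
}
```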
The problem was your inline comment here: REGEX = 200 #Looking for the string 200 in the log
You must never do that; the comment was being treated as part of your REGEX.
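You can see the effect with a quick Python check (the log line is invented for illustration; Splunk's .conf parser keeps inline `#` text as part of the setting's value):

```python
import re

# Invented haproxy-style event containing a 200 status code.
logline = "10.0.0.1:4242 front back/srv1 0/0/1/2/3 200 1024 GET /ok"

# What the setting actually became, comment and all.
broken = re.compile(r"200 #Looking for the string 200 in the log")
# What was intended.
intended = re.compile(r"200")

print(broken.search(logline))    # the broken regex never matches
print(intended.search(logline))  # the intended regex does match
```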
I attempted the props.conf and transforms.conf fix on my indexers to no avail. All logs are still coming through, unless my REGEX of 200 isn't actually finding the ' 200 ' string that is in the haproxy log.
My rsyslog haproxy config looks like this.
$ModLoad imudp
#Opens Port 514 to listen for haproxy messages
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
$template Haproxy, "msg%n"
if $programname startswith 'haproxy' then /var/log/haproxy.log
#Defines the http log will be saved in haproxy.log
#Logs Everything
local0.=info -/var/log/haproxy.log;HaProxy
#Keeps logs in local host
local0.* ~
I have attempted to filter by $msg contains ' 200 ' in the RainerScript, but it seems to filter out everything when I do so.
The props and transforms can go on the indexer or a heavy forwarder because both support sending to nullQueue.
However, the major issue I see is that you appear to have an incorrect props.conf; try these settings instead:
props.conf
[source::/var/log/haproxy.log]
TRANSFORMS-null = setnull
Note there are two changes... sourcetype becomes source, and TRANSFORM becomes TRANSFORMS.
transforms.conf
[setnull]
REGEX = 200
DEST_KEY = queue
FORMAT = nullQueue
No difference from what you had.
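One caveat, as an aside: REGEX = 200 matches "200" anywhere in the raw event, not just in the status-code field, so a 503 whose byte count happens to contain "200" would also be sent to nullQueue. Padding the pattern with spaces is safer. A quick Python illustration with invented events:

```python
import re

# Two invented haproxy-style events: a real 200, and a 503 whose
# byte count (2200) happens to contain the substring "200".
events = [
    "10.0.0.1:4242 front back/srv1 0/0/1/2/3 200 1024 GET /ok",
    "10.0.0.2:4243 front back/srv2 0/0/1/2/3 503 2200 GET /err",
]

loose = re.compile(r"200")     # REGEX = 200, as in the transforms.conf above
strict = re.compile(r" 200 ")  # space-padded status code

dropped_loose = [e for e in events if loose.search(e)]
dropped_strict = [e for e in events if strict.search(e)]

print(len(dropped_loose))   # both events would go to nullQueue
print(len(dropped_strict))  # only the real 200
```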
Note that according to this: https://wiki.splunk.com/Community:HowIndexingWorks
UFs support nullQueue as well, but most of the other documentation says queue routing can only be done on a heavy forwarder or indexer... your mileage may vary.
Regarding placement of the props and transforms: do they go into the haproxy add-on app folder, or into the indexer's etc/system/local folder?
Beware of the spelling: the configuration file name is transforms.conf, not transform.conf. You could place it inside the add-on you created and deploy/install that onto the indexer.
"I am possibly not placing them in the right location?"
From http://docs.splunk.com/Documentation/SplunkCloud/6.6.1/Forwarding/Routeandfilterdatad
"You can configure routing only on a heavy forwarder. "
If you placed the filtering configuration on your Universal Forwarder, that is the wrong place, since the UF does not parse the data. In other words, it is not looking into your data and applying your transformation.
You have to place the configuration on the first Splunk system that is actually parsing, which is most likely your indexer.
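For example, assuming a default install path, the files would live on the indexer at:

```
$SPLUNK_HOME/etc/system/local/props.conf
$SPLUNK_HOME/etc/system/local/transforms.conf
```

followed by a restart ($SPLUNK_HOME/bin/splunk restart) so the index-time settings take effect.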
Sincerely