Splunk Enterprise

Splunk Light and Universal Forwarder: nullQueue not working

enderless
New Member

Running a Splunk Light instance with Linux universal forwarders, and I can't seem to filter out data. From reading the docs, I understand that UFs do not support/apply custom props/transforms, so I've configured the following on my indexer:

# pwd
/opt/splunk/etc/system/local

props.conf

[source::/var/log/syslog]
TRANSFORMS-set= snmpdSetNull

transforms.conf

[snmpdSetNull]
FORMAT=nullQueue
DEST_KEY=queue
REGEX = snmpd\[[0-9]{1,}\]\:\ Connection\ from\ UDP\:

From what I understand, these should be picked up immediately for new data, but I have both manually refreshed by hitting http://splunkUrl:8000/en-US/debug/refresh/ and fully bounced the Splunk instance. New events are still being indexed and are searchable, in the following format:

Apr 29 12:58:38 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52282->[10.10.10.31]:161
Apr 29 12:58:38 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52283->[10.10.10.31]:161
Apr 29 12:58:38 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52284->[10.10.10.31]:161
Apr 29 12:58:38 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52285->[10.10.10.31]:161
Apr 29 12:58:38 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52286->[10.10.10.31]:161
Apr 29 12:58:38 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52287->[10.10.10.31]:161
Apr 29 12:58:38 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52288->[10.10.10.31]:161

Regex appears valid as I can check with rex and match events:

host=serverName | rex field=Event "(?<snmpd>snmpd\[[0-9]{1,}\]\:\ Connection\ from\ UDP\:)"

I want to completely drop events matching the regex so they are never indexed. From what I've read, UFs will still send the data across the wire, but the nullQueue transform above should prevent the events from being indexed. I'm still seeing new events populate, though, so I'm wondering if it's a syntax issue, whether my props/transforms are being overridden somewhere, or whether this is supported at all with my current setup. Any help would be much appreciated!
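As a sanity check outside Splunk (not part of the original post), the transform's REGEX can be exercised against one of the raw events above. Splunk transforms use PCRE, but this particular pattern contains no PCRE-only syntax, so Python's re module should behave identically:

```python
import re

# The REGEX from transforms.conf, verbatim (the escaped spaces and colons
# are redundant but harmless in both PCRE and Python re).
pattern = re.compile(r"snmpd\[[0-9]{1,}\]\:\ Connection\ from\ UDP\:")

# A raw event copied from the search results above.
event = ("Apr 29 12:58:38 serverName snmpd[1153]: Connection from UDP: "
         "[10.10.10.48]:52282->[10.10.10.31]:161")

# Splunk applies the transform when the regex matches anywhere in _raw,
# so re.search (not re.match) is the right analogue.
print(bool(pattern.search(event)))  # True
```

Since the pattern matches, the regex itself is unlikely to be the reason events are still being indexed.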

/opt/splunk/bin/splunk cmd btool props list

...
...
[source::/var/log/syslog]
ANNOTATE_PUNCT = True
AUTO_KV_JSON = true
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE = True
CHARSET = UTF-8
DATETIME_CONFIG = /etc/datetime.xml
HEADER_MODE =
LEARN_SOURCETYPE = true
LINE_BREAKER_LOOKBEHIND = 100
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = True
TRANSFORMS =
TRANSFORMS-set = snmpdSetNull
TRUNCATE = 10000
detect_trailing_nulls = false
maxDist = 100
priority =
sourcetype =

...
...

/opt/splunk/bin/splunk cmd btool transforms list

...
...
[snmpdSetNull]
CAN_OPTIMIZE = True
CLEAN_KEYS = True
DEFAULT_VALUE =
DEST_KEY = queue
FORMAT = nullQueue
KEEP_EMPTY_VALS = False
LOOKAHEAD = 4096
MV_ADD = False
REGEX = snmpd\[[0-9]{1,}\]\:\ Connection\ from\ UDP\:
SOURCE_KEY = _raw
WRITE_META = False
...
...

_olivier_
Explorer

Hi @enderless,

I'm facing the same issue. Did you find a way to drop your events?

Thanks

PickleRick
SplunkTrust

Please don't dig up a thread that is several years old. Create your own, describe your environment, what your problem is, and what you've tried so far (your problem might actually be something completely different from the one in this thread). You can include a link to this thread for reference that your symptoms are similar.


DEAD_BEEF
Builder

As you noted, a UF cannot apply transforms to route events to the nullQueue, so you need to monitor the data with a UF and then send it to your indexer for parsing.

One thing that looks odd: your props.conf uses a [source::...] stanza whose path implies the logs live on the indexer itself, which makes me wonder why you have a UF at all. I presume you already declare the sourcetype in your UF's inputs.conf, so the indexer's props.conf should just use [your_sourcetype] instead. If this works, please accept this as the answer.

UF inputs.conf

[monitor://...path_to_logs]
index = my_index
sourcetype = my_sourcetype

IDX props.conf

[my_sourcetype]
TRANSFORMS-dump_snmpd = snmpdSetNull

IDX transforms.conf

[snmpdSetNull]
REGEX = snmpd\[\d+\]: Connection from UDP:
DEST_KEY = queue
FORMAT = nullQueue

And per the inputs, transforms, and props docs, you will need to restart the Splunk service:

# To use one or more of these configurations, copy the configuration block
# into transforms.conf in $SPLUNK_HOME/etc/system/local/. You must restart
# Splunk to enable configurations

enderless
New Member

You were correct; I was setting my source stanza on the IDX instead of matching what the UF sends. I've made the suggested changes, but I'm still seeing new data populate in the search (after bouncing both the UF and IDX Splunk instances). Anything wrong with the following?

/opt/splunkforwarder/etc/system/local# cat inputs.conf

[default]
host = serverName
[monitor:///var/log/syslog]
index = main
sourcetype = systemSyslog

root@IDX:/opt/splunk/etc/system/local# cat props.conf

[systemSyslog]
TRANSFORMS-dump_snmpd = snmpdSetNull

root@IDX:/opt/splunk/etc/system/local# cat transforms.conf

[snmpdSetNull]
REGEX = snmpd\[\d+\]: Connection from UDP:
DEST_KEY = queue
FORMAT = nullQueue

DEAD_BEEF
Builder

Everything looks right, imo. The only thing I can think of to try is making the regex even more specific. Also, to be clear, this will only affect future data; it will not eliminate or delete any existing logs that were inadvertently put into your index. This isn't a retroactive fix.

REGEX = ^\w+ \d+ \d+:\d+:\d+ \w+ snmpd\[\d+\]: Connection from UDP:
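One thing worth checking about this anchored pattern (an editorial sketch, not from the original thread): syslog pads single-digit days with an extra space, so `^\w+ \d+` matches "Apr 29" lines but not "May  6" lines. Trying it in Python's re (equivalent to PCRE for this syntax) shows the difference:

```python
import re

# The anchored regex suggested above.
anchored = re.compile(r"^\w+ \d+ \d+:\d+:\d+ \w+ snmpd\[\d+\]: Connection from UDP:")

# Sample events copied from the thread.
two_digit_day = ("Apr 29 12:58:38 serverName snmpd[1153]: "
                 "Connection from UDP: [10.10.10.48]:52282->[10.10.10.31]:161")
one_digit_day = ("May  6 17:20:54 serverName snmpd[1153]: "
                 "Connection from UDP: [10.10.10.48]:52710->[10.10.10.31]:161")

print(bool(anchored.search(two_digit_day)))  # True
# syslog writes "May  6" with two spaces, which "^\w+ \d+" does not allow:
print(bool(anchored.search(one_digit_day)))  # False
```

The original unanchored patterns are not affected by this, so the padding alone doesn't explain why those events kept being indexed, but an anchored pattern like this one would silently miss part of the month.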

enderless
New Member

Yeah, it is frustrating. None of the regex statements (mine or the two you've provided) seem to make a difference.

Also, to be clear, this will only change future data, it will not eliminate/delete any existing logs that inadvertently were put into your index.

Yup, I understand that. New data is still being populated in the index:

serverName (/var/log/syslog):

May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52710->[10.10.10.31]:161
May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52711->[10.10.10.31]:161
May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52712->[10.10.10.31]:161
May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52713->[10.10.10.31]:161
May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52714->[10.10.10.31]:161
May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52715->[10.10.10.31]:161

Search from splunk UI (host=serverName "Connection from")

May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52715->[10.10.10.31]:161
May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52714->[10.10.10.31]:161
May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52713->[10.10.10.31]:161
May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52712->[10.10.10.31]:161
May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52711->[10.10.10.31]:161 
May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: [10.10.10.48]:52710->[10.10.10.31]:161

DEAD_BEEF
Builder

Okay, just thinking: check the forwarder host and see whether it was a full Splunk install or a UF install. This is obvious from the path name (/opt/splunk/... vs. /opt/splunkforwarder/...).

If it was a full install, you need to configure it so that it only functions as a forwarder.


enderless
New Member

The UF is version 6.2.2 per dpkg, a bit older than the latest Splunk UF release. Not sure if that would affect the functionality of what I'm looking to achieve here.

root@serverName:/opt# dpkg -l | grep splunk
ii splunkforwarder 6.2.2 amd64 Splunk The platform for machine data.
root@serverName:/opt# uname -r
3.16.0-4-amd64
root@serverName:/opt# cat /etc/debian_version
8.6


DEAD_BEEF
Builder

Try modifying the regex again. Use the following, and don't forget to bounce the indexer for the new regex to take effect:

REGEX=snmpd\[\d+\]:\sConnection\sfrom\sUDP:
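For completeness (again an editorial sketch, not from the thread): because this variant is unanchored, it matches the sample events regardless of how syslog formats the day field, which can be verified quickly with Python's re (equivalent to PCRE for this pattern):

```python
import re

# The \s-based variant; unanchored, so the timestamp prefix is irrelevant.
variant = re.compile(r"snmpd\[\d+\]:\sConnection\sfrom\sUDP:")

# Sample events copied from the thread, one two-digit day and one padded day.
events = [
    "Apr 29 12:58:38 serverName snmpd[1153]: Connection from UDP: "
    "[10.10.10.48]:52282->[10.10.10.31]:161",
    "May  6 17:20:54 serverName snmpd[1153]: Connection from UDP: "
    "[10.10.10.48]:52710->[10.10.10.31]:161",
]

print(all(variant.search(e) for e in events))  # True
```

Since all the unanchored variants match, the problem in this thread appears to lie outside the regex itself.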

enderless
New Member

Modified transforms.conf on the IDX and restarted it (as well as the UF):

[snmpdSetNull]
REGEX=snmpd\[\d+\]:\sConnection\sfrom\sUDP:
DEST_KEY = queue
FORMAT = nullQueue

Two hours later and I'm still seeing snmpd entries being indexed. Not sure what the hangup is here, but the IDX seems more than happy to keep indexing these log entries!


DEAD_BEEF
Builder

Hmm, I'm not sure. I would 100% open a case with support and see what they say. Your configs look right, and that should definitely be dropping those logs.


enderless
New Member

I should note that I'm assuming the Splunk Light deployment is an all-in-one and houses the indexer (hence why I'm handling my props.conf and transforms.conf there). Any insight into this issue would be very helpful!
