UF+Indexer+nullQueue/Route = Zero

badland
Explorer

Hi,

I need some help 🙂

scheme: 3 Universal Forwarders -> collecting/forwarding -> Indexer

uf:
Changed every UF host (Windows: Applications and Services Logs) from … to …

indexer:

I added a tcp listener in: Manager -> Forwarding and receiving -> Configure receiving

inputs.conf:

[default]
host = splunk.domain.local
        
[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0
        
[WinEventLog:Application]
disabled = 1
        
[WinEventLog:ForwardedEvents]
disabled = 1
        
[WinEventLog:HardwareEvents]
disabled = 1
        
[WinEventLog:Internet Explorer]
disabled = 1
        
[WinEventLog:Security]
disabled = 1
        
[WinEventLog:Setup]
disabled = 1
        
[WinEventLog:System]
disabled = 1

props.conf:

[host::*.domain.local]
TZ = GMT+4
TRANSFORMS-set= setnull,setdbls,kix_exclude_dbls

transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
    
[setdbls]
REGEX = (?msi)^EventType=(1|2)
DEST_KEY = _MetaData:Index
FORMAT = db_ls
    
[kix_exclude_dbls]
REGEX = (?msi)^EventCode=(1722|1332|53).+ComputerName=E[1-5]TS1
DEST_KEY = queue
FORMAT = nullQueue

If I comment out the [setnull] block, everything works, but logs that do not match EventType=(1|2) end up in the default index. If I enable the [setnull] block, ALL logs are removed. What I want is to send the [setdbls] matches to the "db_ls" index and drop everything else.

Thanks.

1 Solution

kristian_kolb
Ultra Champion

The error here seems to be a mix-up of configurations and concepts (nullQueueing and index-time transformations in general). Considering your props.conf settings;

[your host, source or sourcetype]
TRANSFORMS-blah= setnull, setdbls, kix_exclude_dbls

will take each event of the host/source/sourcetype through the three transforms.

First the destination queue will be set to the nullQueue for all events and the index will be main, unless you have specified a different index in inputs.conf.

Second, if the regex matches in [setdbls] the destination index will be set to db_ls, but the destination queue will still be nullQueue. Thus all events will be deleted.

The third transform will not make a difference.

If you comment out the first transform [setnull], no events will have the nullQueue set, and events will flow into the db_ls index (when the REGEX matches).


Solution:

To achieve the desired results I would suggest that you set the following;

inputs.conf (where the files are read / scripts are executed)

[monitor / script / WinEventLog:blah blah blah]
disabled = 0
index=db_ls

props.conf (on the indexer)

[host / source / sourcetype]
TRANSFORMS-blah_null = setnull, setdbls, kix_exclude

transforms.conf (on the indexer)
under [setdbls] change to

DEST_KEY=queue
FORMAT=indexQueue

That way the correct index will be set from the start, and the transformations will only deal with the queues.
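
To make that concrete, here is a sketch that stitches the pieces together using the regexes and names already posted in this thread (the WinEventLog:Security stanza stands in for whichever input you actually use, so treat it as an illustration rather than a drop-in config):

inputs.conf (on the forwarder)

[WinEventLog:Security]
disabled = 0
# route this input's events to db_ls from the start
index = db_ls

props.conf (on the indexer)

[host::*.domain.local]
# transforms run in the listed order; a later transform can overwrite
# the queue or index set by an earlier one
TRANSFORMS-set = setnull, setdbls, kix_exclude_dbls

transforms.conf (on the indexer)

[setnull]
# 1. send everything to the nullQueue by default
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setdbls]
# 2. put EventType 1 and 2 back on the indexQueue so they are kept
REGEX = (?msi)^EventType=(1|2)
DEST_KEY = queue
FORMAT = indexQueue

[kix_exclude_dbls]
# 3. drop these EventCodes from hosts E1TS1-E5TS1 even if kept above
REGEX = (?msi)^EventCode=(1722|1332|53).+ComputerName=E[1-5]TS1
DEST_KEY = queue
FORMAT = nullQueue

The key change from the original attempt is that [setdbls] now flips the queue back to indexQueue rather than setting _MetaData:Index, so the index is decided once in inputs.conf and the transforms only decide keep-or-drop.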

Hope this helps,

K

badland
Explorer

Thank you, Kristian Kolb! Very informative answer 🙂

Maybe this will be useful for someone:
I changed my target from wmi::applications/system to wmi::security, so treat that as the example here.
After Kristian's nudge in the right direction 🙂 I removed all TCP receivers from "Manager -> Forwarding and receiving -> Configure receiving". Then I set just one in inputs.conf (indexer side).

inputs.conf (indexer side)

[splunktcp://10997]
disabled = 0

As Damien Dallimore (thanks too) said in that post, even if you install a plain UF you can still change the index on the forwarder side. This is the only right way.

inputs.conf (universal forwarder side)

[default]
index = db_ls

Now all logs stream to the right index, db_ls. Then I removed the old rules from props.conf and transforms.conf (both on the indexer side) and set new rules for the incoming wmi::security traffic:

props.conf (indexer side)

[WinEventLog:Security]
priority = 5
TRANSFORMS-wmisecr=setnull,setsecrdbls

Then I described the actions for these rules in transforms.conf: by default send everything bound for the db_ls index to the nullQueue, then set the indexQueue only for events matching the REGEX pattern:

transforms.conf (indexer side)

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setsecrdbls]
REGEX = (?msi)^EventCode=(528|538|529)
DEST_KEY = queue
FORMAT = indexQueue

Now all logs are forwarded from the universal forwarder (some node) to the indexer, into the right index and the right queue. In the queue I can filter out the unnecessary events and keep only the important ones.
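
For what it's worth, a quick sanity check on the indexer can be done with a search along these lines (just a sketch using the db_ls index from this thread):

index=db_ls | stats count by EventCode, sourcetype

After the filtering above, only EventCodes 528, 538 and 529 should remain.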

Nevertheless, can you point me to an article where I can read about queues?
Thank you!

badland
Explorer

Thank you again! I'll change my inputs as you recommended.
Have a nice day =]

kristian_kolb
Ultra Champion

Here are a few links to information regarding queues;

In practice you'll probably only use nullQueue and indexQueue in your configurations. Other queues like typingQueue and aggQueue will only reveal themselves when there is a problem, like with blocked queues.

http://wiki.splunk.com/Community:HowIndexingWorks
http://answers.splunk.com/answers/7076/questions-about-splunk-queues
http://answers.splunk.com/answers/83334/what-are-the-various-queues-in-splunk
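
If you ever suspect a blocked queue, a search over the indexer's own metrics.log along these lines is a common starting point (a sketch; the exact field names can vary between Splunk versions):

index=_internal source=*metrics.log* group=queue
| stats max(current_size_kb) AS peak_kb, max(max_size_kb) AS limit_kb by name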

kristian_kolb
Ultra Champion

Short comment first: you should set index=blahblah for each [monitor] (or [WinEventLog:xxx] or [script:xxxx]) stanza in inputs.conf. Having it under [default] will work, but if you have more than one input and want them in separate indexes, you'll want to set it per input.

I recommend that you always set index and sourcetype for each input separately.
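
A minimal sketch of what that looks like, using the db_ls index from this thread plus a second, purely hypothetical input:

inputs.conf (on the forwarder)

[WinEventLog:Security]
disabled = 0
index = db_ls

# hypothetical second input kept in its own index with its own sourcetype
[monitor://D:\logs\myapp]
disabled = 0
index = app_logs
sourcetype = myapp_log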

/K

bhargavi
Path Finder

Hi @kristian_kolb @badland,

I have a slightly different scenario but am facing a similar issue. We are integrating JSON logs into a Splunk heavy forwarder via HEC.
I have tried the configurations below. I am applying the props to the source. In transforms there are different regexes; I want to route events to different indexes based on the log file and send all the other, unneeded files to the nullQueue. I would not be able to use FORMAT=indexQueue in transforms.conf, as I cannot specify multiple indexes in inputs.conf. This is not working and I am not getting the results I expect. Kindly help.

The configs are like below:

PROPS.CONF --

[source::*model-app*]
TRANSFORMS-segment=setnull,security_logs,application_logs,provisioning_logs

TRANSFORMS.CONF --

[setnull]
REGEX=class\"\:\"(.*?)\"
DEST_KEY = queue
FORMAT = nullQueue

[security_logs]
REGEX=(class\"\:\"(/var/log/cron|/var/log/audit/audit.log|/var/log/messages|/var/log/secure)\")
DEST_KEY=_MetaData:Index
FORMAT=model_sec
WRITE_META=true
LOOKAHEAD=40000

[application_logs]
REGEX=(class\"\:\"(/var/log/application.log|/var/log/local*?.log)\")
DEST_KEY=_MetaData:Index
FORMAT=model_app
WRITE_META=true
LOOKAHEAD=40000

[provisioning_logs]
REGEX=class\"\:\"(/opt/provgw-error_msg.log|/opt/provgw-bulkrequest.log|/opt/provgw/provgw-spml_command.log.*?)\"
DEST_KEY=_MetaData:Index
FORMAT=model_prov
WRITE_META=true

kristian_kolb
Ultra Champion

You seem to be making the same mistake that the OP did, mixing nullQueueing with index transformations in general.

While there might be cleverer ways to solve this HEC-wise, or by not sending the unwanted stuff in the first place, this should work as a general principle:

  1. Set queue = nullQueue for all events
  2. Set queue = indexQueue for the events you want to keep, i.e. a regex matching the file names of any file you want to keep (the ones already used in the security, application and provisioning parts)
  3. Set _MetaData:Index = xxx (basically the existing stuff)

Props.conf ---

[blablah]
TRANSFORMS-dostuff = setnullq, keepsome, setindexsec, setindexapp, setindexprov

transforms.conf --- as before but with the addition:

[keepsome]
REGEX = insert your regex here
DEST_KEY = queue
FORMAT = indexQueue
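
As an illustration of what [keepsome] could look like here, the regex can simply be a union of the file names already used in the three index-setting transforms (a sketch built from the paths in the question; adjust the escaping to taste):

[keepsome]
# matches the "class" values from security_logs, application_logs and provisioning_logs
REGEX = class\"\:\"(/var/log/(cron|messages|secure|audit/audit\.log|application\.log|local.*?\.log)|/opt/(provgw-error_msg\.log|provgw-bulkrequest\.log|provgw/provgw-spml_command\.log.*?))\"
DEST_KEY = queue
FORMAT = indexQueue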

badland
Explorer

Here is my full answer --> Advanced solution

MuS
Legend

Hi badland

check your props with btool:

$SPLUNK_HOME/bin/splunk cmd btool props list 
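
To see which .conf file each setting actually comes from, you can add the --debug flag and narrow the output to the stanza used in this thread (a sketch; quote the stanza so the shell does not expand the asterisk):

$SPLUNK_HOME/bin/splunk cmd btool props list "host::*.domain.local" --debug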

Also keep in mind that each change in props and/or transforms needs a reload. This can be done on the fly with this search command:

| extract reload=T

Here are some sources which are useful in this case:

hope this helps, cheers - MuS
