Getting Data In

Does Splunk process events before sending to nullqueue?

dsmc_adv
Path Finder

We have configured a default null queue to discard all events that are not explicitly authorized for indexing. In our transforms, the first filter routes everything to the null_queue_filter.
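For anyone comparing setups, this follows the usual "discard everything, then route authorized events back" pattern; the sourcetype and the name of the second transform below are placeholders, not our exact configuration:

props.conf:

[my_sourcetype]
TRANSFORMS-routing = null_queue_filter, keep_authorized

transforms.conf:

[null_queue_filter]
# Matches every event and routes it to the null queue (discarded)
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_authorized]
# Events matching this pattern are routed back to the index queue
REGEX = AUTHORIZED_PATTERN
DEST_KEY = queue
FORMAT = indexQueue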

The reason for this question is that I see events in our splunkd.log for a source that appears to exceed the line-count limit:

02-23-2016 17:34:11.169 +0100 WARN  AggregatorMiningProcessor - Breaking event because limit of 256 has been exceeded

But when I look at this specific sourcetype and data source, there are no indexed events with more than one line, so my assumption is that the events exceeding the limit are the ones being discarded, in other words sent to the null queue. Is this the correct behavior?
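For reference, the 256-line limit in that warning corresponds to the MAX_EVENTS line-merging limit in props.conf (default 256). If long multi-line events were actually expected for a sourcetype, it could be raised per sourcetype, for example (the sourcetype name here is a placeholder):

[my_sourcetype]
MAX_EVENTS = 1024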

1 Solution

yannK
Splunk Employee

Yes. If you look at the pipeline order, the null queue routing happens after parsing, but before the license count:

Parsing (line breaking, character encoding) -> aggregating (timestamp detection, event grouping) -> typing (regex, null queues, SEDCMD, ...) -> indexing (write to disk, license counting, or forwarding on heavy forwarders)
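For reference, these stages roughly map to the queue names Splunk reports in metrics.log under group=queue, which is a handy way to confirm where the null queue routing sits:

parsingqueue -> parsing pipeline  (line breaking, character encoding)
aggqueue     -> merging pipeline  (timestamp extraction, line merging)
typingqueue  -> typing pipeline   (regex replacement, index-time transforms, nullQueue routing)
indexqueue   -> index pipeline    (writing to disk, license metering)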



dsmc_adv
Path Finder

Isn't Splunk, in this case, wasting resources parsing events that will just be sent to the trash?


yannK
Splunk Employee

Correct. The best method is to avoid monitoring the bad events in the first place.

But we cannot filter an event before it has been broken down into a proper event; you do not want to delete a whole file just because one event matches the filter 🙂
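If the unwanted data lives in its own files, a sketch of how to exclude it at the input level could look like this (the path and pattern are hypothetical):

[monitor:///var/log/myapp]
# Skip files whose full path matches this regex, so they are never read at all
blacklist = unwanted_component\.log$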
