How to Silently Drop Events to nullQueue While Logging Skipped Event Metadata to a File in a Custom TA

asees
Explorer

I am building a custom Technology Add-on (TA) where I need to silently drop specific events using nullQueue but also log metadata about those dropped events to a separate log file for auditing purposes.

Here’s my scenario:

My Current Setup

  1. props.conf:

    [custom:app]
    TRUNCATE = 0
    TRANSFORMS-routing = route_network, route_app_events
  2. transforms.conf:

    # Drop all network heartbeat events
    [route_network]
    REGEX = .*CEF:0\|MyCompany\|NetworkMonitor\|[^|]+\|[^|]+\|Heartbeat\|
    DEST_KEY = queue
    FORMAT = nullQueue

    # Drop specific Windows events coming in CEF
    [route_app_events]
    REGEX = .*CEF:0\|Microsoft\|Windows\|[^|]+\|[^|]+\|(AppCrash|UpdateService|Security-Auditing|LicensingService)\|
    DEST_KEY = queue
    FORMAT = nullQueue

With the above configuration:

  • Any events matching these rules are discarded silently — which works perfectly.

  • However, I also need to log each dropped event type to a file like this:

    [2025-09-08 14:05:22] Network heartbeat event skipped
    [2025-09-08 14:10:37] Windows AppCrash event skipped

My Requirement

I need to:

  1. Continue silently dropping these events using nullQueue (no indexing or storage in Splunk index).

  2. Simultaneously write a small log entry to a file (e.g., $SPLUNK_HOME/var/log/splunk/skipped_events.log) whenever an event is skipped, for operational tracking.


PickleRick
SplunkTrust

There's no way to do it using built-in props/transforms functionality. Yes, you can filter out events. Yes, you could strip them to some minimal version and redirect to another index. No, you cannot write to a text file.
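For reference, the "strip and redirect" alternative could look roughly like this. This is a sketch, not a drop-in config: the index name skipped_events and the stanza names are hypothetical, and it replaces the nullQueue routing for those events rather than adding to it. Transform order matters here, because the second stanza rewrites _raw and each REGEX is tested against the current _raw:

    # props.conf -- run the index routing before the _raw rewrite
    [custom:app]
    TRANSFORMS-audit = route_network_audit, strip_network_raw

    # transforms.conf -- send matches to a small audit index
    # instead of nullQueue ("skipped_events" is a hypothetical index)
    [route_network_audit]
    REGEX = CEF:0\|MyCompany\|NetworkMonitor\|[^|]+\|[^|]+\|Heartbeat\|
    DEST_KEY = _MetaData:Index
    FORMAT = skipped_events

    # transforms.conf -- replace the raw event with a one-line marker
    # so storage and license use stay minimal
    [strip_network_raw]
    REGEX = CEF:0\|MyCompany\|NetworkMonitor\|[^|]+\|[^|]+\|Heartbeat\|
    DEST_KEY = _raw
    FORMAT = Network heartbeat event skipped

Searching index=skipped_events then gives essentially the same audit trail as the requested skipped_events.log, just inside Splunk rather than in a flat file.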

A very, very, very ugly workaround could be to reroute such events to syslog and set up a local syslog receiver, but this is a Very Very Bad Idea (tm).
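For completeness, that (not recommended) syslog detour would be wired up roughly like this on a heavy forwarder; the output group name is hypothetical, and the local syslog daemon would be the thing that actually writes the file:

    # outputs.conf -- a syslog output group pointing at the local receiver
    [syslog:skipped_audit]
    server = 127.0.0.1:514

    # transforms.conf -- route matching events to that group
    # (keeping the same events out of the index at the same time is
    # exactly the kind of fragility that makes this scheme a bad idea)
    [route_network_syslog]
    REGEX = CEF:0\|MyCompany\|NetworkMonitor\|[^|]+\|[^|]+\|Heartbeat\|
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = skipped_audit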


asees
Explorer

@PickleRick 
Is there any way we can use a Python script to achieve this?


PickleRick
SplunkTrust

If the data is already ingested into Splunk's "pipeline" - no.

You could use Python to create a modular input, but that would work at an earlier step, before the data is injected into the input queue.

richgalloway
SplunkTrust

You'll need to create a modular input to do that.  Use regular expressions to test the incoming data, discard matches and log the activity.
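In outline, such a modular input might look something like the plain-Python sketch below. The scheme name, source file path, and drop pattern are placeholders for illustration, not part of the original TA:

    #!/usr/bin/env python3
    # Sketch of a modular input that filters events before they enter
    # Splunk's pipeline, so nothing needs to be nullQueued later.
    import os
    import re
    import sys
    import time
    from datetime import datetime

    # Events matching this pattern are dropped (they never reach Splunk)
    DROP_PATTERN = re.compile(
        r"CEF:0\|MyCompany\|NetworkMonitor\|[^|]+\|[^|]+\|(?P<name>Heartbeat)\|"
    )
    SKIP_LOG = os.path.join(
        os.environ.get("SPLUNK_HOME", "/opt/splunk"),
        "var", "log", "splunk", "skipped_events.log",
    )

    def do_scheme():
        # Minimal introspection scheme Splunk requests with --scheme
        print("""<scheme>
      <title>Filtered CEF reader (sketch)</title>
      <description>Drops matching events and logs the skips</description>
      <streaming_mode>xml</streaming_mode>
    </scheme>""")

    def log_skip(event_name):
        # Audit entry for every dropped event, in the requested format
        with open(SKIP_LOG, "a") as f:
            stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            f.write(f"[{stamp}] {event_name} event skipped\n")

    def stream(source_path):
        # Emit kept events to stdout in modular-input XML framing;
        # matching events are only recorded in the skip log.
        sys.stdout.write("<stream>\n")
        sys.stdout.flush()
        with open(source_path) as src:
            src.seek(0, os.SEEK_END)           # tail: only read new lines
            while True:
                line = src.readline()
                if not line:
                    time.sleep(1)
                    continue
                m = DROP_PATTERN.search(line)
                if m:
                    log_skip(m.group("name"))  # drop silently, audit to file
                    continue
                sys.stdout.write(
                    "<event><data>%s</data></event>\n"
                    % line.strip().replace("&", "&amp;").replace("<", "&lt;")
                )
                sys.stdout.flush()

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "--scheme":
            do_scheme()
        else:
            stream("/var/log/app/events.log")  # placeholder source path

In a real TA the script would sit in the app's bin/ directory, be declared in inputs.conf (e.g. a hypothetical [filtered_cef://default] stanza), and read its source path from the configuration XML Splunk passes on stdin instead of the hard-coded placeholder. The key point is that the regex test runs before Splunk ever sees the event, so the skip log can be written freely.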

---
If this reply helps you, Karma would be appreciated.

asees
Explorer

@richgalloway Hey, can you please explain how to do it?
