How to Silently Drop Events to nullQueue While Logging Skipped Event Metadata to a File in a Custom TA

asees
Explorer

I am building a custom Technology Add-on (TA) where I need to silently drop specific events using nullQueue but also log metadata about those dropped events to a separate log file for auditing purposes.

Here’s my scenario:

My Current Setup

  1. props.conf:

    # Do not truncate long CEF events; apply both routing transforms below
    [custom:app]
    TRUNCATE = 0
    TRANSFORMS-routing = route_network, route_app_events
  2. transforms.conf:

    # Drop all network heartbeat events
    [route_network]
    REGEX = .*CEF:0\|MyCompany\|NetworkMonitor\|[^|]+\|[^|]+\|Heartbeat\|
    DEST_KEY = queue
    FORMAT = nullQueue

    # Drop specific Windows events coming in CEF
    [route_app_events]
    REGEX = .*CEF:0\|Microsoft\|Windows\|[^|]+\|[^|]+\|(AppCrash|UpdateService|Security-Auditing|LicensingService)\|
    DEST_KEY = queue
    FORMAT = nullQueue

With the above configuration:

  • Any events matching these rules are discarded silently — which works perfectly.

  • However, I also need to log each dropped event type to a file like this:

    [2025-09-08 14:05:22] Network heartbeat event skipped
    [2025-09-08 14:10:37] Windows AppCrash event skipped

My Requirement

I need to:

  1. Continue silently dropping these events using nullQueue (no indexing or storage in any Splunk index).

  2. Simultaneously write a small log entry to a file (e.g., $SPLUNK_HOME/var/log/splunk/skipped_events.log) whenever an event is skipped, for operational tracking.


PickleRick
SplunkTrust

There's no way to do it using built-in props/transforms functionality. Yes, you can filter out events. Yes, you could strip them to some minimal version and redirect to another index. No, you cannot write to a text file.
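
For reference, a rough sketch of that "redirect to another index" option, assuming an audit index named skipped_events has already been created (the stanza names here are illustrative, not from the original TA):

    # transforms.conf -- reroute matching events to a low-volume audit index
    # instead of discarding them (the "skipped_events" index must exist)
    [route_network_to_audit]
    REGEX = CEF:0\|MyCompany\|NetworkMonitor\|[^|]+\|[^|]+\|Heartbeat\|
    DEST_KEY = _MetaData:Index
    FORMAT = skipped_events

    # Optionally rewrite _raw so only a short audit message is stored
    [shrink_to_audit_message]
    REGEX = CEF:0\|MyCompany\|NetworkMonitor\|[^|]+\|[^|]+\|Heartbeat\|
    DEST_KEY = _raw
    FORMAT = Network heartbeat event skipped

props.conf would then list these stanzas in TRANSFORMS-routing in place of the nullQueue ones; the dropped-event metadata ends up searchable in the skipped_events index rather than in a flat file.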

A very very very ugly workaround could be to reroute such events to syslog and set up a local syslog receiver to write them to a file, but this is a Very Very Bad Idea (tm).
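
For completeness only, that syslog reroute would look roughly like this (group and stanza names are illustrative, and per the above this is not recommended):

    # outputs.conf -- define a syslog output group pointing at a local receiver
    [syslog:skipped_events_syslog]
    server = 127.0.0.1:514

    # transforms.conf -- send matching events to that syslog group
    [route_network_to_syslog]
    REGEX = CEF:0\|MyCompany\|NetworkMonitor\|[^|]+\|[^|]+\|Heartbeat\|
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = skipped_events_syslog

The local receiver (rsyslog, syslog-ng, or similar) would then be responsible for writing the lines to a file.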


asees
Explorer

@PickleRick 
Is there any way we can use a Python script to achieve this?


PickleRick
SplunkTrust

If the data is already ingested into Splunk's "pipeline" - no.

You could use Python to create a modular input, but that would work at an earlier step - before the data is injected into the input queue.

richgalloway
SplunkTrust
SplunkTrust

You'll need to create a modular input to do that.  Use regular expressions to test the incoming data, discard matches and log the activity.
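
A minimal sketch of that approach, assuming the script lives in the TA's bin directory and reads from a hypothetical source file (all paths, names, and patterns below are illustrative; a production modular input would also need checkpointing so the source isn't re-read and re-indexed on every run):

    #!/usr/bin/env python3
    # bin/filter_input.py -- scripted-input sketch: drop events matching the
    # skip patterns, append one audit line per drop, and print kept events to
    # stdout (Splunk indexes whatever a scripted input writes to stdout).
    import re
    import sys
    from datetime import datetime

    # Same patterns as the nullQueue transforms above
    SKIP_PATTERNS = [
        (re.compile(r"CEF:0\|MyCompany\|NetworkMonitor\|[^|]+\|[^|]+\|Heartbeat\|"),
         "Network heartbeat event skipped"),
        (re.compile(r"CEF:0\|Microsoft\|Windows\|[^|]+\|[^|]+\|"
                    r"(AppCrash|UpdateService|Security-Auditing|LicensingService)\|"),
         "Windows {0} event skipped"),
    ]

    AUDIT_LOG = "/opt/splunk/var/log/splunk/skipped_events.log"  # assumed path
    SOURCE_FILE = "/var/log/myapp/events.log"                    # assumed source

    def audit(message):
        # Append a timestamped line in the format the question asks for
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        with open(AUDIT_LOG, "a") as log:
            log.write("[{0}] {1}\n".format(stamp, message))

    def main():
        with open(SOURCE_FILE) as src:
            for line in src:
                for pattern, message in SKIP_PATTERNS:
                    match = pattern.search(line)
                    if match:
                        groups = match.groups()
                        audit(message.format(*groups) if groups else message)
                        break
                else:
                    sys.stdout.write(line)  # kept events get indexed

    if __name__ == "__main__":
        main()

The TA's inputs.conf would then register the script, e.g.:

    # inputs.conf -- run the script on an interval (stanza path is illustrative)
    [script://$SPLUNK_HOME/etc/apps/my_custom_ta/bin/filter_input.py]
    interval = 60
    sourcetype = custom:app
    index = main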

---
If this reply helps you, Karma would be appreciated.

asees
Explorer

@richgalloway Hey, can you please explain how to do it?
