Getting Data In

Issue with props.conf and transforms.conf – Port-Based Filtering Not Working

Namchin_Bar
New Member

Dear Splunk Support,

I am encountering an issue while configuring Splunk to filter logs based on specific ports (21, 22, 23, 3389) using props.conf and transforms.conf. Despite following the proper configuration steps, the filtering is not working as expected. Below are the details:

System Details:

  • Splunk Version: [9.3.2]

  • Deployment Type: Heavy Forwarder

  • Log Source: /opt/log/indexsource/*

Configuration Applied:

props.conf (located at $SPLUNK_HOME/etc/system/local/props.conf):

[source::/opt/log/indexsource/*]
TRANSFORMS-filter_ports = filter_specific_ports

transforms.conf (located at $SPLUNK_HOME/etc/system/local/transforms.conf):

[filter_specific_ports]
REGEX = .* (21|22|23|3389) .*
DEST_KEY = queue
FORMAT = indexQueue
I have also tried other approaches, such as:

transforms.conf:

[filter_ports]
REGEX = (21|22|23|3389)
DEST_KEY = queue
FORMAT = indexQueue

[drop_other_ports]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

and:

props.conf:

[source::your_specific_source]
TRANSFORMS-filter_ports = allow_ports, drop_other_ports

transforms.conf:

[allow_ports]
REGEX = (21|22|23|3389)
DEST_KEY = _MetaData:Index
FORMAT = your_index_name

[drop_other_ports]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Issue Observed:

  • The expected behavior is that logs containing these ports should be routed to indexQueue, but they are not being filtered as expected.

  • All logs are still being indexed in the default index.

  • Checked for syntax errors and restarted Splunk, but the issue persists.

Troubleshooting Steps Taken:

  1. Verified Regex: Confirmed that the regex .* (21|22|23|3389) .* correctly matches log lines using regex testing tools.

  2. Checked Splunk Logs: Looked for errors in $SPLUNK_HOME/var/log/splunk/splunkd.log but found no related warnings.

  3. Restarted Splunk: Restarted the service after configuration changes using splunk restart.

  4. Checked Events in Splunk: Ran searches to confirm that logs with these ports were still being indexed.

 

Request for Assistance:

Could you please advise on:

  • Whether there are any syntax issues in my configuration?

  • If additional debugging steps are needed?

  • Alternative methods to ensure only logs containing ports 21, 22, 23, and 3389 are routed correctly?

Your assistance in resolving this issue would be greatly appreciated.


Best regards,
Namchin Baranzad
Information Security Analyst
M Bank
Email: namchin.b@m-bank.mn


richgalloway
SplunkTrust

We are not Splunk Support - we're users like you.

To properly troubleshoot an issue that uses regular expressions, we need to see some sample (sanitized) data. As it stands, I'm concerned that the regex will also match a "22" that appears in the timestamp, so events could be routed based on the timestamp rather than on the port.
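If your events happen to contain an explicit port field, anchoring the regex to that field avoids those accidental matches. This is only a rough sketch — I'm assuming a hypothetical dest_port= key, so adjust the pattern to whatever your real log format looks like:

[filter_specific_ports]
# Match the port only when it follows the (assumed) dest_port= field, not anywhere in the event
REGEX = dest_port=(21|22|23|3389)\b
DEST_KEY = queue
FORMAT = indexQueue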

The preferred way to specify the index for data is to put the index name in inputs.conf.  If the index name is absent from inputs.conf, data will go to the default index.
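For example, a minimal inputs.conf stanza on the forwarder monitoring that path could look like the sketch below — the index name network_logs is just a placeholder for an index you have already created:

[monitor:///opt/log/indexsource/*]
# Send everything read from this input to the named index instead of the default index
index = network_logs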

 


livehybrid
SplunkTrust

Hi @Namchin_Bar 

I am surprised you are getting any data at all, because drop_other_ports, being second in the list, runs AFTER allow_ports and would send everything to nullQueue. You should put drop_other_ports first in the list and 'allow_ports' second.

As it is, you're getting all the data, which makes me think that neither transform is actually being applied.
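For reference, here is a sketch of the usual "drop everything, then keep matches" order using your source path — the regex is still the loose one from your post, so the earlier caveat about matching "22" in a timestamp still applies:

props.conf:

[source::/opt/log/indexsource/*]
TRANSFORMS-filter_ports = drop_other_ports, keep_specific_ports

transforms.conf:

[drop_other_ports]
# Runs first: send every event from this source to the null queue
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_specific_ports]
# Runs second: events matching one of the ports are put back on the index queue
REGEX = (21|22|23|3389)
DEST_KEY = queue
FORMAT = indexQueue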

Is your source:: value in props.conf definitely correct? 

Can you confirm if you are running these settings on a Universal or Heavy forwarder?

Is the data coming from another Splunk forwarder? Is this UF/HF?
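One quick check is to run btool on the instance where you applied the settings, for example:

$SPLUNK_HOME/bin/splunk btool props list --debug
$SPLUNK_HOME/bin/splunk btool transforms list --debug

That shows the merged configuration and which file each setting comes from, so you can confirm your stanzas are actually being picked up. Also bear in mind that these index-time transforms only run where the data is parsed (a heavy forwarder or indexer), not on a universal forwarder, and if the data was already parsed by another heavy forwarder upstream, transforms on this instance won't be applied again.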

 

