Splunk Dev

Search Query Optimization

kunalsingh
Engager

Please help me optimize this Splunk query

index:: rasp_

NOT [inputlookup Scanners_Ext.csv | fields forwarded_for]

NOT [inputlookup Scanners_Int.csv | rename ip_addr AS forwarded_for | fields forwarded_for]

NOT [inputlookup vz_nets.csv | rename netblock AS forwarded_for | fields forwarded_for]

NOT (forwarded_for="140.108.26.152" OR forwarded_for="" OR forwarded_for="10.*" OR forwarded_for=null) app!="" app!="\"*\"" app!="VASTID*" host!="10.215*" host!="ip-10-*" host!="carogngsa*" host!="carogngta*" host!="carofuzedd*" host!="*ebiz*" host!="echo*" host!="not logged" host!="onm*" host!="tfnm*" host!="voip*" host!="wfm*" category!="Config*" category!="Depend*" category!="Stat*" category!="Large*" category!="Uncaught*" category!="Unvalidated Redirect" category!="License" category!="*Parse*" action=*

| stats count

bowesmana
SplunkTrust

Optimising this will depend on your data. Using subsearches with lookups can be expensive and using NOT with subsearches, even more so.

Depending on the volume of entries in those lookups, you will be better off using the lookup command instead, e.g.

index:: rasp_ NOT (
    forwarded_for="140.108.26.152" OR 
    forwarded_for="" OR 
    forwarded_for="10.*" OR 
    forwarded_for=null)
    app!="" app!="\"*\"" app!="VASTID*" 
    host!="10.215*" host!="ip-10-*" host!="carogngsa*" host!="carogngta*" host!="carofuzedd** host!="*ebiz*" host!="echo*" host!="not logged" host!="onm*" host!="tfnm*" host!="voip*" host!="wfm*"
    category!="Config*" category!="Depend*" category!="Stat*" category!="Large*" category!="Uncaught*" category!="Unvalidated Redirect" category!="License" category!="*Parse*" action=*

| lookup Scanners_Ext.csv forwarded_for OUTPUT forwarded_for as found
| where isnull(found)
| lookup Scanners_Int.csv ip_addr as forwarded_for OUTPUT ip_addr as found 
| where isnull(found)
| lookup vz_nets.csv netblock as forwarded_for OUTPUT netblock as found
| where isnull(found)

| stats count

So the static NOT statement and the other != comparisons stay as part of the base search; then each lookup runs in turn, and any event whose value is found in a lookup is discarded.

Order the 3 lookups by likely match count: run first the lookup that is expected to discard the most events, then the next, and so on.
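If you're not sure which lookup would discard the most events, a one-off profiling search can tell you. This is just a minimal sketch reusing the lookup files and field names from above (ext_matches/int_matches/vz_matches are illustrative names); run it over a representative time range:

index:: rasp_
| lookup Scanners_Ext.csv forwarded_for OUTPUT forwarded_for as ext_found
| lookup Scanners_Int.csv ip_addr as forwarded_for OUTPUT ip_addr as int_found
| lookup vz_nets.csv netblock as forwarded_for OUTPUT netblock as vz_found
| stats count as total count(ext_found) as ext_matches count(int_found) as int_matches count(vz_found) as vz_matches

stats count(field) only counts events where the field is non-null, so whichever of the three match counts is highest tells you which lookup to run first in the main search.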

Using NOT and all those != wildcard comparisons at the beginning of the search will be somewhat expensive. You can use TERM() to reduce the amount of data scanned, but that requires knowing your data well.
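For example, if that scanner IP always appears in the raw event as a standalone token (bounded by major breakers such as spaces or commas), you could exclude it at the indexed-term level rather than with a field comparison. A sketch of the idea, not a drop-in replacement for the whole filter:

index:: rasp_ NOT TERM(140.108.26.152) action=*
| stats count

TERM() only matches whole indexed terms, so it won't help with wildcarded values like 10.* or the host!=/category!= patterns, but it lets the indexer drop those events without having to extract forwarded_for at search time.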

 
