Monitoring Splunk

Parameters and variables in dbxlookup DB Connect (v3.1.3)

dgelo
Explorer

Hi,

I have a problem passing parameters (variables) to dbxlookup (v3.1.3). I need to pass the variable VAR from a Splunk search into the SQL WHERE condition of a DB lookup (dbxlookup).

For example:

| stats count
| eval VAR = "SOMEDATA"
| dbxlookup connection="oracle_con" query="SELECT ID, DATA FROM MyTable WHERE DATA LIKE '%$VAR$%'" "VAR" as "VAR" OUTPUT "ID"

In dblookup (v1), the same parameters were called input_fields.

Can I pass two or more parameters? Is there perhaps a workaround using the dbxquery command or something else?

1 Solution

dgelo
Explorer

Hi everyone!

I have a workaround for my issue: an external lookup backed by a Python script. This comment helped me: https://answers.splunk.com/answers/2580/i-want-sample-code-to-connect-to-oracle-database-and-lookup-...

First, you need to set up cx_Oracle for Python. On a Linux system, the system Python and Splunk's bundled Python are separate environments. I installed cx_Oracle into the system Python (Linux), and I also installed the Oracle Client for Linux (https://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html).

Put the following Python script, dblookup.py, into your app's bin path:
import csv
import sys
import cx_Oracle
import os

os.environ["NLS_LANG"] = "Russian.AL32UTF8"

def db_connect():
    # Replace host, port, user, password and service_name with your values
    dsn = cx_Oracle.makedsn("host", port, service_name="service_name")
    db = cx_Oracle.connect("user", "password", dsn)
    return db.cursor()

def db_lookup(cursor, key):
    # Bind the key as a variable; the % wildcards are concatenated around it,
    # because a bind placeholder inside a quoted literal is never substituted
    cursor.execute(
        "SELECT ID, DATA FROM MyTable WHERE DATA LIKE '%' || :id || '%'",
        dict(id=key))
    return cursor.fetchone()

def main():
    # Splunk passes the lookup field name as the first argument
    inputfield = sys.argv[1]

    r = csv.DictReader(sys.stdin)
    w = csv.DictWriter(sys.stdout, fieldnames=r.fieldnames)
    w.writeheader()

    cursor = db_connect()

    for row in r:
        key = row[inputfield]
        try:
            db_row = db_lookup(cursor, key)
            if db_row:
                row["ID"], row["DATA"] = db_row
                w.writerow(row)
        except cx_Oracle.DatabaseError:
            # Skip rows whose lookup fails instead of aborting the whole run
            continue
    cursor.close()

if __name__ == "__main__":
    main()
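A note on the LIKE clause: wildcards cannot sit inside a quoted literal together with a bind variable, so the pattern is built by concatenating '%' around the placeholder. A minimal sketch of the same parameterized-LIKE pattern, using sqlite3 as a stand-in (it ships with Python and supports the same :name placeholders and || concatenation as Oracle):

```python
import sqlite3

# In-memory stand-in for the Oracle table; the real lookup targets MyTable.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE MyTable (ID INTEGER, DATA TEXT)")
db.executemany("INSERT INTO MyTable VALUES (?, ?)",
               [(1, "SOMEDATA"), (2, "OTHER")])

# The wildcards are concatenated around the bound value, never placed
# inside a quoted literal, so the value itself is still safely bound.
cursor = db.execute(
    "SELECT ID, DATA FROM MyTable WHERE DATA LIKE '%' || :id || '%'",
    {"id": "OMEDAT"})
print(cursor.fetchone())  # (1, 'SOMEDATA')
```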

In my case cx_Oracle does not work under Splunk's Python, so I use a wrapper.py script:

import os, sys
# Re-exec dblookup.py under the system Python, forwarding all arguments
python_executable = "/usr/bin/python"
os.execv(python_executable, [python_executable, "/opt/splunk/etc/apps/my_app/bin/dblookup.py"] + sys.argv[1:])
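The wrapper simply replaces the Splunk Python process with the system interpreter, passing every argument through unchanged. Since os.execv replaces the current process, the same argv-forwarding idea can be checked more conveniently with subprocess; in this hedged sketch a one-liner stands in for dblookup.py:

```python
import subprocess
import sys

# Hypothetical stand-in: instead of /usr/bin/python running dblookup.py,
# run the current interpreter with a one-liner that echoes its first
# argument, showing that arguments survive the delegation intact.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", "DATA"],
    capture_output=True, text=True)
print(result.stdout.strip())  # DATA
```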

You can test the Python script like this:
1. Create a CSV file test.csv
2. Fill the file with data:

ID,DATA
,TEST
,DATA

3. Run the Python script on the command line.

With the system (Linux) Python [OK]:
python dblookup.py DATA < test.csv

With Splunk's Python [fails with a cx_Oracle import error]:
/opt/splunk/bin/python dblookup.py DATA < test.csv

With Splunk's Python via wrapper.py [OK]:
/opt/splunk/bin/python wrapper.py DATA < test.csv
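For reference, Splunk's external lookup protocol is plain CSV on stdin/stdout: the script receives a header plus rows containing the lookup fields, and writes back the same columns with the output fields filled in. A minimal sketch of that round trip, with the database replaced by a hypothetical in-memory dict:

```python
import csv
import io

# Hypothetical stand-in for the database: DATA -> (ID, DATA)
table = {"TEST": (1, "TEST"), "DATA": (2, "DATA")}

infile = io.StringIO("ID,DATA\n,TEST\n,DATA\n")   # what Splunk sends
outfile = io.StringIO()                           # what Splunk reads back

r = csv.DictReader(infile)
w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
w.writeheader()
for row in r:
    match = table.get(row["DATA"])
    if match:
        row["ID"], row["DATA"] = match
        w.writerow(row)

print(outfile.getvalue())
```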

Then I create a new external lookup definition in Splunk:
Settings » Lookups » Lookup definitions
Name: my_db_lookup
Type: External
Command: wrapper.py DATA
Supported fields: ID,DATA
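If you prefer configuration files over the UI, the same definition can, to the best of my knowledge, be expressed in the app's transforms.conf; treat this stanza as a sketch to verify against the Splunk documentation:

```
[my_db_lookup]
external_cmd = wrapper.py DATA
fields_list = ID, DATA
```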

Finally, my search runs:

| stats count
| eval VAR = "SOMEDATA"
| lookup my_db_lookup DATA as VAR OUTPUT ID, DATA



richgalloway
SplunkTrust
SplunkTrust

@dgelo, If your problem is resolved, please accept the answer to help future readers.

---
If this reply helps you, Karma would be appreciated.

highsplunker
Contributor

Hey guys. Why doesn't anybody comment on this, especially someone from Splunk Inc.?
