
Conversion errors with DatabaseLookupExecutor (Splunk DBX)

megancarney
Explorer

Someone kindly answered my question in the Splunk channel, but I thought I'd post the answer here since it didn't show up in my searches on the site.

I set up DBX to connect to a MySQL database that had a mapping between host IP addresses and the operating system on each host.

When I tried to use the lookup table, I got this error:

2014-04-10 11:07:35.376 dbx1929:ERROR:DatabaseLookupExecutor - Error while performing SplunkLookup DatabaseLookupExecutor: com.splunk.dbx.lookup.DBLookupException: Unsupported field SQL error while performing lookup: java.sql.SQLException: Cannot convert value "<redacted_host_ip>" to required datatype LONG
com.splunk.dbx.lookup.DBLookupException: Unsupported field SQL error while performing lookup: java.sql.SQLException: Cannot convert value "<redacted_host_ip>" to required datatype LONG
    at com.splunk.dbx.lookup.DatabaseLookupExecutor.performSimpleLookup(DatabaseLookupExecutor.java:56)
    at com.splunk.dbx.lookup.DatabaseLookupExecutor.performLookup(DatabaseLookupExecutor.java:41)
    at com.splunk.runtime.SplunkLookup.invoke(SplunkLookup.java:26)

The search was:

sourcetype=proxy_logs | lookup KACE_machine_inventory host_ip OUTPUT os_name 

The problem was that the hostip field in the database was stored as an unsigned int, and Splunk was trying to convert the value to a LONG and failing.

To get around this error, you need to either change the lookup to a SQL query that casts the hostip field as a varchar before bringing that data into Splunk, OR change the data type of the column you're using to something Splunk can import.
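
As a rough sketch of the first option, the lookup's SQL query would do the cast before DBX ever sees the column. The table name here is illustrative, and if the unsigned int actually holds the numeric form of the address you would need MySQL's INET_NTOA() rather than a plain cast:

SELECT CAST(hostip AS CHAR(15)) AS hostip, os_name FROM machine_inventory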

What I did was create a second column (hostipstring) in the database that was a varchar instead of an unsigned int, and exclude the original hostip column from the lookup table.
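
On the MySQL side that change looks roughly like this (table name is illustrative; how you populate the new column depends on how the addresses are encoded in the existing one):

ALTER TABLE machine_inventory ADD COLUMN hostipstring VARCHAR(15);
-- fill hostipstring with the dotted-quad form of each address, then leave
-- hostip out of the lookup definition so DBX never tries to convert it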


jcoates_splunk
Splunk Employee

hi, another way to do this is to use the SQL command CONVERT... e.g. SELECT CONVERT(varchar(10), host_ip),os_name FROM table_name. That way you're not messing with the schema in KACE's database, which can lead to performance or upgrade problems.
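
One caveat: that argument order is SQL Server style. On a MySQL backend like the one in the original question, CONVERT takes the expression first, and a dotted IPv4 address can run to 15 characters, so the equivalent query would look more like this (table and column names illustrative):

SELECT CONVERT(host_ip, CHAR(15)) AS host_ip, os_name FROM table_name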


megancarney
Explorer

Excellent point. I should have been clearer. The database I edited wasn't KACE's internal database. The database I changed was a separate one that's populated with some of the data in KACE.
