
DB Connect: How to troubleshoot dboutput error "Cannot insert duplicate key in object" after server outage?

phoenixdigital
Builder

Hi All,

We had a server shutdown due to a power failure, and now a saved search that inserts data into a database no longer works.

sourcetype="myStuff" earliest=-6m latest=-1m | eval time = _time | eval table_field2 =  round(SplunkField2,2) | rename Splunkfield3 AS table_field3 | rename Splunkfield4 AS table_field4 | fields time, table_field2, table_field3, table_field4 | table time, table_field2, table_field, table_field3, table_field4 | dboutput type=insert database="MSSQL Instance" table="TableForData" time, table_field2,table_field3,table_field4

OK, the table in the database has a primary key based on:

time, table_field2, table_field3
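
For reference, the table is defined roughly along these lines (column types are illustrative, not the exact schema; the table and constraint names are taken from the error below):

CREATE TABLE TableForData (
    time          DATETIME      NOT NULL,
    table_field2  DECIMAL(18,2) NOT NULL,
    table_field3  VARCHAR(50)   NOT NULL,
    table_field4  VARCHAR(50)   NULL,
    CONSTRAINT PK_Name_of_Primary_Key_On_Database
        PRIMARY KEY (time, table_field2, table_field3)
);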

Now remember, this saved search worked perfectly prior to the power outage, but now I get this error:

command="dboutput", Unexpected error while performing dboutput: java.sql.BatchUpdateException: Violation of PRIMARY KEY constraint 'PK_Name_of_Primary_Key_On_Database'. Cannot insert duplicate key in object 'TableForData'. The duplicate key value is (Oct 28 2014 10:07AM, Field2Data, Field3Data).

Is there somewhere on the local Splunk instance where primary keys are cached that I need to delete?

For the record, the database has had no data in it since the power outage (2 days ago), so there is definitely no primary key conflict.
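
i.e. a quick check along these lines, run directly against the database, confirms the table is empty:

SELECT COUNT(*) FROM TableForData;
-- returns 0, so there is nothing in the table for the inserts to collide with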

I have tried restarting Splunk and the server, but that had no effect.

Any ideas?

1 Solution

phoenixdigital
Builder

OK, I upgraded Splunk and the DBX app, and things are back to normal again.

The original search (the database insert) is working as expected.

After updating the DBX app I had to run its setup again, which I suspect removed whatever corrupt cache data file was there.

If a DBX support person can tell us where the DBX app might have been caching information, that would be great. I am certain that if I could have just deleted the relevant corrupted file, this would have been resolved within minutes rather than 10 days.

pmdba
Builder

Your Splunk search must be producing duplicate results or the database wouldn't complain. Try running the search without the dboutput portion and confirm that the data is what you expect (i.e. no primary key violations). If you are getting duplicate results, refine your search to eliminate them and then try the dboutput portion again.
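
For example, a quick check along these lines (reusing the evals from the original search; field names taken from the post above) will surface any result rows that would collide on the key columns:

sourcetype="myStuff" earliest=-6m latest=-1m
| eval time = _time
| eval table_field2 = round(SplunkField2,2)
| rename Splunkfield3 AS table_field3
| stats count BY time, table_field2, table_field3
| where count > 1

Any rows returned there are rows the database would reject as duplicate keys.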


phoenixdigital
Builder

I thought you were on to something there, as that made perfect sense.

However, adding a dedup did not fix the issue.

sourcetype="myStuff" earliest=-6m latest=-1m | dedup _time, SplunkField2, SplunkField3 | .... the rest of the search...

There must be some cache file somewhere that DBX is using. This search (the database insert) worked perfectly prior to the power supply failure.

I even upgraded to Splunk 6.2 but the issue is still there.

I am trying to submit a support ticket, but the Splunk support portal is broken for my client's account. It has been about 2 days now and they still haven't resolved access to that.


bkondakindi
Path Finder

Try running the command below to authorize your DB; it should fix the issue.

/splunk cmd python /opt/splunk_nas/etc/apps/dbx/bin/dbx_shpinst.py hostname:8089 --user admin --db Teradata_TW001


phoenixdigital
Builder

Sadly this did not work.

/opt/splunk/bin/splunk cmd python /opt/splunk/etc/apps/dbx/bin/dbx_shpinst.py localhost:8089 --user myuser --db "MSSQL Instance"
splunk password:
database password:
Password at myserver.mydomain set successfully.

Are there any files being cached anywhere?

I even tried creating the DB connection again, but it still fails.

We can manually run the INSERT statements against the database too, by the way.
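
i.e. an insert along these lines runs fine when executed directly against the database (values are made-up examples, not real data):

INSERT INTO TableForData (time, table_field2, table_field3, table_field4)
VALUES ('2014-10-28 10:07:00', 12.34, 'Field3Data', 'Field4Data');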


phoenixdigital
Builder

I even just tried this and got the same error.

sourcetype="myStuff" earliest=-6m latest=-1m | eval time = strftime(_time,"%Y-%m-%d %H:%M:%S") | eval table_field2 =  round(SplunkField2,2) | rename Splunkfield3 AS table_field3 | rename Splunkfield4 AS table_field4 | fields time, table_field2, table_field3, table_field4 | table time, table_field2, table_field, table_field3, table_field4 | dboutput type=sql database="MSSQL Instance" "INSERT INTO TableForData (time, table_field2,table_field3,table_field4) values ($time$, $table_field2$,$table_field3$,$table_field4$)"

phoenixdigital
Builder

I should also note that even running this search manually gives the same error; it is not just the saved search.
