I am using DB Connect 3.1.1 to get data from a Microsoft SQL Server 2014 database, and I need to collect the results of invoking some stored procedures.
According to the documentation, the dbxquery command supports stored procedures beginning with version 3.0.0. However, I am not able to configure a DB input based on a stored procedure.
In fact, db_inputs.conf.spec only admits a "query" parameter (while the dbxquery command allows a "procedure" parameter). If I try to "trick" the "Edit Input" GUI by using a SQL statement that contains "exec", I am able to retrieve results, but the columns are returned in alphabetical order and the "Rising column" / "Timestamp column" dropdowns contain no selectable values. Finally, if I put the same "exec" statement in db_inputs.conf, I get errors about a missing checkpoint file (I created it manually to work around that) and missing metadata files (I have no idea where those should live, and I stopped there because I don't want to do heavy "hacking" of the app).
Any suggestions on how to create a DB input based on a stored procedure?
Yeah GUI support is not so good for sprocs. (UPDATE: If you must use the GUI see the answer below from @ehughes100 for a hack, otherwise read on)
What may be happening (and this is my experience) is that DB Connect 3.1 introduced some terrible behaviour with regard to column ordering.
You used to be able to use tail_rising_column_name and input_timestamp_column_name for inputs, but after 3.1 these were replaced with input_timestamp_column_number and tail_rising_column_number. Now the docs say that the "name" options are deprecated, but in fact when you start Splunk, DB Connect performs a migration of your inputs to use the ordinal column number. I haven't deep dived, but I presume this is done by creating DDL statements based on the table names in your queries. However, because you are using a stored procedure, there are no table names! So you will see errors like this in your dbx logs:
2017-11-16 01:54:52.324 -0800 [QuartzScheduler_Worker-32] ERROR c.s.d.s.dbinput.recordreader.DbInputRecordReader - skip running input foo because runtime migration failed
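To make that presumed failure mode concrete, here is a sketch with hypothetical table and procedure names (I haven't verified the exact mechanism DB Connect uses internally):

```sql
-- A plain query: the migration can inspect dbo.events metadata
-- to map column names to ordinal positions
select id, event_time, payload
from dbo.events
order by id asc;

-- A stored procedure call: there is no table name in the text,
-- so there is nothing for the migration to inspect and it fails
exec dbo.GetDataForSplunk ?;
```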
To solve this problem I had to unwrap the sproc and create an input from the actual underlying query, then copy the tail_rising_column_number I saw there into the db_inputs.conf entry for the sproc.
To be clear,
Step 1: Get the full stored procedure from your friendly DBA
Step 2: Create an input using the sproc code (you may need to put in dummy parameters)
Step 3: Go into db_inputs.conf and find out what the actual tail_rising_column_number is. You should see something like:
[sproc_input_full]
connection = foo
index = bar
mode = rising
query = SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; \
begin transaction; select foo.id as foo,CAST(etc etc ...;\
commit transaction;
index_time_mode = dbColumn
input_timestamp_column_number = 2
tail_rising_column_number = 1
Step 4: Manually create a db_inputs.conf entry for the sproc using the tail_rising_column_number from step 3. You should end up with something like this:
[sproc_input]
connection = foo
index = bar
mode = rising
query = exec name_of_sproc ?;
index_time_mode = dbColumn
input_timestamp_column_number = 2
tail_rising_column_number = 1
Step 5: (As Recommended by @richfez) Get a DBA to comment the sproc and say
##UNDER NO CIRCUMSTANCES CHANGE THE COLUMN ORDERING OF THIS SPROC WITHOUT CONTACTING SPLUNK ADMIN
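For context on the ? in the query above: in rising mode, DB Connect substitutes the current checkpoint value of the rising column for the ? on each run, so (with made-up values) the sproc is effectively invoked like this:

```sql
-- First run: ? takes the configured initial checkpoint value
exec name_of_sproc 0;

-- Subsequent runs: ? takes the highest rising-column value
-- seen so far (column 1 in the example above)
exec name_of_sproc 42;
```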
There's some excellent support available in the Splunk Slack; check the #dbconnect channel.
So reading this, I take it that having a stored procedure as an input is still not supported in the current version?
Yeah, I don't think it's supported as an input. But it definitely works - you just can't set it up through the GUI.
Confirmed that as of 3.0, stored procedures were not supported for inputs, only for queries. This is because there is no way to stream variables into an input, so the only kind of stored procedure that could theoretically work as an input is a variable-less batch or a simple rising column (as shown here). Clever hack, but not supported to my knowledge.
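For reference, a sketch of the kind of "simple rising column" sproc that can work this way (all names hypothetical):

```sql
-- The single parameter receives the rising-column checkpoint
-- via the ? placeholder; results must come back ordered by
-- that column so the checkpoint advances correctly
CREATE PROCEDURE dbo.GetDataForSplunk
    @last_id int
AS
BEGIN
    SELECT id, event_time, payload
    FROM dbo.events
    WHERE id > @last_id
    ORDER BY id ASC;
END
```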
This answer is a little late for @mas
Here is my GUI hack. Put the following in the DBConnect query window.
if 1=2 select getdate() as EventDateTime
Exec GetDataForSplunk ?
if 1=2 select * from Bogus where EventDateTime > 'bogus' order by EventDateTime
Assume EventDateTime is the column you want to use as a rising column in the results of stored procedure GetDataForSplunk. The first statement, if 1=2 select getdate() as EventDateTime, makes the column EventDateTime show up in your Rising Column and/or Timestamp pick lists. DBConnect generates its pick list by running an FMTONLY SQL command against the query it sees, and it pulls the column names from the first result set. You can have multiple column names if you want different columns used for the rising column and the timestamp; just add them after the if 1=2. Example:
if 1=2 select getdate() as EventDateTime, getdate() as TimeStampColumnName
THIS ONLY WORKS IF THE FIELD NAMES ARE IN THE SAME COLUMN ORDER AS THEY APPEAR IN THE STORED PROCEDURE'S RESULT SET. IF THEY ARE NOT, YOU HAVE TO PUT BOGUS COLUMNS IN FRONT OF THEM. If EventDateTime is 4th and TimeStampColumnName is 6th, then use:
if 1=2 select 1 as Bogus1, 1 as Bogus2, 1 as Bogus3, getdate() as EventDateTime, 1 as Bogus5, getdate() as TimeStampColumnName.
The next part is where you execute your stored procedure, passing ? as the input your stored procedure needs to expect for the rising column.
The if 1=2 is there to get around DBConnect's check that your query contains an "order by" clause.
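If you want to see what that metadata probe sees, you can approximate it yourself in SSMS with SET FMTONLY (available in SQL Server 2014; whether DBConnect issues exactly this command is an assumption on my part):

```sql
-- Returns only column metadata for each result set,
-- without actually executing the statements
SET FMTONLY ON;
if 1=2 select getdate() as EventDateTime
Exec GetDataForSplunk NULL
SET FMTONLY OFF;
```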
As jplumsdaine22 indicated, if the stored procedure changes in a way that affects the positions of the fields you chose, you will need to fix your first if 1=2 query to match the new order. You can't rely on the column order Splunk displays results in (alphabetical) to know the actual column order.
The name Bogus used above can be anything you like.