I am not sure what is causing this behavior.
My table has 2369 rows.
I found this by using a Splunk DB Connect Database Query with the following query:
SELECT SQL_CALC_FOUND_ROWS 1 FROM myTable
If I actually return row data in my query, the number of rows returned drops. For example:
SELECT * FROM myTable
returns 1018 rows, and
SELECT * FROM myTable WHERE id > 900
returns 180 rows.
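To rule out the database itself, it helps to check the true row count directly, bypassing DB Connect. A minimal sketch of that check, using sqlite3 purely for illustration (the thread's database is MySQL, where the same COUNT(*) query applies):

```python
import sqlite3

# Illustration only: build an in-memory table with 2369 rows, matching the
# table size described in the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO myTable (payload) VALUES (?)",
                 [("row",)] * 2369)
conn.commit()

# COUNT(*) reports the full table size regardless of any client-side row cap.
(count,) = conn.execute("SELECT COUNT(*) FROM myTable").fetchone()
print(count)  # 2369; if DB Connect shows fewer, the limit is in the app layer
```

If this count disagrees with what the DB Connect dashboard returns, the limit sits between Splunk and the database, not in the table.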
So I have ruled out that it is a hard limit on the number of rows.
I suspected there was a setting in java.conf
$SPLUNK_ROOT/etc/apps/dbx/local/java.conf
that was limiting my query size. I have been through the "OMG so many settings" trying to identify the ones limiting query size and increased everything by an order of magnitude. This has been purely trial and error and has had zero effect. I ended up reinstalling Splunk and DB Connect from scratch, in case my own changes had caused it. No effect.
If someone could explain which components Splunk and DB Connect use to execute a MySQL query, this would give me other avenues to target, e.g.:
java.framework is used for XXX, MySQL_java_driver is used for XXX, etc.
Any help in lifting this limit would be greatly appreciated.
In my case I noticed the MAX_ROWS=1001
constant defined in:
{SPLUNK_HOME}\etc\apps\splunk_app_db_connect\bin\dbxquery.py
Basically the query only returned up to 1001 rows. I increased MAX_ROWS and that resolved my issue.
I hope they will fix this in a future release, but hmozaffari's post was extremely helpful.
Replace the original code in $SPLUNK_HOME/etc/apps/splunk_app_db_connect/bin/dbxquery.py (the lines shown commented out with # below) with the following.
This will allow you to set maxrows = 0 in your splunk query to make it unlimited.
Your dbxquery.py code should look like the below:
#maxrows_opt = int(self.maxrows or 0)
#maxrows = MAX_ROWS
maxrows_opt = int(self.maxrows or MAX_ROWS)
maxrows = 0
parser = partial(parseEntry, abbreviate)
#if maxrows_opt and maxrows_opt < MAX_ROWS:
if maxrows_opt and maxrows_opt > 0:
    maxrows = maxrows_opt
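To see what this patch actually changes, the before/after capping logic can be mirrored in a small standalone function (effective_maxrows is an illustrative name of mine, not part of dbxquery.py):

```python
MAX_ROWS = 1001  # default cap shipped in dbxquery.py, per this thread


def effective_maxrows(raw, patched=True):
    """Mirror of the row-cap logic in dbxquery.py.

    raw is the maxrows= option as the search passes it in
    (a string, or None when unset). A return value of 0 means unlimited.
    """
    if patched:
        opt = int(raw or MAX_ROWS)    # unset -> default cap of 1001
        return opt if opt > 0 else 0  # explicit maxrows=0 -> unlimited
    # Original behavior: maxrows can only lower the cap, never raise it.
    opt = int(raw or 0)
    return opt if opt and opt < MAX_ROWS else MAX_ROWS
```

With the patch, maxrows=5000 is honored and maxrows=0 disables the cap entirely; unpatched, anything above 1001 is silently clamped back to 1001.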
Is this a bug, a technical issue, or Splunk's way of limiting how much data we can query, because they want us indexing the data?
By default, you should be able to query hundreds of thousands of events in the "database query" dashboard.
Basically, all default settings are located in .../default/...conf and customized settings in .../local/...conf.
You can compare java.conf in local against the one in default to see whether anything changed.
Or keep only the [java] and [bridge] sections in .../local/java.conf, remove (or back up) the rest, and restart Splunk to see if this resolves the issue.
I noticed the same behavior on DB Connect v2 using the dbxquery command with the maxrows parameter. It seems to ignore values greater than 1000 and returns only the first 1000 rows.
Reinstalled everything from scratch: deleted Splunk from the applications folder and started again.
select * from myTable;
still returns only 1018 rows, while
select id from myTable;
returns 2369 rows.
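One possible explanation for that discrepancy (all 2369 rows for a single id column, but only 1018 for SELECT *) is a cap on the total size of the result rather than on the row count. That is purely a guess; the arithmetic can be sketched with made-up numbers:

```python
def rows_within_budget(bytes_per_row, budget_bytes, total_rows):
    """Rows that survive if output is truncated at a hypothetical byte budget."""
    return min(total_rows, budget_bytes // bytes_per_row)


# Made-up figures: the same budget admits every narrow row but cuts wide ones.
BUDGET = 101_800  # hypothetical cap, chosen only to reproduce the symptom
print(rows_within_budget(100, BUDGET, 2369))  # wide SELECT * rows -> 1018
print(rows_within_budget(8, BUDGET, 2369))    # narrow id-only rows -> 2369
```

If this guess were right, you would expect the cutoff to move as the projected columns get wider or narrower, which would be easy to test with queries selecting different column subsets.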
Deleted the dbx app folder and reinstalled fresh from the website.
Now getting "Java Bridge Server is not running" with
ERROR Command output: None
and an error about a premature end of file.
I put the original dbx folder back, but this did not fix it.
I will try to fix that first and then see if the limit is gone.
I am going to try reinstalling Splunk from scratch again.
The best way is to extract the original DB Connect app and compare it against yours to see what you have modified in java.conf.
Or install a fresh DB Connect app and make changes only via the UI.
I renamed java.conf to "java.conf copy" in .../default folder.
So now I only have java.conf in .../local folder.
Restarted Splunk with no effect. Still returning limited number of rows.
I also renamed java.conf to "java.conf copy" in the .../local folder to check whether that file was actually being used (i.e., did I find the correct one). After restarting I get "The Java Bridge server is not running", so I do have the active java.conf, and it is the same as the default one packaged with the app.
It sounds like my problem is elsewhere, any ideas?
This limit also shows up in Splunk indexing: 1018 events found.