Hi, I'm wondering if anyone can speak from experience using DB Connect against a large number of SQL Server instances (about 2,000 separate instances in total). We currently track the MS SQL error log, perfmon counters, etc. from these hosts.
We are hoping to use DB Connect 3 to poll a few of the dynamic management views for things like suspect pages, memory dumps, and I/O statistics. For our monitoring purposes, it would be great if we could do this with DB Connect rather than a solution that requires a job to write these SQL tables out to a new log file first.
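For example, a DB Connect input for the suspect-pages case could poll the `msdb.dbo.suspect_pages` table. A minimal sketch of such a query (using `last_update_date` as a rising column is an assumption on our part, not something we've validated at this scale):

```sql
-- Suspect pages recorded by SQL Server; suitable for a DB Connect
-- rising-column input keyed on last_update_date.
SELECT database_id,
       file_id,
       page_id,
       event_type,        -- 1-3 = error types, 4 = restored, 5 = repaired
       error_count,
       last_update_date
FROM msdb.dbo.suspect_pages
ORDER BY last_update_date ASC;
```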
My question is how well the connection pool scales and what it can handle. Looking at the conf file, I see there is an option to use a connection pool (useConnectionPool), and you can set maxTotalConn, which in our case I think would be the number of instances we have (the 2,000 I mentioned above).
Can this be done at all? Is it reliable with this many instances? If so, how have you implemented it? Our concern is that the connection pool wouldn't be able to handle this many connections and would cause a serious performance problem. Opening an individual connection for each instance every time is not an option.
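For reference, the pool settings in question would look roughly like this in a connection stanza (the stanza name and the two timeout parameters below are illustrative; only useConnectionPool and maxTotalConn come from the conf file we looked at, so verify the rest against the .spec file shipped with your DB Connect version):

```ini
[my_sqlserver_connection]
connection_type = mssql
useConnectionPool = true
# Max pooled connections for THIS connection definition
maxTotalConn = 8
# Illustrative pool-tuning settings; confirm names in your version's .spec
maxWaitMillis = 30000
maxConnLifetimeMillis = 120000
```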
Since SQL errors and performance counters are such common data sources, at this kind of scale we wrote the SQL error log out to a file and ingested it with a regular monitor stanza.
We saved the DB Connect inputs for the important data that only exists in table format.
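A minimal sketch of what that file-monitor approach might look like in inputs.conf on a forwarder (the path, index, and sourcetype names here are examples, not taken from the original post):

```ini
# Monitor the SQL Server error log files directly instead of
# polling them through DB Connect.
[monitor://D:\MSSQL\MSSQL15.MSSQLSERVER\MSSQL\Log\ERRORLOG*]
sourcetype = mssql:errorlog
index = mssql
disabled = 0
```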
You can always scale out your heavy forwarder (HF) layer and run as many HFs with DB Connect on them as required to sustain all 2,000 connections.
Since I don't know the intervals you're pulling data at, the volume of the data, or how long the queries run, it's tough to calculate how many HFs you'll need...
Read here onward and plan according to your requirements and conditions: https://docs.splunk.com/Documentation/DBX/3.2.0/DeployDBX/Architectureandperformanceconsiderations