
How can I set up a resilient DB Connect input with Splunk Cloud?

gf13579
Communicator

I have Splunk Cloud and need to ingest data from a database. I have this working fine in a lab, with a single standalone Splunk instance running DB Connect.

I'd like to deploy this in a resilient configuration for a customer. We're installing a pair of Heavy Forwarders to ensure resilience for syslog and UF inputs, but how can I achieve a resilient configuration for my DB Connect source?

DB Connect stores the most recent rising column value in an input-specific file under /opt/splunk/var/lib/splunk/modinputs/server/splunk_app_db_connect. Surely the only way of making this work would be for this value to be shared across two hosts running DB Connect?
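Just to make the idea concrete, the crudest approach I can imagine is periodically copying that checkpoint directory from the active forwarder to the standby, e.g. with a cron'd rsync. The hostnames and schedule below are purely illustrative, and it assumes passwordless SSH between the forwarders:

    # on the standby HF, pull DB Connect checkpoint state from the active HF "hf1" every 5 minutes (example only)
    */5 * * * * rsync -a hf1:/opt/splunk/var/lib/splunk/modinputs/server/splunk_app_db_connect/ /opt/splunk/var/lib/splunk/modinputs/server/splunk_app_db_connect/

But that still doesn't solve making sure only one of the two forwarders actually runs the input at any given time, so I'm hoping there's a better-supported pattern.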

kozanic_FF
Path Finder

I know this is an old question and I assume a solution may have been found by now, but since I had to deal with a similar situation recently, I thought I'd add the solution we put in place.

We had two servers set up as a high-availability cluster, which from a Splunk point of view is fine: whichever server is running will forward the logs. But, as mentioned, with DB Connect there are the rising column checkpoint files to factor in.

The solution for us was a floating drive that moves with the active node. We installed Splunk onto this drive.


gf13579
Communicator

So you installed a full Splunk instance, to act as a heavy forwarder, on the cluster nodes?


kozanic_FF
Path Finder

Yes, using full Splunk Enterprise, we completed the install on one node, onto the floating drive, first.
Then we failed over so the floating drive moved to the newly active node and installed again (to the same location on the floating drive).
Then we just updated inputs.conf and server.conf with a host / server name that makes sense; it will be the same no matter which node is active (see the snippets below).
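For example, something along these lines in $SPLUNK_HOME/etc/system/local (the name "dbx-hf" is just a placeholder, not our actual value):

    # inputs.conf
    [default]
    host = dbx-hf

    # server.conf
    [general]
    serverName = dbx-hf

That way events and the instance name look identical regardless of which physical node is currently active.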

You just have to make sure that Splunk doesn't try to start before the floating drive has been made available to the node.
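On systemd-based hosts, one way to enforce that (just a sketch, assuming Splunk is started by a systemd unit rather than by the cluster manager, and that the floating drive mounts at /opt/splunk) is a drop-in like:

    # /etc/systemd/system/Splunkd.service.d/wait-for-mount.conf (illustrative; your unit name may differ)
    [Unit]
    RequiresMountsFor=/opt/splunk

In our case the cluster software controlled the start order, but the idea is the same: no mount, no Splunk.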


esix_splunk
Splunk Employee

Currently there is no way to have HA with Splunk + DBX internally.

The best way our customers have approached this is to use some form of OS-based clustering, such as Red Hat clustering. Then the HA is handled at the OS level...
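As a rough sketch only (resource names, device, mount point and service unit are placeholders, not a validated configuration), a Pacemaker/pcs setup for this pattern might look like:

    # shared filesystem holding the Splunk install, a floating IP, and the Splunk service,
    # grouped so they always fail over together to whichever node is active (example values)
    pcs resource create splunk_fs ocf:heartbeat:Filesystem device=/dev/vg_splunk/lv_splunk directory=/opt/splunk fstype=xfs --group splunk_grp
    pcs resource create splunk_ip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24 --group splunk_grp
    # assumes Splunk boot-start created a Splunkd.service unit on both nodes
    pcs resource create splunk_svc systemd:Splunkd --group splunk_grp

Grouping the resources keeps the filesystem, the IP and splunkd moving together, so DB Connect always sees the same checkpoint files.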


jingwangmskcc
Engager

Is HA with Splunk + DBX possible now, after 5 years? Are there any new solutions?


weliasz
Splunk Employee

High availability for DBX on a Search Head Cluster on the classic stack will be added in version 3.13, which will be released soon. It was introduced for Cloud SHC in version 3.11.
