Dear community, I am new to Splunk DB and I am trying to understand a few things:
Context: I am trying to use Splunk DB as an interface to my data stored in Hudi, HDFS, or Cassandra. I want to use the Splunk DB interface to query this data and return it to a Splunk environment.
I have a few questions:
- I read that it is recommended to install Splunk DB on a heavy forwarder. If we only have access to the search head, is it possible to install it on search heads?
- In terms of indexing, is Splunk indexing required, or can I use the indexing of the other database?
- Overall, my use cases will use Splunk DB just as an interface.
Thanks a lot
Hello @gcusello ,
First thanks a lot for taking the time to answer me. Very much appreciated.
I just wanted to be sure we are talking about the same software. In my question, I was referring to this software:
https://splunkbase.splunk.com/app/2686
Splunk DB Connect is a generic SQL database extension for Splunk that enables easy integration of database information with Splunk queries and reports. Splunk DB Connect supports DB2/Linux, Informix, MemSQL, MySQL, AWS Aurora, Microsoft SQL Server, Oracle, PostgreSQL, AWS
Is it the same thing you are talking about?
Thanks a lot
Hi @sebdon81,
ok, sorry for the misunderstanding, but it's common for new users to confuse Splunk with a DB!
Anyway, as I said, DB Connect is an app that extracts data from a database and saves the extraction results in Splunk; it isn't efficient for online extractions.
The usual use is to schedule an extraction that runs a query against one or more tables of a database.
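Just as a quick sketch (the connection name and the SQL below are placeholders, assuming a connection already defined in DB Connect), you can test an extraction interactively with the dbxquery search command before scheduling it:

| dbxquery connection="my_sql_connection" query="SELECT id, name, updated_at FROM customers"

The same query can then be saved as a scheduled DB input, so the results are pulled and indexed on a regular interval.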
As for which database to use, that depends on the JDBC drivers available; you can find the supported database list here: https://docs.splunk.com/Documentation/DBX/3.10.0/DeployDBX/Installdatabasedrivers#Supported_database...
At this link you can find all the useful documentation https://docs.splunk.com/Documentation/DBX/3.10.0/DeployDBX/AboutSplunkDBConnect, and on the Splunk YouTube channel you can also find some useful videos.
Ciao.
Giuseppe
@gcusello thanks again.
One more question: do you know if we can use local indexing on the search head?
Since I only have access to the search head, I wanted to be sure that I can move the data to Splunk and have a local index.
Thanks for all your help.
Hi @sebdon81,
in a distributed architecture, Search Heads usually perform only the Search Head role, not also the Indexer role.
For this reason, in my opinion, this isn't a good idea.
Anyway, by sending data to the indexers you already have it available for searches, so why would you also want a local copy, and pay for the license twice?
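For reference, this is roughly what forwarding from a search head to the indexers looks like in outputs.conf (the indexer hostnames below are just placeholders):

# outputs.conf on the search head
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# receiving port on the indexers (9997 is the usual default)
server = idx01.example.com:9997, idx02.example.com:9997

With a setup like this the search head forwards what it collects (including DB Connect results) to the indexers instead of keeping a local copy, and that data is still searchable from the search head.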
If one answer solves your need, please accept it for the other people in the Community, or tell me how I can help you further.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated 😉
Hi @sebdon81,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated 😉
Hi @sebdon81,
I think that there's some misunderstanding in your question:
First of all, the product name is "Splunk Enterprise", not "Splunk DB", also because Splunk isn't a DB! Splunk is a search engine; you should see it more as Google than as a database.
Secondly, Splunk could work as an interface to external systems, but it isn't designed to work that way, also because it would be very slow and less performant than other tools: it can ingest data from these data sources, but it isn't a querying interface.
In a few words, it works like this:
as I said, if you want to use it to search data from external sources (Hudi, HDFS, or Cassandra), you have to ingest that data into Splunk and then search the indexed copy, not the original sources.
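Just as a sketch (the index and sourcetype names below are hypothetical), once the data has been ingested you search the Splunk copy like any other indexed data:

index=external_data sourcetype=dbx:orders earliest=-24h
| stats count by status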
Anyway, to answer your questions: I suggest watching some videos on the Splunk YouTube channel that describe how Splunk works.
Ciao.
Giuseppe