Splunk uses a proprietary data store called an index, which consists of flat files on disk. It is nothing like a conventional database. Here is a good explanation of what an index is and how Splunk stores data:
MongoDB is used by Splunk for certain internal functionality, such as the KV store, but it is by no means where ingested data is stored. Data ingested from external sources (Universal Forwarders, etc.) all goes to an index, as specified in your configuration.
Splunk might be using a MongoDB database — I'm not sure, and I'd like confirmation.
Why MongoDB? Because I have seen a process named mongod running when the indexer starts or restarts.
There is also source = C:\Program Files\Splunk\var\log\splunk\mongod.log in index=_internal.
Yes, Splunk developed their own on-disk storage format from "zero" (if you call having a C++ compiler and standard libraries "zero"). From an architecture perspective, there are large differences between an ACID-capable, generalized RDBMS and what is essentially a search engine's data store. Splunk does not have (and does not need) many of the features a relational DB has, and most relational DBs' full-text search features are ugly side-additions anyway. The Splunk developers were able to make an on-disk data format that meets their needs exactly.
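To make the "flat files" point concrete, here is a rough sketch of what one index bucket looks like on disk. The placeholder names in angle brackets are illustrative; the exact paths under $SPLUNK_HOME/var/lib/splunk depend on your index configuration:

```
$SPLUNK_HOME/var/lib/splunk/defaultdb/db/
└── db_<newestTime>_<oldestTime>_<localId>/   # one "bucket" of events
    ├── rawdata/
    │   └── journal.gz                        # compressed raw event data
    ├── <name>.tsidx                          # time-series index files used by search
    └── Hosts.data, Sources.data, ...         # metadata summary files
```

So a search reads the .tsidx files to find candidate events, then decompresses the matching slices of rawdata — no relational engine involved anywhere.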
Splunk does not use a relational database to store events and indexes.
The storage is all flat-file based.
Please have a look here:
Hope that answers your question!