Your question is not something Splunk can calculate for you.
Failure of hardware is (eventually) inevitable.
If you store one copy of data on a disk it will at some point in the future be lost.
How long this is likely to take depends on many (many) factors, but let's assume you only consider HDD failure.
What is the SLA on your hard disk? You don't have one. Manufacturers may offer a warranty or quote an MTBF figure, but not an SLA.
SLAs are built by estimating likely failures over time against cost. The basic single-HDD example can be improved by adding additional HDDs and/or using RAID, and you can start to wrap SLAs around RAID sets by ensuring you have sufficient spindles in your array and sufficient human resources (technicians, ops staff, stock management, replacement-parts logistics, etc.) to maintain the solution.
Splunk is exactly the same. The Splunk architecture lets you design for high availability and fault tolerance; how far you take it becomes a business decision based on cost.
Commonly, Splunk Architects will deploy a solution based on:
- your desired "search range" (how long do you need to keep data searchable),
- your overall retention period (how long must you keep data for, even if not immediately searchable)
- your desired search and replication factors (how quickly a failure can be remedied, and how many Splunk indexer failures the deployment should tolerate)
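In an indexer cluster, the last point maps directly onto two settings on the cluster manager. A minimal sketch (values are placeholders, and older Splunk versions spell the mode `master` rather than `manager`):

```ini
# server.conf on the cluster manager
[clustering]
mode = manager
replication_factor = 3   ; total copies of each bucket kept across peers
search_factor = 2        ; how many of those copies are searchable
```

With these values the cluster can lose one indexer with no loss of searchability, and two indexers with no loss of data; the cost is roughly 3x the raw storage.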
It's a very complicated question, and not something Splunk, PS, your ops team, or your hardware vendor can answer on their own.
(Even then, SLA accuracy is frequently contested and argued over, in my experience.)
If data durability is a significant concern for your business, you could do a lot worse than using Splunk SmartStore, which writes your data to an Amazon S3 bucket (or a compatible object storage platform).
If you use AWS, you get a data DURABILITY (not availability) of 99.999999999% (11 nines).
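For reference, wiring an index up to SmartStore is done in indexes.conf. A minimal sketch, in which the bucket name, region, and index name are all placeholders:

```ini
# indexes.conf (bucket, region, and index name are placeholders)
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
remotePath = volume:remote_store/$_index_name
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

Once `remotePath` is set, warm buckets are uploaded to the remote store and the local disk effectively becomes a cache, so the durability question largely shifts from your indexers' disks to the object store.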