We are planning to deploy Splunk in our environment of nearly 5,000 devices. As per the plan, there will be around 12 indexers and 6 Search heads.
Has anyone used a generic test plan to make sure everything is working as required?
Also, what are the things that need to be tested, i.e., what should be covered under unit testing, system testing, etc.? It would be much appreciated if you could share a test plan for this.
For the moment there is no documentation on this kind of planning resource. The only documentation we have for now is the capacity planning manual, which does not cover what you need. I have looked everywhere on the internet and have not seen anything that addresses it.
I think the description of a large enterprise deployment can help:
A large enterprise deployment handles functions across the enterprise, spanning multiple data centers. These deployments might consist of:
- A large number of Splunk instances; for example, several dozen indexers and as many as 10 search heads.
- Indexing volume ranging from 300GB to many TBs per day.
- Many thousands of forwarders.
- Updates handled by a separate configuration management tool, either a stand-alone deployment server or a third-party tool like Puppet or Chef.
- A large number of users, potentially numbering in the several hundreds.
Hi Juvetm, currently there is no Splunk in our environment. We have created a test environment with forwarders, indexers, and search heads. In the test phase, we are forwarding data from 3 syslog servers and verifying that indexing happens as expected (correct host, sourcetype, etc.). These are some of the basic things we are currently testing. Before we put the same configuration into production, we need a test plan to make sure everything works as expected. It must cover all the scenarios we are going to have in production...
As far as I know, we can run certain searches to verify that things are working, so I am checking whether a standard template is available.
Please let me know if you have found a solution or template. We are currently looking into this and haven't found a way forward; if possible, please share some insight.
Thanks in advance.
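As a starting point for the "certain searches" mentioned above, here is a minimal sketch of how an indexing-validation search could be driven from the command line via Splunk's REST export endpoint. The hostname, the `admin:changeme` credentials, the `main` index, and the 15-minute window are all placeholder assumptions for your own environment; the management port (8089) must be reachable.

```shell
#!/bin/bash
# Sketch: validate that events are being indexed with the expected
# host/sourcetype/source by running a stats search over recent data.

SPLUNK_HOST="${SPLUNK_HOST:-searchhead.example.com}"   # placeholder hostname

# Build the SPL that confirms events arrive with the expected metadata.
build_validation_search() {
  local index="$1"
  printf 'search index=%s earliest=-15m | stats count by host, sourcetype, source' "$index"
}

# Run the search through the REST export endpoint (not called here).
run_search() {
  # -k only because test instances often use self-signed certificates
  curl -sk -u "admin:changeme" \
    "https://${SPLUNK_HOST}:8089/services/search/jobs/export" \
    --data-urlencode "search=$(build_validation_search "$1")" \
    -d output_mode=csv
}

build_validation_search main
```

If the per-host counts match what the syslog servers are sending, index-time parsing is working; zero rows for an expected host points at a forwarding or props/transforms problem.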
I can give you the high-level approach we used.
We did only functional testing:
Use `ssh -v -p <port> <host>` to test connectivity
Deployment server connectivity to heavy forwarders and universal forwarders (for the UFs, we used a few Windows and *nix machines)
Connectivity between indexers, search heads, and forwarders
Connectivity between Indexers and CM
Connectivity between Deployer and SH Clusters
Connectivity between staging server and Deployer
Connectivity between Indexers and License Master
Check whether users have the correct permissions (say, the network team can only access network indexes and doesn't have admin access, etc.)
ServiceNow connectivity
Ticketing system connectivity, to check whether tickets are automatically raised for alerts
Qualys API connectivity
Cyber Threat Intelligence connectivity
Cold storage on Indexers - check whether the indexes are moved from warm to cold storage
A big piece of the test work: verify that index-time and search-time extractions are done properly for the different sourcetypes.
Note: you can also do performance testing, UAT/OAT testing, etc.
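The connectivity checks above can be sketched as a simple port sweep. This is a minimal example, not a complete harness: the hostnames are placeholders, and the ports are Splunk defaults (8089 for management traffic to the deployment server, cluster manager, deployer, and license master; 9997 for forwarder-to-indexer traffic), which you should verify against your own deployment.

```shell
#!/bin/bash
# Sketch: sweep a list of host:port pairs and report reachability.

check_port() {
  local host="$1" port="$2"
  # /dev/tcp is a bash redirection feature; timeout guards against
  # silently filtered ports that would otherwise hang the connect.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OPEN ${host}:${port}"
  else
    echo "CLOSED ${host}:${port}"
  fi
}

# Placeholder topology: management-port components plus one indexer.
while read -r host port; do
  check_port "$host" "$port"
done <<'EOF'
deploymentserver.example.com 8089
clustermanager.example.com 8089
deployer.example.com 8089
licensemaster.example.com 8089
idx01.example.com 9997
EOF
```

Running this from each forwarder, search head, and the deployer covers most of the connectivity items in the checklist; anything reported CLOSED points at a firewall rule or a service that isn't listening.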
Thanks Juvetm for your prompt response. We have already done the architecture design plan, which covered what you described.
We are currently looking for test plans: system testing, performance testing, integration testing, etc. I am looking for documentation that covers these testing plans. The testing team wants to do all of this testing before we move to production.