Could you give us more of your use case?
For example, if you were a student just trying to figure out the easiest way to use the automatically generated test data in the demo version of Splunk to run a sample data visualization, then the answer is this...
On the other hand, a production shop that needs to achieve sub-second response times for mission-critical queries might ask that same question with a completely different mindset.
If the above doesn't answer your question, then here's some of the information you can fill in for us so we can home in on what you need to know.
Don't worry about answering ALL of the questions; just give us an idea of what you're trying to do.
What kind of data? Do you want to copy all of the data, execute searches against it, or clone and offload ALL of it?
What are you trying to achieve? What is your current infrastructure? What is your current expertise?
When you ask "convenient" and "rapid", do you mean "requiring the least effort to set up" or "returning the results of inquiries most rapidly"?
Thanks for your reply, I'll try to clarify my question.
1) Rapid means "requiring the least effort to set up".
2) I want to build an ML system on top of production data from Splunk. So I need to extract a large amount of data by query initially, and after that I need to extract new data continuously as it becomes available. So I can't do this manually.
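For that "pull a lot once, then pull new events on a schedule" pattern, one common approach is to script against Splunk's REST export endpoint (`/services/search/jobs/export`), which streams results instead of waiting for a whole job to finish, and to keep a time checkpoint so each run only fetches events newer than the last one. Here is a minimal sketch using only the Python standard library; the hostname, credentials, and index name are placeholder assumptions, not values from this thread:

```python
import base64
import json
import urllib.parse
import urllib.request

def build_incremental_query(index, earliest_epoch):
    # Fetch only events newer than the last checkpoint (epoch seconds),
    # so repeated runs pick up new data without re-pulling everything.
    return "search index=%s earliest=%d" % (index, earliest_epoch)

def export_events(base_url, username, password, query):
    # POST to the export endpoint; with output_mode=json, Splunk streams
    # one JSON object per line as results become available.
    data = urllib.parse.urlencode(
        {"search": query, "output_mode": "json"}).encode()
    req = urllib.request.Request(
        base_url + "/services/search/jobs/export", data=data)
    # Splunk's management port (default 8089) accepts HTTP basic auth.
    token = base64.b64encode(
        ("%s:%s" % (username, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            if line.strip():
                yield json.loads(line)

if __name__ == "__main__":
    # Placeholder host/credentials -- replace with your own deployment.
    q = build_incremental_query("main", 0)
    for event in export_events("https://splunk.example.com:8089",
                               "admin", "changeme", q):
        print(event.get("result"))
```

After each run you would persist the timestamp of the newest event seen and pass it as `earliest_epoch` next time. The Splunk Python SDK (`splunklib`) wraps this same endpoint if you would rather not handle HTTP yourself.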