Hi
I am building an application in Splunk that processes 200K records per second fetched from Hadoop. What sizing do I need to look at for licensing? I can see in Hunk that virtual indexes are created for the Hadoop data to be processed. Does indexing data through these virtual indexes count against the per-day data limit in the Hunk license? Can someone help me with this?
Thanks in Advance
Regards
Subbu
Subbu, Hunk is licensed based on the size of your Hadoop cluster, in terms of the number of nodes in your cluster (Task Trackers specifically). We are processing the data in place in Hadoop, and you need only a Hunk Search Head to point towards the Hadoop cluster to process that data. You shouldn't need to care about data size or events. We assume you are scaling your Hadoop cluster according to your own needs in terms of storage and processing.
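To make the "processing in place" point concrete: a virtual index in Hunk is just configuration on the search head, not data indexed into Splunk. A minimal sketch of what that configuration can look like in indexes.conf is below; the provider name, index name, hostnames, and paths are all hypothetical placeholders, so check the Hunk documentation for your version before using any of these settings.

```ini
# indexes.conf on the Hunk search head (all values below are hypothetical)

# The provider stanza tells Hunk how to reach the Hadoop cluster.
[provider:my-hadoop-provider]
vix.family = hadoop
vix.env.JAVA_HOME = /usr/lib/jvm/java-7-oracle
vix.env.HADOOP_HOME = /opt/hadoop
vix.fs.default.name = hdfs://namenode.example.com:8020
vix.splunk.home.hdfs = /user/hunk/workdir

# The virtual index points at raw data already sitting in HDFS.
[my_virtual_index]
vix.provider = my-hadoop-provider
vix.input.1.path = /data/events/...
```

A search against `index=my_virtual_index` is then executed as MapReduce work on the cluster itself; no events are ingested into Splunk, which is why data volume does not factor into the Hunk license.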
Thanks very much. This info will be very useful for me.
No, we do not care about data volume in Hunk. It is only based on the size of your cluster.
Thank you very much. I have another question: when you say the Hunk license is based on the size of the Hadoop cluster, is there any limit to the amount of data Hunk can handle per day under that license?