I have a situation where I need to tell my project what the JVM memory usage would look like if the number of requests increased to 200 transactions per second, sustained for the next 15 minutes:
00:00:01 = 200 transactions
00:00:02 = 200 transactions
00:00:03 = 200 transactions
all the way to
00:14:59 = 200 transactions
I know how to use machine learning in Splunk, creating models and training on datasets, but I can't figure out how to predict the exact heap usage for the scenario above.
This same situation has already occurred in my project in the past as well.
Any help would be really appreciated 🙂
I'm not sure I understand what you're asking. Are you having trouble expressing the gradual increase in total transactions in the form of features? Or have you trained a model but can't figure out how to compute the features in a streaming manner for the apply phase? Or something else?
Thanks for replying.
Forget the question for a moment.
Let's say transactions per second and JVM usage are correlated parameters, and a sudden increase in transactions will increase the JVM usage.
Now I have to calculate what my JVM usage will look like if my transaction count is 2k or 20k, etc.
I have 3 months of data in my indexes, so let's say we train a model on that; using machine learning, I can then predict JVM usage with transaction count as the predictor,
for the next 15 minutes or 15 days, according to my requirement.
But here I am asking: what if, hypothetically, my transaction count increased to 2k or 20k? Based on what the model has learned, I want to predict how my JVM usage is going to behave for that hypothetical transaction count.
I hope that makes sense; feel free to reach out if you still need more clarification.
I think I understand your question. You have a model that predicts JVM size given transaction count, and you want to know the JVM size for a hypothetical transaction count, correct? If so, you could use the makeresults and eval commands to generate simulated transaction counts, then pipe that to apply. Does that make sense?
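For example, something like this would generate one row per second for 15 minutes with the transaction count pinned at 200. (The model name "jvm_usage_model" and the field name transactions_per_sec are placeholders here; use whatever names you used when you ran fit.)

| makeresults count=900
| streamstats count as second
| eval _time = now() + second
| eval transactions_per_sec = 200
| apply "jvm_usage_model"

The streamstats/eval pair just gives each of the 900 rows its own one-second-offset timestamp; the prediction itself depends only on the transactions_per_sec value.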
Go to the Showcase in the ML Toolkit and run the Predict Server Power Consumption example under Predict Numeric Fields. That'll build a model that takes in utilization metrics and outputs predicted power consumption. Say you want to ask the what-if question "How much power would this machine draw if the disk utilization were pinned at 99%?" You can simply run the following:
| inputlookup server_power.csv
| eval "total-disk-utilization" = 0.99
| apply "example_server_power"
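If you want a single summary number for the what-if scenario rather than one prediction per row, you can average the predictions. (If I remember correctly, the example's target field is ac_power, so apply emits a field named predicted(ac_power); adjust the name if your model differs.)

| inputlookup server_power.csv
| eval "total-disk-utilization" = 0.99
| apply "example_server_power"
| stats avg("predicted(ac_power)") as avg_predicted_power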