Hi all,
First off, some details: I have a scripted job running every 60 seconds to poll the processes on the servers, and I'm trying to build a trending graph of CPU% usage.
The ok.png is what I would like to see, but I'm getting the one in problem.png. However, when I change the time range from "Last 4 hours" to something else, the graph changes.
I understand that the problem is with my search, but what is the proper stats function to use?
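For reference, a minimal sketch of the kind of 60-second poller described above (the actual script is not shown in this thread; the log path, field names, and `ps` columns here are assumptions for illustration):

```shell
#!/bin/sh
# Hypothetical per-process CPU poller: run once per minute (e.g. from cron),
# appending one key=value line per process so a Splunk forwarder can ingest it.
LOGFILE=./cpu_poll.log   # assumed path; a real deployment would log under /var/log

# pid=,comm=,pcpu= suppresses headers so every line is data
ps -eo pid=,comm=,pcpu= | while read pid comm pcpu; do
  echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) host=$(hostname) pid=$pid process=$comm cpu_pct=$pcpu" >> "$LOGFILE"
done
```

Each run appends a timestamped snapshot, which is what makes the per-minute `timechart` aggregation below possible.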
Your two searches differ only in the span you select; otherwise the results are the same. In the first search you chose to view the sum at an interval of 1m, whereas in the second you haven't specified any time span, so Splunk assigned its default for the four-hour range. If you don't specify a span, Splunk selects an appropriate one based on the time range you pick.
So it's up to you how you want to see the result, i.e. whether at an interval of 1m, 10m, 1h, and so on. Set the time span accordingly.
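For example, forcing a fixed one-minute bucket regardless of the time range picker looks like this (the index, sourcetype, and field names here are assumptions, not taken from the thread):

```spl
index=perf sourcetype=cpu_poll
| timechart span=1m sum(cpu_pct) AS total_cpu BY host
```

Dropping `span=1m` from this search gives Splunk back control of the bucket size, which is why the same query can render differently at different time ranges.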
Hi carrotball,
These two searches are the same; the difference is the time range. Note that the visualisation changes with the time range.
Can you share the script you run? Sounds like this would be very useful to have in place.
I'm getting: "These results may be truncated. This visualization is configured to display a maximum of 1000 results per series, and that limit has been reached."
Will changing that limit affect Splunk, or the memory of the PC used to view the graph? That is, will it consume more CPU/memory, etc.?
If you don't always want the sum per minute, then it's better to leave it to Splunk to set the span automatically; that's the easiest. If not, set a token and choose the span based on the time range, or use a workaround like the one below with appropriate ranges and values:
| timechart sum(...) [stats count | addinfo | eval range = info_max_time - info_min_time | eval span = "span=".case(range < 4000, "5m", range < 90000, "1h", 1=1, "12h") | return $span]
Thanks for the help!
You are welcome. Please mark it as the answer if you are satisfied, to close the thread.
I see. I had to put span=1m in one to force it to produce the results I want. I thought the maths I was using (sum) might be wrong. So there's no way to use a single function that produces the same output regardless of the time span I choose?
Since you want the aggregation on a time basis, and at a fixed interval of 1m, it's the same command you have to use, i.e.:
| timechart span=1m sum(...)
There are other ways too (| bucket span=1m | stats sum(...) by _time), but the above is the better option.
Hello! Did you notice that your two search queries are different? ok.png is using span=1m, but problem.png is not.