"df" ave(used) not giving consistent results

hartfoml
Motivator

I have this search to find disk space use over time:

`index="os" sourcetype="df" host=All_My_Servers | multikv fields FileSystem, Size Used Avail UsePct MountedOn | search MountedOn=*index* | timechart span=1d avg(Used) by host`

This is the event data:

`/dev/mapper/node3_vg-index3_lv   xfs     5.5T        3.6T        2.0T         65%    /mnt/index3`

So avg(Used) shows as 3.6.
On some systems I have this event data:

`/dev/mapper/index2_vg-index2_lv   xfs     1.0T         50G        974G          5%    /mnt/index2`

In this event, avg(Used) shows as 50000000.
I tried to separate the hosts that have terabytes from the hosts that have gigabytes into different searches, but when I do this: `| eval GB_Used=(Used/1024/1024)` I get no results.

Does anyone know why this is happening, or how I could get drive use for both the T and G systems on the same chart? Or maybe there is a better way altogether.
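
One likely cause: multikv extracts Used as a string with its unit suffix (e.g. "3.6T", "50G"), and arithmetic on a non-numeric string evaluates to null, which would explain the empty GB_Used results. A minimal sketch of a normalizing search, assuming the suffix is always a single K/M/G/T letter; the rex pattern and the used_num/used_unit field names are illustrative, not from the original post:

```
index="os" sourcetype="df" host=All_My_Servers
| multikv fields FileSystem Size Used Avail UsePct MountedOn
| search MountedOn=*index*
| rex field=Used "(?<used_num>[\d.]+)(?<used_unit>[KMGT])"
| eval GB_Used=case(used_unit=="T", used_num*1024,
    used_unit=="G", tonumber(used_num),
    used_unit=="M", used_num/1024,
    used_unit=="K", used_num/1024/1024)
| timechart span=1d avg(GB_Used) by host
```

With every host converted to GB, the T and G systems land on one comparable chart.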

1 Solution

gathiu
Engager

Hi,

Well, I would focus directly on the source of the data: why is there sometimes TB and sometimes GB? Why not a single unit? I would recommend switching from 'df -h' to 'df -k' to get one unit; otherwise you end up in topics like "how to convert one value to be comparable with another one"...

The other option is the percentage, which is better for comparing hosts or for setting thresholds/limits.
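
For example, with `df -k` output (assuming the df collection script on the forwarders can be changed, and keeping the field names from the original search), Used comes back as plain 1K blocks and the conversion the asker tried works as written:

```
index="os" sourcetype="df" host=All_My_Servers
| multikv fields FileSystem Size Used Avail UsePct MountedOn
| search MountedOn=*index*
| eval GB_Used=Used/1024/1024
| timechart span=1d avg(GB_Used) by host
```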


hartfoml
Motivator

Linu, you should have put this as an answer so I could have selected it as the answer.

Thanks, I will use the % filled.



linu1988
Champion

I would say go with the percentage; it's more appropriate and user-friendly to understand. There are also workarounds that replace the unit suffixes and convert the values into GB or TB.
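
A minimal sketch of the percentage approach, assuming UsePct is extracted as a string like "65%" (stripping the % sign is the only conversion needed; the pct_used field name is illustrative):

```
index="os" sourcetype="df" host=All_My_Servers
| multikv fields FileSystem Size Used Avail UsePct MountedOn
| search MountedOn=*index*
| eval pct_used=tonumber(replace(UsePct, "%", ""))
| timechart span=1d avg(pct_used) by host
```

Percentage also puts a 1.0T volume and a 5.5T volume on the same 0-100 scale, which is usually what you want for threshold alerts.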
