So, I've crafted a query that I thought would work, but due to the nature of floating-point numbers in Splunk, it doesn't...
Basically, my setup is as follows: I have a field "some.long.buried{}.field" (renamed because eval complained) that contains values ranging from 0.0 to 1.0, depending on the output of my system. If something happens (error, exception, warning, etc.), that value gets logged as -1. I am attempting to bucket it into tenths from 0 to 1, plus two catch-all buckets: one for values less than 0 (errors) and one for values greater than 1 (who the heck knows, but better to have it!).
What I expect is a count for each bucket. However, I am finding that only the -1 values are getting caught. After poking around in the documentation, it seems to come down to how floating-point values are compared and the nuances that come with that. My values are quite long (0.04716907849197179, for example), so I assume that is what is going on. I've already read the following posts and tried to figure out how to get the result I'm looking for, but nothing seems to work... Any help would be very appreciated!
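One hypothesis consistent with what I'm seeing: if the field value were being compared as a *string* rather than a number, "." sorts before "0" lexicographically, so "0.047..." would not test as less than ".1", while "-1" would still sort below "0" and get caught. A minimal illustration in Python (not SPL, just to show the string-vs-numeric comparison difference):

```python
# Field values arrive as text; how they compare depends on whether
# the comparison is numeric or lexicographic.
value = "0.04716907849197179"

# Numeric comparison: buckets correctly into [0, 0.1).
print(float(value) < .1)   # True

# String comparison: "0" sorts AFTER ".", so this is False.
print(value < ".1")        # False

# But "-" sorts before "0", so the x<0 bucket still matches.
print("-1" < "0")          # True
```

If something like this is happening inside case(), it would explain why only the -1 bucket ever fills.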
(I've been told I can't post links, but I've read the posts on comparing floating-point numbers (search vs where), the eval and bin documentation, and also on bucketing fields that are floating-point values.) You'd think that last one would've helped more than it did!
index=myindex "some.long.buried{}.field"="*"
| rename some.long.buried{}.field as testing
| eval bucket=case(testing<0,"x<0", testing<.1,"0<=x<.1", testing<.2,".1<=x<.2", testing<.3,".2<=x<.3", testing<.4,".3<=x<.4", testing<.5,".4<=x<.5", testing<.6,".5<=x<.6", testing<.7,".6<=x<.7", testing<.8,".7<=x<.8", testing<.9,".8<=x<.9", testing<1,".9<=x<1", testing>=1,"x>=1")
| stats count by bucket
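For reference, here is a variant of the same search I'm considering, which forces a numeric comparison with tonumber() before the case() (a sketch only, not verified):

```
index=myindex "some.long.buried{}.field"="*"
| rename some.long.buried{}.field as testing
| eval testing=tonumber(testing)
| eval bucket=case(testing<0,"x<0", testing<.1,"0<=x<.1", testing<.2,".1<=x<.2", testing<.3,".2<=x<.3", testing<.4,".3<=x<.4", testing<.5,".4<=x<.5", testing<.6,".5<=x<.6", testing<.7,".6<=x<.7", testing<.8,".7<=x<.8", testing<.9,".8<=x<.9", testing<1,".9<=x<1", testing>=1,"x>=1")
| stats count by bucket
```

If tonumber() isn't the issue, is there a cleaner way to get these tenth-sized buckets (bin with span=0.1, maybe)?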