Hey there,
I'm trying to do two things and it looks like I can't. I have some fields with ugly names like "Current_SuccessPercent" that I want to rename. I also want to format the data in the field to be human readable. So I have the following search:
index=summary_foo
| stats sum(response_time) as response_time, sum(http_200) as http_200, sum(http_400) as http_400 by sourcetype
| eval Current_SuccessPercent = (http_200 / (http_200 + http_400)) * 100
| fieldformat "Response Time" = tostring(round(response_time,2),"commas")." ms"
| fieldformat "Success Ratio" = tostring(round(Current_SuccessPercent,0))."%"
| rename sourcetype as Sourcetype
In my results table this gives me two columns, Current_SuccessPercent and Success Ratio. Success Ratio is formatted correctly, but sorting on it does not work; Current_SuccessPercent sorts correctly, but the formatting is missing. If I use
| fieldformat Current_SuccessPercent = tostring(round(Current_SuccessPercent,0))."%"
The formatting is correct and I can sort appropriately, but then the column name looks bad. If I try to rename it after the fieldformat, I break the formatting (I see two columns: one with my unformatted data, and one that is empty but has the correct column name). And if I rename before the fieldformat, I cannot reference a column name with spaces in it inside the round() function.
So can I not rename a column AND use fieldformat AND still be able to sort by that column?
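For future searchers, here is one sketch (untested) that may cover all three at once. Field names containing spaces are written with double quotes when assigned and with single quotes when read inside an eval/fieldformat expression, and fieldformat is display-only, so the underlying value stays numeric and sortable:
index=summary_foo
| stats sum(response_time) as response_time, sum(http_200) as http_200, sum(http_400) as http_400 by sourcetype
| eval "Success Percent" = (http_200 / (http_200 + http_400)) * 100
| sort - "Success Percent"
| fieldformat "Success Percent" = tostring(round('Success Percent',0))."%"
| rename sourcetype as Sourcetype
The key is doing the rename (or the quoted eval assignment) before the fieldformat, and sorting while the field is still numeric.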
Way late to this answer but hopefully it will help future searchers. I used the table command with fieldformat to parse my data, with my rename command after my fieldformat commands, and that produced incorrect output. I believe rename is hoisted, similar to how JavaScript hoists variable declarations to the top of their scope before evaluation: when I used rename, it was effectively applied first, so my fields were renamed and then fieldformat ran against the old names, producing incorrect results. Once you use rename, you need to reference the renamed fields in your fieldformat expressions for the data to format properly.
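A minimal sketch of the ordering that worked (old_name and "New Name" are placeholders, not fields from my actual search): do the rename first, then reference the new name, single-quoted, inside the fieldformat expression:
| rename old_name as "New Name"
| fieldformat "New Name" = tostring(round('New Name',2),"commas")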
Same issue. Is there a solution?
Same issue here; I still don't think there is a solution for it?
Not sure if this will answer your question but hopefully it will point some of us in the right direction. Many thanks to superstar Splunker Cindy McCririe!
My situation was similar but not identical, which is why I'm not sure this will answer the original question.
Renaming would fail with the same empty column, as would:
fieldformat "Expected Value" = "$" + tostring("Expected Value","Commas")
but THIS works...
| eval "Expected Value" = round(Number * (Probability/100))
| fieldformat "Expected Value" = "$" + tostring('Expected Value',"Commas")
(Note the single quotes instead of double quotes in the fieldformat command)
| table Number, Probability, "Expected Value"
Hope that helps!
If I do the above and try to use the chart command, it fails. Charts are blank. Any workaround for this?
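One possible workaround (a sketch, untested): fieldformat only changes how the statistics table renders values, and charts appear to plot the raw field, so drop the fieldformat for the chart and run it over the plain numeric field (using the same hypothetical "Expected Value" example from above):
| eval "Expected Value" = round(Number * (Probability/100))
| chart max('Expected Value') over Probability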
I'm also having this issue... Splunk team, are you out there?
Just wondering if this ever got solved?
Still appears to be an issue in 5.0.2.
I am having a similar issue where I lose the fieldformat if I do a rename and the new field name has spaces in it.
Using eval leaves the field as a string, so "973 ms" appears before "9" and that breaks my ascending sort. I have also updated my example with your suggestion about the success ratio; I was doing that in my actual query but forgot to pull it into this simplified example.
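One sketch that keeps the sort working even with eval (assuming no later command re-sorts the results): sort while the field is still numeric, then convert it to a display string afterward:
| sort response_time
| eval response_time = tostring(round(response_time,2),"commas")." ms"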
Have you tried changing fieldformat to eval?
I did that and was able to sort
Also, Success Ratio (in my tests) came out to 1% for a 100% success rate; I ended up multiplying by 100 in the eval Current_SuccessPercent line to get 100%.
--
Jeremy