Splunk Search

Timechart bug.

Contributor
index="iedss_was_prd" OR index=iedss_mule_prd 
| rex field=source "(?P<logType>[^\\\]+)$" 
| eval raw_len=len(_raw) 
| eval raw_len_mb = raw_len/1024/1024 
| eval raw_len_mb = round(raw_len_mb,2) 
| timechart span=1d useother=false sum(raw_len_mb) as MB by logType limit=0 

There is clearly a bug in timechart.
I have around 70 logTypes.
After running the above query across 7 days, some logType column values in the result are zero.
If I add |eval logType ="thatlogtype*"
then the result is right.

Thoughts?


SplunkTrust

Well, it's not a bug. All values are zero because you are rounding to 2 decimal places. Look at the raw_len field values and you should see numbers that are very small (a few hundred bytes), and you are then converting that to MB, which will be very, very small.
If the problem persists, could you please let us know the output of raw_len? Or you could try increasing the rounding precision to 5 or 6.

Also try normalizing it:

index="myindex"
 | rex field=source "(?P<logType>[^\\\]+)$" 
 | eval raw_len=len(_raw) 
 | eval raw_len_mb = round(raw_len/1024/1024,5 )
 | timechart span=1d useother=false sum(raw_len_mb) as MB by logType limit=0 

let me know if this helps!

Champion

Hi @reverse,
you are calculating the raw length in MB for one event and then rounding it off to 2 decimal places. Most likely your raw_len_mb is something like 0.00xxx or 0.0yyy or even 0.000zzz.
Remove the rounding eval before the timechart and check the raw_len_mb field values under Interesting Fields.
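To see the effect in isolation, here is a throwaway check (makeresults with a made-up event length, nothing from your index): a 3,500-character event is roughly 0.0033 MB, which disappears at 2 decimal places but survives at 5.

| makeresults
| eval raw_len=3500
| eval mb_2dp=round(raw_len/1024/1024, 2), mb_5dp=round(raw_len/1024/1024, 5)

mb_2dp comes back as zero, while mb_5dp comes back as 0.00334.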


Contributor

Good catch, let me try.


Contributor

@Sukisen1981 it worked! Thank you.


Champion

Glad it worked. I am converting my comment to an answer; please accept it, as it helped resolve your issue.


Contributor

But now how do I change it to MB for all columns?

| timechart span=1d useother=false sum(raw_len) as KB by logType limit=0
| eval MB = round(KB/1024/1024,2)
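One possible way (a sketch, assuming raw_len from the eval above is the per-event size estimate): with by logType the output columns are named after the logType values, so the as KB rename never appears and an eval on KB finds nothing. Summing the raw bytes per day and then rewriting every column with foreach works around that:

index="myindex"
| rex field=source "(?P<logType>[^\\\]+)$"
| eval raw_len=len(_raw)
| timechart span=1d useother=false sum(raw_len) as bytes by logType limit=0
| foreach * [ eval <<FIELD>> = round('<<FIELD>>'/1024/1024, 2) ]

The foreach wildcard should leave _time untouched, since field wildcards don't normally match underscore-prefixed fields.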


Champion

When you use len() on anything, it does not give you the length in bytes/KB/MB at all; it merely gives you the string/character length of your _raw event. What exactly are you trying to do here?


Contributor

Getting log file sizes to monitor log growth and spikes.


Champion

You get what I am saying, right? The moment you use len(x), it works as a string length function.
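For example, a throwaway makeresults search (made-up string, nothing from an index):

| makeresults
| eval sample="2019-08-20 12:00:00 INFO app started"
| eval chars=len(sample)

chars comes back as 36, the number of characters in the string, not a size on disk.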


Contributor

That's fine. I know it won't be a 100% match with the log file size; even a 95% match would do.


Champion

Hi @reverse, your timechart is correct, and it could be a bug, but it is more likely an issue with logType.
You say 'If I add |eval logType ="thatlogtype*"' it works.
That suggests logType is not being extracted/identified by default. Could you please post a sample of your logs and the query before the timechart part?


Contributor

logType is getting extracted just fine, so no issues there. It is just that all values are zero.


Contributor
index="myindex"
| rex field=source "(?P<logType>[^\\\]+)$" 
| eval raw_len=len(_raw) 
| eval raw_len_mb = raw_len/1024/1024 
| eval raw_len_mb = round(raw_len_mb,2) 
| timechart span=1d useother=false sum(raw_len_mb) as MB by logType limit=0 