How to fix wandering trellis order?

yuanliu
SplunkTrust

Sometimes, running the same search repeatedly produces the panels in different orders when trellis visualization is used.  For example,


((sourcetype=A field1=*) OR (sourcetype=B user=* field2=*)) clientip=*
 earliest="01/24/2023:02:00:00" latest="01/24/2023:08:00:00"
| fields clientip user field1 field2
| eval user = mvindex(split(user, ":"), 1)
| eventstats values(user) as user by clientip
| eval clientip = clientip . if(isnull(user), "/", "/" . mvjoin(user, ","))
| timechart span=5m limit=19 count(field1) as s1 count(field2) as s2 by clientip


Here, field1 only exists in sourcetype A; user and field2 only exist in sourcetype B. The search period is fixed in the past, so the search results cannot change.  Yet the following are screenshots of two consecutive executions.

[Screenshot 1: wandering-trellis1.png]

[Screenshot 2: wandering-trellis2.png]

They show the same number of trellis panels with exactly the same clientip titles, and each clientip's graph is identical across the two runs.  But the order is obviously rearranged.  (In the Statistics view, columns are arranged in lexicographic order of "s1: clientip" and "s2: clientip".)

Is there some way to be certain of the order?


tscroggins
Influencer

Hi @yuanliu,

I would normally use a transpose-sort-transpose pattern for custom column sorting; however, trellis needs the field metadata (name, data_source, splitby_field, and splitby_value) provided by chart, timechart, and xyseries. To force trellis to sort columns without modifying their appearance, we can exploit Split By display behavior. Trellis trims whitespace from field names at display time, so we can untable the timechart result, sort events as needed, pad the aggregation field value with a number of spaces equal to the sort position, and use xyseries to maintain the metadata trellis requires.

| untable _time aggregation value
| rex field=aggregation "(?<field>[^:]+): (?<clientip>.+)"
| sort 0 - field clientip
| streamstats count
``` pad with count leading spaces (40 available in this example); trellis trims the whitespace at display time ```
| eval aggregation=substr("                                        ", 0, count).aggregation
| rename aggregation as clientip
| xyseries _time clientip value
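
For comparison, the transpose-sort-transpose pattern mentioned above would look roughly like the sketch below. It reorders the columns of the timechart output, but the result loses the field metadata trellis needs, which is why the untable/xyseries approach is used instead:

| transpose 0 header_field=_time
| sort 0 column
| transpose 0 header_field=column
| rename column as _time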


tscroggins
Influencer

That got me thinking about generating padded strings of arbitrary length. These examples use a hash mark (#) instead of a space for clarity:


| makeresults 
| eval length=100
``` round(0, length) formats 0 with length decimal places; replace's "." pattern is a regex that matches every character ```
| eval pad=substr(replace(tostring(round(0, length)), ".", "#"), 0, length)

| makeresults 
| eval length=100
``` build a multivalue range of length elements, map each to "#", then join ```
| eval pad=mvjoin(mvmap(mvrange(0, length, 1), "#"), "")


I'd love the convenience of an eval function similar to the classic BASIC SPACE$ and STRING$ functions or something more powerful like a regular expression-based string generator.
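
As an aside, depending on your version, the printf eval function (added in Splunk Enterprise 8.0) gets close for fixed widths: a field-width specifier emits space padding, which replace can then convert. This is an untested sketch, and the width must be a literal in the format string:

| makeresults
``` "%100s" pads an empty string to 100 characters with spaces ```
| eval pad=replace(printf("%100s", ""), " ", "#")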
