Splunk Search

How to sort these multi-value fields based on the latest timestamp and status?

kelz
Explorer

Hi Splunkers,

I need help sorting these multi-value fields based on the latest timestamp and status.

Here's my dummy query for this.

| makeresults
| eval hostname = "server101"
| eval id = "123|124"
| eval database_timestamp = "Mar 03, 2022 12:59:46 PM|Feb 23, 2022 1:19:24 PM"
| eval database_status = "Online|Offline (30 days ago)"
| eval server_timestamp = "Feb 22, 2022 1:19:24 PM|Mar 01, 2022 12:59:46 PM"
| eval server_status = "Offline (31 days ago)|Online"
| fields hostname id database_timestamp database_status server_timestamp server_status
| makemv delim="|" database_timestamp
| makemv delim="|" database_status
| makemv delim="|" server_timestamp
| makemv delim="|" server_status
| makemv delim="|" id

Below are the current output and the expected output.

Current Output:

hostname:           server101
database_timestamp: Mar 03, 2022 12:59:46 PM
                    Feb 23, 2022 1:19:24 PM
database_status:    Online
                    Offline (30 days ago)
server_timestamp:   Feb 22, 2022 1:19:24 PM
                    Mar 01, 2022 12:59:46 PM
server_status:      Offline (31 days ago)
                    Online

Expected Output:

hostname:           server101
database_timestamp: Mar 03, 2022 12:59:46 PM
                    Feb 23, 2022 1:19:24 PM
database_status:    Online
                    Offline (30 days ago)
server_timestamp:   Mar 01, 2022 12:59:46 PM
                    Feb 22, 2022 1:19:24 PM
server_status:      Online
                    Offline (31 days ago)

 


ITWhisperer
SplunkTrust

Rather than sorting them after they have been added to multivalue fields, can you sort them beforehand?
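
For example, if the multivalue fields are produced by something like stats list() — an assumption, since the upstream search isn't shown here — sorting the events by their parsed timestamp before the aggregation keeps the lists in the desired order, because list() preserves the order in which values arrive:

index=your_index sourcetype=your_sourcetype ``` hypothetical upstream search ```
| eval event_time = strptime(server_timestamp, "%b %d, %Y %I:%M:%S %p")
| sort 0 -event_time ``` newest first ```
| stats list(server_timestamp) AS server_timestamp list(server_status) AS server_status BY hostname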


yuanliu
SplunkTrust

ITWhisperer expresses exactly how I see the problem.  It is so much cleaner (and easier) to fix the multivalue order in the search that produces the values than to tweak the display after the values have been mixed up.

That said, you can use a combination of mvmap, mvzip, mvsort, and split to get what you want.  I'll present two versions: one specific, one generic.

If you are certain that the database_* fields are already in the correct order, this slightly simpler version, which only reorders the server_* fields, illustrates the thought process:

 

| makeresults
| eval hostname = "server101"
| eval id = "123|124"
| eval database_timestamp = "Mar 03, 2022 12:59:46 PM|Feb 23, 2022 1:19:24 PM"
| eval database_status = "Online|Offline (30 days ago)"
| eval server_timestamp = "Feb 22, 2022 1:19:24 PM|Mar 01, 2022 12:59:46 PM"
| eval server_status = "Offline (31 days ago)|Online"
| fields hostname id database_timestamp database_status server_timestamp server_status
| makemv delim="|" database_timestamp
| makemv delim="|" database_status
| makemv delim="|" server_timestamp
| makemv delim="|" server_status
| makemv delim="|" id
``` below is multivalue manipulation ```
| eval server_lag=mvmap(server_timestamp, now()-strptime(server_timestamp, "%b %d, %Y %I:%M:%S %p")) ``` establish reverse time sequence ```
| eval server_status=mvzip(server_lag, mvzip(server_timestamp, server_status, "|"), "|") ``` zip affected fields with server_lag ```
| eval server_status=mvsort(server_status) ``` sort by reverse time using server_lag ```
| eval server_timestamp=mvmap(server_status, mvindex(split(server_status, "|"), 1)) ``` extract timestamp field from zip ```
| eval server_status=mvmap(server_status, mvindex(split(server_status, "|"), 2)) ``` extract status field from zip ```
| fields - server_lag
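
A caveat not covered in the original answer: mvsort compares values lexicographically, so if the lag values can have different digit counts the string sort may not match the numeric order.  Zero-padding the lag (here with printf, assuming a Splunk version that supports it) keeps the two in agreement; this would replace the first eval above:

| eval server_lag=mvmap(server_timestamp, printf("%015d", floor(now()-strptime(server_timestamp, "%b %d, %Y %I:%M:%S %p")))) ``` zero-pad so the lexicographic mvsort matches numeric order ```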

 

A more general approach is to use foreach to reorder every group of fields that may be out of order.

 

| makeresults
| eval hostname = "server101"
| eval id = "123|124"
| eval database_timestamp = "Mar 03, 2022 12:59:46 PM|Feb 23, 2022 1:19:24 PM"
| eval database_status = "Online|Offline (30 days ago)"
| eval server_timestamp = "Feb 22, 2022 1:19:24 PM|Mar 01, 2022 12:59:46 PM"
| eval server_status = "Offline (31 days ago)|Online"
| fields hostname id database_timestamp database_status server_timestamp server_status
| makemv delim="|" database_timestamp
| makemv delim="|" database_status
| makemv delim="|" server_timestamp
| makemv delim="|" server_status
| makemv delim="|" id

``` below is multivalue manipulation ```
| foreach *_status ``` iterate through all fields that may fall into disorder ```
    [eval <<MATCHSTR>>_lag=mvmap(<<MATCHSTR>>_timestamp, now()-strptime(<<MATCHSTR>>_timestamp, "%b %d, %Y %I:%M:%S %p")) ``` establish reverse time sequence ```
    | eval <<MATCHSTR>>_status=mvzip(<<MATCHSTR>>_lag, mvzip(<<MATCHSTR>>_timestamp, <<MATCHSTR>>_status, "|"), "|") ``` zip affected fields with server_lag ```
    | eval <<MATCHSTR>>_status=mvsort(<<MATCHSTR>>_status) ``` sort by reverse time using server_lag ```
    | eval <<MATCHSTR>>_timestamp=mvmap(<<MATCHSTR>>_status, mvindex(split(<<MATCHSTR>>_status, "|"), 1)) ``` retrieve timestamp field from zip ```
    | eval <<MATCHSTR>>_status=mvmap(<<MATCHSTR>>_status, mvindex(split(<<MATCHSTR>>_status, "|"), 2)) ``` retrieve status field from zip ```
    ]
| fields - *_lag
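
Here <<MATCHSTR>> is the part of each field name matched by the * wildcard in foreach *_status, so for server_status it expands to server (and for database_status to database), which is how the template rebuilds server_lag, server_timestamp, and server_status in one pass.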

 

 

 

With sample data, the output is

hostname:           server101
id:                 123
                    124
database_timestamp: Mar 03, 2022 12:59:46 PM
                    Feb 23, 2022 1:19:24 PM
database_status:    Online
                    Offline (30 days ago)
server_timestamp:   Mar 01, 2022 12:59:46 PM
                    Feb 22, 2022 1:19:24 PM
server_status:      Online
                    Offline (31 days ago)
_time:              2022-03-03 20:36:28

 

As you can see, this "solution" takes quite a bit of contortion.  Back to ITWhisperer's point: it is better to fix the data problem than to manipulate the display.  For example, the manipulations above do not touch the id field at all, yet there must be some relationship between the id values and the rest.  Why is the first id the smaller one?
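
That said, if each id value does line up positionally with one server_* entry — an assumption the thread never confirms — the same zip trick could carry id along.  A sketch based on the specific version above:

| eval server_status=mvzip(server_lag, mvzip(id, mvzip(server_timestamp, server_status, "|"), "|"), "|") ``` zip id in as well: lag|id|timestamp|status ```
| eval server_status=mvsort(server_status)
| eval id=mvmap(server_status, mvindex(split(server_status, "|"), 1)) ``` extract id from zip ```
| eval server_timestamp=mvmap(server_status, mvindex(split(server_status, "|"), 2))
| eval server_status=mvmap(server_status, mvindex(split(server_status, "|"), 3))
| fields - server_lag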
