Splunk Search

pick the last value in an array?

mitag
Contributor

(Apologies in advance since I am not even sure what question to ask and how to ask it. I'll rewrite it once I get a better idea of how to ask it.)

Grouping events via transaction correctly produces multiple results for some fields:

(screenshot of the transaction results omitted)

The problem is that certain standard functions, such as color formatting (e.g. making "failed" cells red) and post-transaction filtering (e.g. | search status!=success), no longer work on that field.

How do I remove the "started" value from the values in the status field? Or perhaps, how do I evaluate a new field such as last_status that is equal to the status value in the last event in the group?

(I've looked at the related questions and Splunk docs; the solutions mostly use mvexpand and similar commands, and I couldn't figure out how to extract single values out of what appears to be an array of them.)

The search:

sourcetype="linux_messages_syslog" uuid="*" 
| transaction uuid
| table _time duration eventcount status

P.S. Please assume transaction is a must and I cannot use stats instead of it.

Thank you!

1 Solution

to4kawa
Ultra Champion
....
| streamstats window=1 last(status) as last_status
| eval status=if(last_status="started",mvindex(status,0,mvcount(status)-2),status)
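The eval above drops the trailing value only when it is "started". A minimal Python sketch of that same logic (the sample lists are illustrative, not from the original data):

```python
# If the last status in the multivalue list is "started", drop it;
# otherwise keep the list unchanged. Mirrors:
#   eval status=if(last_status="started", mvindex(status,0,mvcount(status)-2), status)
def trim_trailing_started(status):
    if status and status[-1] == "started":
        return status[:-1]  # same as mvindex(status, 0, mvcount(status)-2)
    return status

print(trim_trailing_started(["started", "failed", "started"]))  # ['started', 'failed']
print(trim_trailing_started(["started", "success"]))            # ['started', 'success']
```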

Regarding "P.S. Please assume transaction is a must": OK, I see. But consider this anyway:

sourcetype="linux_messages_syslog" uuid="*"
| reverse
| stats min(_time) as _time count as eventcount range(_time) as duration list(status) as status by uuid
| table _time duration eventcount status
| streamstats window=1 last(status) as last_status
| eval status=if(last_status="started",mvindex(status,0,mvcount(status)-2),status)



mitag
Contributor

Thank you! Your stats version is about twice as fast as the "transaction" one:

Transaction:

This search has completed and has returned 96 results by scanning 7,295 events in 0.936 seconds

Stats:

This search has completed and has returned 96 results by scanning 7,295 events in 0.487 seconds

Would you have an idea how to remove "started" from the table altogether?


to4kawa
Ultra Champion
sourcetype="linux_messages_syslog" uuid="*" 
| reverse
| stats min(_time) as _time count as eventcount range(_time) as duration list(status) as status by uuid
| table _time duration eventcount status
| rex field=status mode=sed "s/started//g"

If you do not care about the status order, change list() to values().


mitag
Contributor

The short answer is that mvindex(status, -1) seems to provide the last value in a field (status in this case) that contains an array of values.

To expand:

Looks like this modified search gives me the last value in a field containing an array of them:

sourcetype="linux_messages_syslog" uuid="*" 
| transaction uuid
| eval status_last = mvindex(status, -1)
| table _time duration eventcount status_last
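For intuition, mvindex with a negative index works much like Python's negative list indexing. A small sketch with a made-up status list:

```python
# Hypothetical sample standing in for the multivalue status field:
status = ["started", "failed", "success"]

# mvindex(status, -1) in SPL corresponds to status[-1] in Python:
status_last = status[-1]
print(status_last)  # success
```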

... and I have yet to figure out how to remove "started" from the array altogether. Something like this in Python:

list(filter(lambda a: a != "started", status))
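For reference, that one-liner works as written in Python once the list is defined (the sample data below is made up for illustration):

```python
# Remove every "started" entry from a list of status values:
status = ["started", "failed", "started", "success"]
filtered = list(filter(lambda a: a != "started", status))
print(filtered)  # ['failed', 'success']
```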