Splunk Search

How can I stack the rows vertically while preserving the original columns?

rar0
Loves-to-Learn Lots

I have a search that returns the following table (after transpose):

column       row 1     row 2
search_name  UC-315    UC-231
ID           7zAt/7    5Dfxdf
Time         13:27:17  09:17:09


And I need it to look like this:

column       new_row
search_name  UC-315
ID           7zAt/7
Time         13:27:17
search_name  UC-231
ID           5Dfxdf
Time         09:17:09


This should work independently of the number of rows.
I've tried using mvexpand and streamstats, but without any luck.


rar0
Loves-to-Learn Lots

Thanks for your fast responses.

Some more context: for incident response, we want to generate a text block with all notables listed one below the other, so that a security analyst can write their conclusion under every notable, like this:

* Short-lived group membership
ID:                 F3y4IS
Time:               05/14/2024 11:12:28
Destination:        xyz
Source User:        abc
User / Group:       def
Destination Group:  ghi
--> 

* Usage of Default Account "administrator"
ID:                 L32op/, WTBxMy
Date:               05/17/2024 20:39:04
Destination:        xyz
Source User:        abc
User / Group:       def
Destination Group:  ghi
--> 

* Malware detected ...

For this, I already wrote the following query to get the current notables to analyse:

`notable`
| search urgency IN ("high", "critical") status_label IN ("Unassigned", "New", "In Progress") NOT `suppression`
| lookup notable_xref_lookup event_id OUTPUT xref_id AS ID
| table search_name ID _time `text_block_fields_default`

If I transpose this, I get the first table I posted. The macro `text_block_fields_default` contains interesting fields to include in the report, such as action, app, dest, src, etc. So the solution shouldn't depend on column names.

I already have this query which generates the text block for 1 specific notable:

`notable`
| search event_id="BAAAD325-8391-4075-81A2-AB145A1FA2FB@@notable@@80497c055c45b92d73bd74e700c1b6f9"
| lookup notable_xref_lookup event_id OUTPUT xref_id AS ID
| table search_name ID _time `text_block_fields_default`
| `ctime(_time)`
| rename _time as time
| transpose include_empty=False column_name=text_block
| eval 'row 1'=mvjoin('row 1',",") ``` quote the field name, since it contains a space ```
| eval text_block=if(text_block="search_name","* ",text_block.":                      "), text_block=replace(text_block,"(.{21}).*","\1") ``` pad each label, then truncate it to a fixed width of 21 characters ```
| eval text_block=text_block . 'row 1'
| append [| makeresults | eval text_block="--> " ] ``` trailing marker line for the analyst's conclusion ```
| table text_block

 Thanks in advance for your help!


yuanliu
SplunkTrust

This is an interesting use case.  Here is an idea based on @ITWhisperer's prototype.

 

`notable`
| search urgency IN ("high", "critical") status_label IN ("Unassigned", "New", "In Progress") NOT `suppression`
| lookup notable_xref_lookup event_id OUTPUT xref_id AS ID
| foreach search_name ID _time `text_block_fields_default`
    [eval row = mvappend(row, json_object("name", "<<FIELD>>", "value", '<<FIELD>>'))]
| table row
| mvexpand row
| eval column = json_extract(row, "name"), new_row = json_extract(row, "value")
| fields - row

 

(JSON functions require Splunk 8.1 or higher. The same can be achieved using the traditional string concatenation method; see the sketch after the full emulation below.) Using this mock data

search_name  ID      _time     bar   foo
UC-315       7zAt/7  13:27:17  bar1  foo1
UC-231       5Dfxdf  13:27:17  bar2  foo2

the output will be (bar and foo, of course, stand in for `text_block_fields_default`):

column       new_row
search_name  UC-315
ID           7zAt/7
_time        13:27:17
bar          bar1
foo          foo1
search_name  UC-231
ID           5Dfxdf
_time        13:27:17
bar          bar2
foo          foo2

Here is the full emulation

 

| makeresults format=csv data="search_name,ID,_time,foo,bar
UC-315,7zAt/7,13:27:17,foo1,bar1
UC-231,5Dfxdf,13:27:17,foo2,bar2"
| table search_name ID _time *
``` the above emulates
`notable`
| search urgency IN ("high", "critical") status_label IN ("Unassigned", "New", "In Progress") NOT `suppression`
| lookup notable_xref_lookup event_id OUTPUT xref_id AS ID
```
| foreach search_name ID _time * ``` * represents `text_block_fields_default` ```
    [eval row = mvappend(row, json_object("name", "<<FIELD>>", "value", '<<FIELD>>'))]
| table row
| mvexpand row
| eval column = json_extract(row, "name"), new_row = json_extract(row, "value")
| fields - row
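For deployments older than 8.1, where json_object is not available, here is a minimal sketch of that string-concatenation variant. It assumes a delimiter (@@@ here) that never appears in a field name or value; pick something safer if your data could contain it:

`notable`
| search urgency IN ("high", "critical") status_label IN ("Unassigned", "New", "In Progress") NOT `suppression`
| lookup notable_xref_lookup event_id OUTPUT xref_id AS ID
| foreach search_name ID _time `text_block_fields_default`
    [eval row = mvappend(row, "<<FIELD>>" . "@@@" . '<<FIELD>>')] ``` encode each name/value pair as a delimited string instead of JSON ```
| table row
| mvexpand row
| eval column = mvindex(split(row, "@@@"), 0), new_row = mvindex(split(row, "@@@"), 1)
| fields - row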

 

 


yuanliu
SplunkTrust

As always, the answer depends a lot on data characteristics and the real problem you are trying to solve.  Maybe you can explain why the second layout, which is highly unconventional, is more desirable?  Is it safe to say that search_name, ID, and Time form a triplet that should be treated as a unit?  In that case, wouldn't this form be more human-friendly?

Time      search_name  ID
13:27:17  UC-315       7zAt/7
13:27:17  UC-231       5Dfxdf

(This, of course, is the default time-series, aka Splunky, presentation.)
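For reference, that layout drops straight out of the pre-transpose results; a minimal sketch reusing the base search and the `ctime(_time)` macro from the question:

`notable`
| search urgency IN ("high", "critical") status_label IN ("Unassigned", "New", "In Progress") NOT `suppression`
| lookup notable_xref_lookup event_id OUTPUT xref_id AS ID
| `ctime(_time)`
| table _time search_name ID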


ITWhisperer
SplunkTrust

Assuming you can still depend on column names, go back to the pre-transpose results and add the following:

| eval row=mvrange(0,3) ``` one multivalue entry per target column ```
| mvexpand row ``` fan each result out into 3 rows ```
| eval column=mvindex(split("search_name,ID,Time",","),row)
| eval new_row=case(row=0,search_name,row=1,ID,row=2,Time)
| table column new_row
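A self-contained way to try this out, with makeresults standing in for the pre-transpose results (mock data taken from the question):

| makeresults format=csv data="search_name,ID,Time
UC-315,7zAt/7,13:27:17
UC-231,5Dfxdf,09:17:09"
``` the above emulates the pre-transpose results ```
| eval row=mvrange(0,3)
| mvexpand row
| eval column=mvindex(split("search_name,ID,Time",","),row)
| eval new_row=case(row=0,search_name,row=1,ID,row=2,Time)
| table column new_row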