I have a query that extracts useful info from a storage system report.
rex "quota list --verbose (?<fs>[A-Z0-9_]+) " | rex max_match=1000 "ViVol: (?<vivol>(?!user)[A-Za-z0-9]+)\nUsage\s+:\s+(?<usage>[0-9. A-Za-z]+)\n\s*Limit\s+:\s+(?<limit>[0-9A-Z. ]+)" | table fs, vivol, usage, limit
There is a single line at the start of the report with the filesystem which I extract as the "fs" field. Then there are several volume descriptions containing separate lines for the volume, usage and limit.
This query produces a single-value field for "fs" then three multi-value fields "vivol", "usage" and "limit". e.g.
fs        vivol    usage  limit
FIRST_FS  VOL_ABC  100    300
          VOL_XYZ  320    800
          VOL_123  50     150
When I export this to Excel (via CSV), each multi-value field ends up inside a single cell; I want the values on separate rows. If I use mvexpand I get the unexpected behaviour that it properly expands one field but leaves the others unexpanded. If I expand all three fields in turn, they lose correlation, so I get mixed-up rows such as:
FIRST_FS VOL_123 320 300
How do I turn my three multi-value fields into tuples? I want to keep them together so the first row in "vivol" matches the first rows in "usage" and "limit". Bear in mind there are many "fs" events (about 100 of them).
Here's an example of the storage system report I'm starting with, if that helps.
Content-Transfer-Encoding: 7bit

Scheduled job console-context --evs 1 quota list --verbose FIRST_FS produced the following output:

Type          : Explicit
Target        : ViVol: VOL_ABC
Usage         : 100 GB
Limit         : 500 GB (Hard)
Last modified : 2010-12-19 04:56:50.834383000+00:00

Type          : Explicit
Target        : ViVol: VOL_123
Usage         : 609 GB
Limit         : 3 TB (Hard)
Last modified : 2010-12-21 04:04:23.757073000+00:00
I just had the same issue.
Create a single field containing all the values you eventually want, so you have one multi-value field; use mvexpand to split that into multiple events; then parse the (now single-valued) field again to extract the three separate fields.
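A minimal sketch of that approach using the fields from the question (the combined field name and the final rex pattern here are illustrative, not from the original post):

```
| eval combined=mvzip(mvzip(vivol, usage), limit)
| mvexpand combined
| rex field=combined "(?<vivol>[^,]+),(?<usage>[^,]+),(?<limit>[^,]+)"
| table fs, vivol, usage, limit
```

The single-valued "fs" field survives mvexpand unchanged, so every expanded row keeps its filesystem.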
I ran into the same issue with two multi-value fields and arrived at a different solution: make a copy of one field to preserve its order, mvexpand that field, use mvfind on the copy to recover the row index, use mvindex with that index on each field that was NOT expanded, then drop the copy. It would look something like:

...| eval vivolIndex=vivol
| mvexpand vivol
| eval idx=mvfind(vivolIndex, vivol)
| eval usage=mvindex(usage, idx)
| eval limit=mvindex(limit, idx)
| fields - vivolIndex ...

Note that mvfind's second argument is treated as a regular expression, so this only works cleanly when the values contain no regex metacharacters and no duplicates.
This solution worked better for me as I was using a stats list(x) list(y) and needed to keep the values correlated.
Use mvzip and makemv, then restore the individual fields with mvindex.
First, mvzip the multi-values into a new field:
| eval reading=mvzip(vivol, usage)   // create multi-value field for reading
| eval reading=mvzip(reading, limit) // add the third field
At this point you'll have a multi-value field called reading. Here's an example of a field value (a list of four items):
"VOL_ABC,100,300", "VOL_XYZ,320,800", "VOL_123,50,150", "VOL_FOO,80,120"
Expand the field and restore the values:
| mvexpand reading                // separate the multi-value field into separate events
| makemv reading delim=","        // convert reading back into a multi-value field
| eval vivol=mvindex(reading, 0)  // set vivol to the first value of reading
| eval usage=mvindex(reading, 1)  // set usage to the second value of reading
| eval limit=mvindex(reading, -1) // set limit to the last value of reading
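One caveat worth adding (not part of the original answer): if the values themselves can contain commas, mvzip accepts an optional third argument to choose a different delimiter, which you then match in makemv:

```
| eval reading=mvzip(mvzip(vivol, usage, "|"), limit, "|")
| mvexpand reading
| makemv reading delim="|"
```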
Very helpful, thanks. I ended up with a completed search that did exactly what I wanted using the above stuff.
source="/Znfs200g/Mainframe/splunk/volSpaceReport.txt"
| rex max_match=0 "(?:PRIVATE\s+)(?<vol>\d+)\s+(?<volpct>\d+)"
| eval myzip=mvzip(vol,volpct)
| mvexpand myzip
| makemv myzip delim=","
| eval vol=mvindex(myzip,0)
| eval volpct=mvindex(myzip,1)
| eventstats sum(vol) as volsum
| eval weightedvolpct=(volpct*vol/volsum)
| stats sum(weightedvolpct) as AverageHardDisk_Utilization
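To illustrate the weighted-average step with made-up numbers (not from the original post): suppose two volumes, 100 GB at 50% used and 300 GB at 90% used, so volsum = 400. Then:

```
weightedvolpct(vol=100) = 50 * 100 / 400 = 12.5
weightedvolpct(vol=300) = 90 * 300 / 400 = 67.5
sum(weightedvolpct)     = 80.0
```

The result is the capacity-weighted utilization, which weights the larger volume's percentage more heavily than a plain average of 50 and 90 would.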
Thanks @sk314. To be fair, this question was left unanswered for four years and 35 hours. Some improvements have been made to the docs since this answer, but this example is still better, IMO.