Splunk Search

Has anyone created a Splunk search using IBM z/OS SMF type 70 records and calculated the CPU percentage?

New Member

We are trying to replicate data from an RMF report that we previously imported into Excel for graphing, this time using the SMF type 70 records. Has anyone created a Splunk search using these records to calculate a physical processor busy percentage?
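For what it's worth, the arithmetic behind an RMF-style busy percentage is just "time not waiting, divided by the interval". A minimal sketch below - the parameter names are placeholders, not the actual SMF70 field names, which you'd need to look up in the SMF manual:

```python
# Illustrative only: "interval" and "wait" stand in for whatever SMF 70
# CPU data section fields carry the interval length and CPU wait time.

def cpu_busy_pct(interval_secs: float, wait_secs: float) -> float:
    """RMF-style CPU busy %: the fraction of the interval the
    processor was not in a wait state, expressed as a percentage."""
    return (interval_secs - wait_secs) / interval_secs * 100.0

# A processor that waited 270s of a 900s (15 min) interval was 70% busy.
print(cpu_busy_pct(900.0, 270.0))
```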


New Member


This isn’t an answer as such (and it’s also my first posting, so if I’m out of order and about to be shot to pieces, apologies in advance - I’m a relative newbie to Splunk, but equally I have 33 years of z/OS mainframe experience!), but I’m currently writing some HLASM code which converts key SMF record types/fields to JSON and pushes the data into Splunk via the TCP data input (using the Communications Server API).

I currently handle types 14, 15, 30, 64, 70/72 (WIP), 80 (WIP) and 88, using a macro to simplify the conversion of SMF fields to JSON regardless of data type (including floating point).

Type 70s have similarly vexed me, partly because I’m a CICS sysprog and less familiar with this record, and partly because I’m fairly new to Splunk. The challenge seems to be the way the partition data section maps to the logical processors, using an ‘offset’ and a ‘number of’ logical processors (LPs).

My stock code was outputting fields in repeating sections as multivalue fields (simply iterating through each section and outputting field=value), so each type 70 record is a single event.

To calculate data for each partition you’d then need to iterate from ‘n’ for ‘m’ values of the multivalue LP fields, which I can’t immediately see how to do in SPL (foreach seems to work differently?).

So I modified my code to emit each logical partition section, and each logical processor section, as a separate event, with the notion of then being able to merge/join the events. The problem also requires a way to sum (or perform other arithmetic on) fields in the subset, possibly before joining, and I’ve yet to fathom out how to do that. I might need to resort to doing that logic in my code instead, so that each partition section contains the array (the multivalue logical processor sections) relating to it.
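The "do it in my code" option above could look something like this - a sketch in Python rather than HLASM, with entirely invented field names, just to show the shape of the nested JSON each partition event would carry:

```python
import json

# Hypothetical flattened sections, as they might come out of an SMF 70
# record. Field names here are made up for illustration.
partitions = [
    {"name": "LPAR1", "lp_offset": 0, "lp_count": 2},
    {"name": "LPAR2", "lp_offset": 2, "lp_count": 1},
]
logical_processors = [
    {"lp_num": 0, "dispatch_secs": 310.5},
    {"lp_num": 1, "dispatch_secs": 295.0},
    {"lp_num": 2, "dispatch_secs": 120.25},
]

# Emit one event per partition, embedding its own LP sections so no
# search-time join is needed: the offset/count selection happens here.
events = []
for p in partitions:
    o, n = p["lp_offset"], p["lp_count"]
    events.append(json.dumps(dict(p, lps=logical_processors[o:o + n])))

for e in events:
    print(e)
```

With the data shaped like this, Splunk can extract the embedded `lps` array at index time and the per-partition arithmetic becomes a plain stats/eval over one event.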

For the Splunkers: what we have is an array of ‘partitions’ and an array of ‘logical processors’, where each partition data section (p) contains an ‘offset’ (o) and a ‘number of’ (n) logical processors assigned to that partition.

Assume the logical processors have a numeric field with values 0 through m.

We need to join each logical partition section with the corresponding logical processor sections, ‘o’ for ‘n’, performing evaluations (e.g. sum) on the logical processor sections/events prior to joining with the partition section/events.

Possibly this is too abstract.

For my part, I have the choice as to whether to output each ‘section’ (both ‘partition’ and ‘logical processor’) as a multivalue field in the same event, or as entirely separate events.

If the latter, they can be joined by a common timestamp representing the time the source SMF record was written (typically at 15-, 30- or 60-minute intervals).
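The timestamp-keyed re-association could be sketched as follows - event shapes and times are invented, and a real search would do this with stats/transaction in SPL rather than outside Splunk:

```python
from collections import defaultdict

# Separately-emitted partition and LP events sharing the SMF write time.
events = [
    {"time": "2024-01-01T00:15:00", "type": "partition", "name": "LPAR1"},
    {"time": "2024-01-01T00:15:00", "type": "lp", "lp_num": 0},
    {"time": "2024-01-01T00:15:00", "type": "lp", "lp_num": 1},
    {"time": "2024-01-01T00:30:00", "type": "partition", "name": "LPAR1"},
]

# Group every event under its shared timestamp, keeping the two
# section kinds apart so they can be merged/aggregated per interval.
by_time = defaultdict(lambda: {"partitions": [], "lps": []})
for ev in events:
    kind = "partitions" if ev["type"] == "partition" else "lps"
    by_time[ev["time"]][kind].append(ev)

for t, group in sorted(by_time.items()):
    print(t, len(group["partitions"]), "partition(s),",
          len(group["lps"]), "LP(s)")
```

One caveat with relying on the timestamp alone: all partitions written in the same interval share it, so the offset/count fields are still needed to pair each partition with its own LPs.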

Some advice on how best to organise the data, based on my description of the source, would at least give me some useful pointers.

Thanks, Mark


New Member

Hi Mark,
I actually need to get SMF 80 records indexed into Splunk, so I'm trying to understand how to do it.
Could you please share your macro for converting SMF 80, or let me know where I can look for the macro or more information?
Many thanks and best regards,



If you can give a sample of the data in an RMF report, the community can probably help a lot. But probably only a few of us have seen one. (Obfuscate the data as needed, of course.) I looked at the IBM Knowledge Center documentation, but without examples it was just confusing.