Many thanks for the info. Interesting tool. We have different needs to cover: job logs (job/step termination within batch jobs), syslog, RACF, data from user exits, data from any file, etc., to be sent to any kind of syslog server. In many cases using sort exits is the smartest way for us, as a user can filter the data as needed.
A clever and free way is to use a SORT exit that calls EZASMI to send anything to any syslog server.
We use it for console/syslog messages and for RACF IRRADU00/IRRDBU00 output. Sort has the advantage that you can filter on anything.
Why pay big money to the big corporations for simple programs?
It is easy to send job step information to Splunk etc.:
<1> JCT scanned : MOBI USERID: IBMUSER READER: 2018-07-10 07:23:56:22 JOBSTART: 07:25:17:83 SMFID: APEX
PAL$TCP4-10 JOBNAME:MAXP001C JOB04459 STEP:COMPLINK ASM PGM:ASMA90 CODE: 0004
PAL$TCP4-10 JOBNAME:MAXP001C JOB04459 STEP:COMPLINK LKED PGM:IEWL CODE: 0000
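The exit's EZASMI call is essentially just opening a UDP socket and writing a datagram. A rough Python sketch of the same idea (the host/port and the local0.info priority are placeholders, not anything from the actual exit):

```python
import socket

def format_syslog(message, facility=16, severity=6):
    """Build a BSD-syslog (RFC 3164 style) packet.

    facility 16 / severity 6 is local0.info; PRI = facility*8 + severity.
    """
    pri = facility * 8 + severity
    return ("<%d>%s" % (pri, message)).encode("ascii")

def send_syslog(message, host, port=514):
    """Send one message to a syslog server over UDP -- the same thing
    the SORT exit does via the EZASMI assembler macro."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(format_syslog(message), (host, port))
```

Each of the job-step lines above would go out as one datagram, e.g. send_syslog('JOBNAME:MAXP001C STEP:COMPLINK CODE: 0004', 'your.syslog.host').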
IBM Common Data Provider for z Systems (CDPz) is the best option for sending SMF records to Splunk.
CDPz can send a wide variety of data including 140 data sources and 100+ SMF record types. More specifically, CDPz can support the following:
• SMF records
• SYSLOG (IBM z/OS System Log and USS SyslogD)
• Application logs (IBM CICS Transaction Server logs and IBM WebSphere Application Server logs)
CDPz also has advanced filtering capabilities, including RegEx and time filtering, that can be set up using the built-in web configuration tool.
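The filtering itself is configured in CDPz's web tool, but the effect is the same as applying a regular expression to each message before forwarding it. A rough Python equivalent of that kind of filter, using the job-step lines from earlier in the thread (the pattern, which keeps only nonzero return codes, is just an example):

```python
import re

def regex_filter(lines, pattern):
    """Forward only the lines matching the pattern -- the same idea
    as a CDPz RegEx filter, expressed in Python."""
    rx = re.compile(pattern)
    return [ln for ln in lines if rx.search(ln)]
```

For example, regex_filter(lines, r"CODE:\s*(?!0000)\d{4}") would drop steps that ended with return code 0000 and keep everything else.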
More information on IBM Common Data Provider for z Systems can be found directly on Splunkbase.
The following Splunk Blog outlines how Splunk and IBM are partnering to help customers integrate IBM Z (Mainframe) Data and Insights into Splunk software:
What data are you trying to get into Splunk? Do you mean FTP from a mainframe?
I believe you would simply FTP the file to the Splunk indexer, or even just to a machine with a universal forwarder on it. Then you can input the data from the file into Splunk through the manual interface in the browser, or you can have Splunk monitor the file so that new data is indexed whenever the file changes.
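If you go the universal-forwarder route, a minimal monitor stanza in inputs.conf might look like this (the path, sourcetype, and index names are placeholders for whatever your setup uses):

```ini
# inputs.conf on the forwarder -- watch the FTP'd file for changes
[monitor:///data/mainframe/smf_dump.txt]
sourcetype = mainframe:smf
index = mainframe
disabled = false
```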
More documentation on data inputs from files here:
If you need real time log data from the mainframe, as mentioned above, Ironstream is a good solution that will take care of data mapping and transformation and get that data over in real-time to Splunk:
Here I am using IBM zSecure, which is already installed on the mainframe, and exporting via FTP to Splunk. The resulting .txt file is then parsed with regex.
But Ironstream can do this more easily.
Ironstream from Syncsort can do all of this work for you. It handles all of the issues related to z/OS SMF records: it deals with the compression, the triplets, and the binary data, and converts the data from EBCDIC to ASCII. It does this very efficiently, even offloading much of the work to a zIIP engine in order to keep the MSU cost of this work to an absolute minimum. This is all done in real time, giving you the best possible data latency without impacting the existing workload on your system.
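For a sense of what that conversion involves: SMF records are RDW-prefixed, variable-length, and EBCDIC-encoded. The mechanical parts are simple enough to sketch in Python (code page 037 is assumed here; the triplet-based section layout of each record type is the genuinely hard part and is not shown):

```python
import struct

def split_vb_records(buf):
    """Split a byte stream of variable-length (RDW-prefixed) records.

    Each record starts with a 4-byte Record Descriptor Word: a
    big-endian halfword length (which includes the RDW itself),
    followed by two zero bytes.
    """
    out = []
    i = 0
    while i < len(buf):
        (length,) = struct.unpack_from(">H", buf, i)
        out.append(buf[i + 4 : i + length])
        i += length
    return out

# EBCDIC (code page 037) to ASCII is one line in Python:
text = b"\xc8\x85\x93\x93\x96".decode("cp037")  # the EBCDIC bytes for "Hello"
```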
If you have other data sources like SYSLOG, Log4j or flat files, Ironstream can handle those as well.
Yes, it can. But the real question is: can you get your mainframe to dump the SMF records in a readable format (i.e. not EBCDIC) and transport them to a place where the file can be indexed by Splunk?
As you know, there is no forwarder for the mainframe platform (read z/OS), and 'syslog' is not necessarily part of the mainframe toolkit.
I have seen this done with the help of some JCL code to dump the relevant SMF records as XML (yes, I know it's huge) and transport it via (S)FTP on a regular basis to a place where Splunk reads it as a file. I believe the conversion from EBCDIC to ASCII was performed by the mainframe FTP utility (in a fairly automated manner).
Google your way to find sample code for dumping SMF records (offered on some IBM web sites), show it to your mainframe people and ask them to adapt it to their environment.
Best of luck,
No. There is no pretrained sourcetype, if that is what you mean. But getting the output as XML will vastly simplify the parsing of the (variable length) SMF records, since the XML tags are created by the mainframe, and are thus done so correctly.
Then you will have to work out how the various XML-tagged fields map to 'Failed Login' or 'Access Granted' etc.
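That mapping step can be sketched in Python with the standard XML parser. The tag names below are hypothetical (the real ones depend on how the dump job was set up), and the event-code translation is just one illustrative rule:

```python
import xml.etree.ElementTree as ET

# Hypothetical shape of one dumped SMF type-80 (RACF) record.
sample = """<record>
  <recordType>80</recordType>
  <eventCode>1</eventCode>
  <userId>IBMUSER</userId>
</record>"""

def map_event(xml_text):
    """Map XML-tagged SMF fields to a flat dict Splunk can index."""
    root = ET.fromstring(xml_text)
    fields = {child.tag: child.text for child in root}
    # Example rule: treat event code 1 on a type-80 record as a
    # logon event and give it a searchable label.
    if fields.get("recordType") == "80" and fields.get("eventCode") == "1":
        fields["action"] = "Login"
    return fields
```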
Thanks Kristian. There's no problem getting the data to the server. But, does Splunk already know how SMF records are formatted or does something have to be done manually to index the data? I did not see SMF listed as a data source type.