
Is JMS Messaging Modular Input supported in a search head clustering environment?

millern4
Communicator

Hello,

We have some internal customers who are interested in bringing in JMS data that contains XML within the message, which is how I came across this input.

My question is:

  1. Is this TA supported in a SHC environment?
  2. Or is the preferred method to install this on a heavy forwarder, do the parsing there, and then bring the data into our index cluster that way?

Also, we were looking at a data sample and trying to use | xmlkv, but we are not seeing all of the fields extracted. Does your TA rely on autoKV, and are there any preferred methods for ensuring the XML gets parsed correctly?

1 Solution

Damien_Dallimor
Ultra Champion

The JMS Modular Input should be installed across 1 or more Forwarders (Heavy or Universal).
In this deployment scenario you would edit the inputs.conf file directly to set up your JMS stanzas (when you use a search head, there is a setup UI via Splunk Web that does this for you).

How much you scale horizontally will depend upon how much throughput (messages and data volume) you are trying to accommodate.

If you post an example of the XML you are receiving from your message queues, I can then advise you on the best approach for handling this data (whether you use props.conf/transforms.conf, or apply a custom message handler in the JMS Mod Input to preprocess the received XML before indexing it in Splunk).
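
For illustration, an inputs.conf stanza for a queue might look roughly like the sketch below. This is an assumption-laden sketch, not the TA's documented syntax: the stanza path and every parameter name here are illustrative placeholders, so verify them against the README that ships with the JMS Modular Input.

    # Hypothetical sketch - verify the stanza format and parameter names
    # against the JMS Modular Input's own README before using.
    [jms://queue/mytestqueue]
    jndi_initialcontext_factory = weblogic.jndi.WLInitialContextFactory
    jndi_provider_url = t3://somehost:7001
    jms_connection_factory_name = jms/myConnectionFactory
    index = main
    sourcetype = jms_wmb_events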


Damien_Dallimor
Ultra Champion

So to accomplish your XML kv extraction at index time, you may have to ensure that the JMS Mod Input outputs only the raw XML you receive off the message queue, and not all the other messaging metadata such as JMS headers etc.

You can plug in your own custom message handler that does this.

Note that you will need to write some Java code and compile it. If you get stuck, email me: ddallimore@splunk.com

1) Write a custom handler to output only the XML message; a code sketch follows after this list.
2) Compile this, add the class file to a jar, and place the jar in SPLUNK_HOME/etc/apps/jms_ta/bin/lib.
3) Apply this handler via your JMS config.

[Screenshot: the JMS input configuration showing the custom message handler being applied]

4) Set up your props.conf to apply XML kv extraction to your sourcetype: KV_MODE = xml.
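
For step 1, here is a minimal sketch of the handler logic. The class name is made up, and a real handler must extend the abstract message handler base class that ships with the JMS Modular Input (check the TA's docs for the exact base class and method signature); this sketch only illustrates the core idea of returning the raw XML body and discarding the JMS headers.

    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.TextMessage;

    // Illustrative sketch only: a real handler extends the abstract
    // message handler class shipped with the JMS Modular Input.
    public class XmlBodyOnlyHandler {

        // Return just the raw XML body so Splunk indexes clean XML,
        // with no JMS headers or other messaging metadata prepended.
        public String extractBody(Message message) throws JMSException {
            if (message instanceof TextMessage) {
                return ((TextMessage) message).getText();
            }
            // Fallback for non-text message types; adapt as needed.
            return String.valueOf(message);
        }
    }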
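
And for step 4, the props.conf entry would look something like this (the sourcetype name is a placeholder; use whatever sourcetype your JMS stanza assigns). Note that KV_MODE = xml is applied at search time, so users get the XML fields automatically without piping to xmlkv:

    # props.conf - the sourcetype name is a placeholder
    [jms_wmb_events]
    KV_MODE = xml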

I just realized I also answered this question here.


millern4
Communicator

Thanks Damien, these are both great options that we can discuss with the customer when we meet with them. I appreciate your responses and willingness to help.


millern4
Communicator

Thanks for the response, here's a sample of the data we are working with from the customer.

Fri Jun 26 08:44:49 EDT 2015 name="QUEUE_msg_received" event_id="ID:414d5120494239514d4752202020202054cb7a55210925a6" msg_dest="SplunkEvents" msg_header_timestamp="1435322689140" msg_header_correlation_id="ID:414d5120494239514d47522020202020acb071552014b4c3" msg_header_delivery_mode="1" msg_header_expiration="0" msg_header_priority="0" msg_header_redelivered="false" msg_header_type="null" msg_body="
<wmb:event xmlns:wmb="http://www.ibm.com/xmlns/prod/websphere/messagebroker/6.1.0/monitoring/event"><wmb:eventPointData><wmb:eventData wmb:productVersion="9002" wmb:eventSchemaVersion="6.1.0.3" wmb:eventSourceAddress="TRANSFORM.OUT.transaction.End"><wmb:eventIdentity wmb:eventName="Message sent to the destination"/><wmb:eventSequence wmb:creationTime="2015-06-26T12:44:49.134998Z" wmb:counter="6"/><wmb:eventCorrelation wmb:localTransactionId="21e6b8df-2eeb-44d2-b9d9-28be22f01693-4" wmb:parentTransactionId="" wmb:globalTransactionId=""/></wmb:eventData><wmb:messageFlowData><wmb:broker wmb:name="IB9NODE" wmb:UUID="d9ce2c80-3b77-4843-931a-5015bdb3a8fc"/><wmb:executionGroup wmb:name="default" wmb:UUID="b6268764-4701-0000-0080-fad60bfb142b"/><wmb:messageFlow wmb:uniqueFlowName="IB9NODE.default.HL7DFDLOutput" wmb:name="HL7DFDLOutput" wmb:UUID="f4d8222b-4e01-0000-0080-fa8d2838f488" wmb:threadId="12648"/><wmb:node wmb:nodeLabel="TRANSFORM.OUT" wmb:nodeType="ComIbmMQInputNode" wmb:detail="TRANSFORM.OUT"/></wmb:messageFlowData></wmb:eventPointData><wmb:applicationData xmlns=""><wmb:simpleContent wmb:name="messageID" wmb:value="2013050906561700059000002|ADT|A08|1435322689088|MEDIPAC|1" wmb:dataType="string"/></wmb:applicationData></wmb:event>"

This is the MQ header, with the XML event as the message body.

Thank you in advance for any advice you can offer on how to properly parse and work with this data.


Damien_Dallimor
Ultra Champion

The JMS Mod Input knows nothing about data formats. It can receive anything: JSON, XML, plain text, Chapter 1 of Moby Dick, it doesn't care. You can write custom handlers for the JMS Mod Input to preprocess raw received data and customise what is written to Splunk if you need to. But that is a tad more advanced.

What exactly are you trying to extract from the XML? Are you just trying to do key/value extraction based on the XML entities and attributes?

Also, perhaps post your current search command so folks here can troubleshoot it.


millern4
Communicator

Honestly, we aren't trying to do much at this point in time other than just key/value pair extraction based on the XML entities and attributes.

So for example:

index = data_ingested | xmlkv

where I would expect to see something like wmb:localTransactionId="21e6b8df-2eeb-44d2-b9d9-28be22f01693-4"

And ultimately we'd like these to show up as fields at index time, rather than having our users pipe their search results to | xmlkv every time.

So I would think we'd want the TA to handle the key/value extraction at index time, if that's a possibility?


alacercogitatus
SplunkTrust

TAs that consume data like this don't lend themselves to running in a SHC. If you do, you may end up with n copies of the data, where n is the number of search heads in your cluster. So, yes, a heavy forwarder that performs "single search head" data collection would be the best approach.
