All Apps and Add-ons

Why am I getting errors with JMS Messaging Modular Input after adding 36 inputs?

jedatt01
Builder

I am using the JMS Modular Input with WebSphere MQ and have successfully been able to monitor queues and pull the data into Splunk. However, my WebSphere admin has told me that the JMS app is not pulling messages out of the queues fast enough, and the queues are backing up. The WebSphere admin suggested I add multiple inputs for each of the queues, and I ended up with a total of 36 JMS inputs. After adding the inputs and restarting Splunk, the JMS input is no longer working, and I can see that the corresponding Java process fails. splunkd.log shows the following errors. Please help!

All events from host = LOUWEBWPL20S01, source = F:\Splunk\var\log\splunk\splunkd.log, sourcetype = splunkd:

08-19-2014 12:00:14.568 -0400 INFO  ExecProcessor - New scheduled exec process: python F:\Splunk\etc\apps\jms_ta\bin\jms.py
08-19-2014 12:00:15.582 -0400 ERROR ExecProcessor - message from "python F:\Splunk\etc\apps\jms_ta\bin\jms.py" Traceback (most recent call last):
  File "F:\Splunk\etc\apps\jms_ta\bin\jms.py", line 129, in <module>
    do_run()
  File "F:\Splunk\etc\apps\jms_ta\bin\jms.py", line 48, in do_run
    run_java()
  File "F:\Splunk\etc\apps\jms_ta\bin\jms.py", line 99, in run_java
    process = Popen(java_args)
  File "F:\Splunk\Python-2.7\Lib\subprocess.py", line 711, in __init__
    errread, errwrite)
  File "F:\Splunk\Python-2.7\Lib\subprocess.py", line 948, in _execute_child
    startupinfo)
WindowsError: [Error 87] The parameter is incorrect
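As an aside on the traceback itself: `WindowsError: [Error 87]` raised from `Popen` on Windows is a well-known symptom of exceeding the `CreateProcess` command-line length limit (32,767 characters). Whether that is what happened here depends on how jms.py builds `java_args`, which isn't shown, so the sketch below is hypothetical: the flag names are invented, and it only demonstrates how the quoted command-line length grows as more stanzas are packed into one input.

```python
# Hypothetical sketch: suppose each extra JMS stanza adds arguments to
# the java command line that jms.py passes to Popen(java_args).  On
# Windows, CreateProcess rejects command lines longer than 32,767
# characters with "[Error 87] The parameter is incorrect".
import subprocess

WINDOWS_CMDLINE_LIMIT = 32767  # documented CreateProcess limit

def build_java_args(num_stanzas):
    # Stand-in for the argument list jms.py assembles; these flag
    # names are assumptions, not the actual jms.py internals.
    args = ["java", "-classpath", "F:\\Splunk\\etc\\apps\\jms_ta\\bin\\lib\\*"]
    for i in range(num_stanzas):
        args.append("-Dstanza%d=jms://queue/WAS.QUEUE.%02d" % (i, i))
    return args

def cmdline_length(args):
    # list2cmdline mirrors how Popen quotes arguments on Windows.
    return len(subprocess.list2cmdline(args))

print(cmdline_length(build_java_args(1)))
print(cmdline_length(build_java_args(36)))
```

If the assembled line ever exceeds the limit, `CreateProcess` fails with error 87 before the JVM even starts, which matches the Java process never appearing.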
1 Solution

Damien_Dallimor
Ultra Champion

In part, your WAS admin is correct, but you have to think horizontally rather than vertically.

Adding more stanzas to a single JMS Modular Input instance is not the way to achieve scalability.

You can only achieve so much scale by stacking compute resources vertically (36 stanzas in 1 single JVM).

Aside from the JVM instance that the JMS stanzas execute in, there are also other bottlenecks to be cognizant of that affect scale, such as the OS stdin/stdout buffer and the Splunk indexing pipeline.

So, in order to scale, you have to think "horizontally" and leverage Splunk architectural approaches to accomplish the scale you require.

By this I mean deploying multiple JMS Mod Inputs across multiple Universal Forwarders that collect the messages from the WebSphere queue(s) and forward the data into a Splunk indexer cluster.

So, for example, with the deployment below you should in theory double your throughput:

Splunk UF instance 1: JMS Mod Input installed, 1 GB heap size, pointing at WAS MQ queue "A", forwarding into the indexer cluster

Splunk UF instance 2: JMS Mod Input installed, 1 GB heap size, also pointing at WAS MQ queue "A", forwarding into the indexer cluster
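Concretely, each of the two forwarders would carry one copy of the same input in its local inputs.conf, so both JVMs drain queue "A" in parallel. The stanza below is only an illustration of the shape of such a config; the parameter names and values are assumptions, so check the JMS Modular Input's own documentation for the exact settings your WebSphere MQ connection needs.

```ini
# Deployed identically to UF instance 1 and UF instance 2; both drain
# WAS MQ queue "A" and forward to the indexer cluster.  Parameter
# names below are illustrative, not verbatim from the app.
[jms://queue/WAS.QUEUE.A]
jms_connection_url = <provider URL for your WebSphere MQ queue manager>
durable = 0
index = main
sourcetype = jms
disabled = 0
```

The JVM heap size for each instance (the 1 GB mentioned above) is configured per forwarder, so the two JVMs scale independently of each other.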



jedatt01
Builder

Thanks Damien! This is exactly what I was looking for.
