JMS Messaging Modular Input: "OutOfMemoryError: Java heap space" error after larger messages are read from queue

bdahlb
Explorer

We receive an "OutOfMemoryError: Java heap space" error after larger messages are read from the queue. After this happens, the affected queue is no longer read into Splunk. We have been able to reproduce this a few times by dropping a ~13MB message into WebSphere MQ.

Debug logs follow:

11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at com.splunk.modinput.jms.JMSModularInput$MessageReceiver.run(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at com.splunk.modinput.jms.JMSModularInput$MessageReceiver.streamMessageEvent(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at com.splunk.modinput.ModularInput.marshallObjectToXML(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at java.lang.StringBuilder.toString(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at java.lang.String.<init>(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  at java.util.Arrays.copyOfRange(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" Exception in thread "Thread-3" java.lang.OutOfMemoryError: Java heap space
11-18-2015 16:33:05.792 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py""  INFO Streaming message to Splunk for indexing

We have 3 JMS inputs running on this instance; the other 2 continue to work after the first one runs into this error. We have tried increasing the heap settings in the java_args list in jms.py to "-Xms256m","-Xmx256m", but this has not helped resolve the issue. Any help with this would be appreciated.


Damien_Dallimor
Ultra Champion

Have you tried larger than 256?

bdahlb
Explorer

We had already upped the heap settings to 256MB, but apparently even that was too low for what we were doing; increasing them to 512MB resolved our issue.
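
For anyone hitting the same error, the change is just to the JVM heap flags in the java_args list in jms_ta/bin/jms.py. A rough sketch of that list is below; the exact contents differ between versions of the modular input, so treat the java executable entry and the classpath as placeholders, and only the -Xms/-Xmx values are what we actually edited:

# Rough sketch of the JVM launch arguments in jms_ta/bin/jms.py.
# The java executable entry and classpath are placeholders; versions differ.
# Only the heap flags below are the values we changed.
java_args = [
    "java",                                      # placeholder path to the java executable
    "-Xms512m",                                  # initial heap size (256m was still too small for ~13MB messages)
    "-Xmx512m",                                  # maximum heap size for the modular input JVM
    "-classpath", "lib/*",                       # placeholder classpath for the bundled jars
    "com.splunk.modinput.jms.JMSModularInput",   # main class, as seen in the stack trace above
]

The new heap settings only take effect once the input's JVM is relaunched, e.g. after restarting Splunk or disabling and re-enabling the input.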
