We receive a java.lang.OutOfMemoryError: Java heap space error after larger messages are read from the queue. After that, the affected queue is no longer read into Splunk. We have been able to reproduce this several times by dropping a ~13 MB message into WebSphere MQ.
Debug logs follow:
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" at com.splunk.modinput.jms.JMSModularInput$MessageReceiver.run(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" at com.splunk.modinput.jms.JMSModularInput$MessageReceiver.streamMessageEvent(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" at com.splunk.modinput.ModularInput.marshallObjectToXML(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" at java.lang.StringBuilder.toString(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" at java.lang.String.<init>(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" at java.util.Arrays.copyOfRange(Unknown Source)
11-18-2015 16:33:06.151 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" Exception in thread "Thread-3" java.lang.OutOfMemoryError: Java heap space
11-18-2015 16:33:05.792 -0600 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" INFO Streaming message to Splunk for indexing
We have three JMS inputs running on this instance; the other two continue to work after the first one hits the error. We have already tried raising the heap via the java_args stanza in jms.py, adding "-Xms256m","-Xmx256m", but this has not resolved the issue. Any help with this would be appreciated.
Answering our own question: we had already raised the heap in that java_args line to 256 MB, but apparently even that was too low for messages of this size; increasing it to 512 MB resolved our issue.
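For anyone hitting the same thing, a sketch of the change in the java_args list in jms.py (under $SPLUNK_HOME/etc/apps/jms_ta/bin). The exact contents of the list, the classpath, and the main-class position vary by TA version, so the surrounding entries below are placeholders; the fix is just swapping the -Xms/-Xmx values in whatever list you already have:

```python
# Sketch only — edit the existing java_args in your jms.py rather than
# replacing it; "..." stands for your TA's actual classpath entries.
java_args = [
    "java",
    "-classpath", "...",   # keep your existing classpath as-is
    "-Xms512m",            # initial heap size: 512 MB (was 256m)
    "-Xmx512m",            # maximum heap size: 512 MB (256m OOM'd on ~13 MB messages)
    "com.splunk.modinput.jms.JMSModularInput",
]
```

After saving the change, restart Splunk (or disable/re-enable the input) so the modular input relaunches the JVM with the new heap settings.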