
How to Increase Message Ingestion Rate Using JMS Modular Input

Path Finder

Hello,

We're running JMS Modular Input (jmsta) v1.3.7 on two Heavy Forwarders on Linux VMs, with jmsta running on each and no customization applied. The ingestion rate from our two queues is 20-30 messages per second per queue, whether we run one jmsta instance or two against the same queues. Running two jmsta instances against the same queues did drain all the messages in half the time, but the WatchQ monitor showed the same per-instance ingestion rate.

We'd like to increase the ingestion rate as much as possible. Is there any parameter in jmsta, the Heavy Forwarder, the OS, the indexer, or the pipeline that would increase the ingestion rate?

We tried changing the following parameters, but the ingestion rate did not change:

  1. set the maxKBps to 0
  2. set the maxQueueSize to 256MB
  3. enabled connection pooling in the .bindings file
  4. changed the batch message size from the default in the .bindings file
  5. increased the JMS Messaging Modular Input JVM heap to 256MB
  6. created more than one input definition with different names but for the same queue
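For reference, the first two settings above normally live in limits.conf and outputs.conf on the Heavy Forwarder. A sketch of what was tried (paths and stanza names assume a default Splunk install; adjust for your deployment):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
# Setting 1: remove the forwarder thruput cap (0 = unlimited)
[thruput]
maxKBps = 0

# $SPLUNK_HOME/etc/system/local/outputs.conf
# Setting 2: enlarge the forwarder's output queue
[tcpout]
maxQueueSize = 256MB
```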

Any suggestions?

Thank you.


Re: How to Increase Message Ingestion Rate Using JMS Modular Input

Ultra Champion

To scale out throughput when polling messages from queues (not topics), the recommended approach is to scale horizontally: deploy (n) JMS Modular Inputs across (n) Splunk Forwarders (Heavy or Universal) and forward the data into an Indexer cluster.
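As a sketch of that layout, each forwarder carries its own jmsta stanza pointing at the same queue (the queue name here is a placeholder, and only universal inputs.conf parameters are shown; check your jmsta version's inputs.conf.spec for the input-specific connection parameters):

```ini
# Heavy Forwarder 1 — $SPLUNK_HOME/etc/apps/jms_ta/local/inputs.conf
[jms://queue/MY.QUEUE]
index = jms
sourcetype = jms
disabled = 0

# Heavy Forwarder 2 — the same stanza, deployed independently.
# Each forwarder runs its own modular input JVM, so aggregate
# throughput scales with the number of forwarders.
[jms://queue/MY.QUEUE]
index = jms
sourcetype = jms
disabled = 0
```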

Check out this presentation from slide 20 onward:

http://www.slideshare.net/damiendallimore/splunk-conf-2014-getting-the-message

Adding more stanzas within a single JMS Modular Input instance will quickly hit limits, because each stanza is just a thread in the same JVM (this addresses your points 5 and 6 above). That is why I recommend multiple JMS Modular Inputs across multiple forwarders.
Furthermore, a single JMS Modular Input instance will likely hit a bottleneck in the STDOUT/STDIN OS buffer between the Modular Input process (writing to STDOUT) and the Splunk Forwarder process (reading from STDIN), which can lead to blocking in the JMS Modular Input's queue-poller logic.
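To illustrate that STDOUT/STDIN bottleneck (this is not the jmsta code, just a generic demonstration of the mechanism): a modular input writes events to STDOUT and the forwarder reads them from the other end of an OS pipe. The pipe buffer is finite, so once the reader falls behind, the writer stalls. A minimal sketch that measures the pipe capacity on the local OS:

```python
import os

# A modular input writes events to STDOUT; the forwarder reads the other
# end of the pipe. The kernel pipe buffer is finite, so a slow reader
# eventually makes the writer block. Here we fill a pipe ourselves.
r, w = os.pipe()
os.set_blocking(w, False)  # non-blocking, so we observe the limit instead of hanging

written = 0
chunk = b"x" * 4096
try:
    while True:
        written += os.write(w, chunk)
except BlockingIOError:
    # The pipe buffer is full; a blocking writer (like a modular input
    # whose forwarder isn't keeping up) would stall at this point.
    pass

print(f"pipe buffer capacity on this OS: ~{written} bytes")
os.close(r)
os.close(w)
```

On a typical Linux box this reports roughly 64 KB, which is all the slack a single modular input process gets before its queue poller blocks, regardless of JVM heap size.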

So, scale out horizontally 🙂
