Hello,
We're running the JMS Modular Input (jms_ta) v1.3.7 on two Heavy Forwarders on Linux VMs, with jms_ta running on each and no customization applied. We noticed the message ingestion rate from the two queues is 20-30 messages per second per queue, whether we run one jms_ta instance or two against the same queues. Although draining all the messages took about half as long with two jms_ta instances as with one, the WatchQ monitor showed the same per-queue ingestion rate.
We'd like to increase the ingestion rate as much as possible. Is there any parameter in jms_ta, the Heavy Forwarder, the OS, the indexer, or the pipeline that would increase it?
We tried changing the following parameters, but there was no change in the ingestion rate.
Any suggestions?
Thank you.
For scaling out throughput when polling messages from queues (not topics), the recommended approach is to scale horizontally: deploy (n) JMS Modular Inputs across (n) Splunk Forwarders (Heavy or Universal) and forward the data into an Indexer cluster.
Check out this preso from slide 20:
http://www.slideshare.net/damiendallimore/splunk-conf-2014-getting-the-message
Adding more stanzas within a single JMS Modular Input instance will soon hit limits, because each stanza is just a thread in the same JVM (this addresses your points 5 and 6 above). That is why I recommend multiple JMS Mod Inputs across multiple forwarders.
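As a rough sketch of what "one input per forwarder" looks like (the queue and stanza names below are illustrative, not taken from your environment — carry over your existing connection parameters), you'd split the queue stanzas across the two Heavy Forwarders' inputs.conf files instead of stacking both in one instance:

```ini
# Forwarder A — $SPLUNK_HOME/etc/apps/jms_ta/local/inputs.conf
# Polls only the first queue
[jms://queue/:queue1]
# ... your existing JNDI/connection parameters ...

# Forwarder B — same file on the second Heavy Forwarder
# Polls only the second queue
[jms://queue/:queue2]
# ... your existing JNDI/connection parameters ...
```

Each forwarder then runs its own JVM and its own STDOUT pipe to Splunk, so the two pollers no longer share a single process's limits.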
Furthermore, a single JMS Modular Input instance will likely hit a bottleneck in the STDOUT/STDIN OS buffer between the modular input process (writing to STDOUT) and the Splunk Forwarder instance (reading from STDIN), which may lead to blocking in the JMS Mod Input's queue-poller logic.
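To see why that pipe matters, here's a minimal, self-contained illustration (generic OS behavior, not jms_ta code): an OS pipe — like the channel between the mod input's STDOUT and the forwarder's STDIN — has a small, fixed buffer, and once it fills, a blocking writer stalls until the reader catches up.

```python
import os

# Create a pipe and make the write end non-blocking, so we can observe
# the buffer limit instead of hanging the way a blocking writer would.
r, w = os.pipe()
os.set_blocking(w, False)

written = 0
chunk = b"x" * 4096
try:
    while True:
        written += os.write(w, chunk)  # returns bytes actually written
except BlockingIOError:
    # Buffer is full. A blocking writer (the JMS poller emitting events
    # to STDOUT) would stall right here until the reader drained the pipe.
    pass

print(f"pipe buffer filled after ~{written} bytes")
os.close(r)
os.close(w)
```

On a typical Linux box this fills after only tens of kilobytes, which is why one fast consumer per pipe (i.e. per forwarder) beats piling more threads behind a single pipe.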
So, scale out horizontally 🙂