Would adding more Data Inputs require more CPU or memory, or impact any other resources on the Forwarder instance?

flee
Path Finder

Hi. We'll need to add more JMS Messaging Data Inputs (JMS Modular Input v1.3.7) to connect to different queues from different MQ providers. We'll probably have 7 or 8 providers. The JMS Modular Input is installed on a Heavy Forwarder instance that sends ingested events to indexers on different servers. Would adding more Data Inputs require more CPU or memory, or impact any other resources on this Forwarder instance?
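
For reference, each queue we add would just be another stanza in inputs.conf on the Forwarder, roughly like the sketch below. The stanza names, JNDI classes and connection URLs here are placeholders rather than our real values; the exact parameter names come from the add-on's documentation.

# inputs.conf on the Heavy Forwarder: one JMS stanza per queue (placeholder values)
[jms://queue/orders.provider1]
jms_connection_factory_name = ConnectionFactory
jndi_initialcontext_factory = org.apache.activemq.jndi.ActiveMQInitialContextFactory
jndi_provider_url = tcp://mq-provider1.example.com:61616
index = mq_events
sourcetype = jms_message

[jms://queue/payments.provider2]
jms_connection_factory_name = ConnectionFactory
jndi_initialcontext_factory = com.ibm.websphere.naming.WsnInitialContextFactory
jndi_provider_url = iiop://mq-provider2.example.com:2809
index = mq_events
sourcetype = jms_message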

1 Solution

Damien_Dallimor
Ultra Champion

Presuming you are just using the JMS Modular Input out of the box (and haven't created a custom message handler for pre-processing / custom handling):

1) Multiple JMS stanzas will run in the same JVM, so there won't be any change to memory consumption unless you manually increase the JVM heap size in jms_ta/bin/jms.py (see the sketch after point 2).

2) More JMS stanzas will use linearly more CPU, as more threads are fired up to connect to the different MQ providers. But CPU usage should still be pretty low; there aren't any compute-intensive algorithms involved.
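
If you do need more heap, it is a small edit to the java launch arguments in jms_ta/bin/jms.py. The exact contents of that script vary by version, so the lines below are only a hypothetical illustration of the kind of change, not the actual file:

# hypothetical excerpt: jms.py builds the java command line for the single shared JVM
# default-style launch (all JMS stanzas share this heap):
#   java -classpath <add-on jars> <modular input main class>
# with a larger maximum heap:
#   java -Xms64m -Xmx256m -classpath <add-on jars> <modular input main class>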

The resource bottleneck, depending on what sort of message throughput you are processing, is usually the STDIN/STDOUT pipe between the Modular Input process (which writes to STDOUT) and the Splunk Heavy Forwarder process (which reads from STDIN).

To achieve scale, I generally recommend scaling out horizontally with multiple JMS Modular Input instances on multiple forwarders (Universal or Heavy).

See slides 20/21 here: http://www.slideshare.net/damiendallimore/splunk-conf-2014-getting-the-message

flee
Path Finder

Thank you.

esix_splunk
Splunk Employee

Is your system currently constrained on memory or CPU? As you add more inputs, they will use more system resources, so if you are adding to a system that is already resource-constrained, you will most likely have issues.

If your system has enough resources, you can add more inputs.

flee
Path Finder

Our environment is new. We'll have 2 Heavy Forwarders with the JMS Modular Input installed. Each Forwarder has 4 CPU cores, 4 GB RAM and 40 GB storage. I'm trying to figure out which resources (CPU, memory) would be impacted as we add more queues/providers, so we can be ready to increase them as needed.

esix_splunk
Splunk Employee

CPU and memory would be the concern. What's the current state? Use your current utilisation as the baseline metric, e.g., utilisation per number of inputs, and scale based on that.

If you have 20 JMS inputs and you're already seeing 95% utilisation, adding 7 more providers will most likely cause system resource issues. But if you have 20 inputs and are only at 40% utilisation, you should be able to scale up.

Since these machines are not 100% dedicated to Splunk in terms of CPU and memory (think of the OS and other programs running), it's hard to scale without knowing the historical utilisation of the boxes.
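
A quick way to get that baseline on each forwarder host is to snapshot the splunkd and JMS JVM processes directly and compare the figures before and after adding stanzas. For example, on Linux (adjust the process patterns to your environment):

# per-process CPU %, memory % and resident set size for splunkd and the JMS JVM
ps -eo pid,pcpu,pmem,rss,args | grep -E 'splunkd|java' | grep -v grep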

flee
Path Finder

Thank you.
