I've been trying for a couple of days and haven't been able to get this working. Before I raise a support ticket I wanted to try the forums, so below is an explanation of my problem:
I've got the following configuration in my limits.conf files:
$SPLUNK_HOME/etc/system/local/limits.conf
$SPLUNK_HOME/etc/apps/SplunkUniversalForwarder/default/limits.conf
# Version 6.2.4
[thruput]
maxKBps = 256
However, I am seeing MUCH higher throughput reaching Splunk from all my forwarders. After following some advice on various answers, I ran the following command:
$SPLUNK_HOME/bin/splunk cmd btool limits list thruput
[thruput]
maxKBps = 256
This confirmed my throughput was set correctly and was being picked up by Splunk.
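One extra check that can help here: btool's --debug flag prints the file each line was read from, so you can confirm which limits.conf actually supplied the value (the flag is standard btool, though the output layout varies a bit by version):
$SPLUNK_HOME/bin/splunk cmd btool limits list thruput --debug
Each output line is prefixed with the path of the .conf file it came from, which tells you whether system/local or the app default is winning.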
I'm at a total loss. The search I am running to retrieve the maximum and average throughput is:
index=_internal | where source LIKE "%metrics.log" | where tcp_KBps > 0 | table _time, host, tcp_KBps, tcp_avg_thruput | sort tcp_KBps DESC
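An alternative worth trying (a sketch, assuming your forwarders emit the default group=thruput metrics in metrics.log; these report what the thruput processor itself measured, and despite the lowercase field names the values are in kilobytes per second):
index=_internal source=*metrics.log group=thruput name=thruput
| stats max(instantaneous_kbps) as peak_KBps, avg(average_kbps) as avg_KBps by host
This gives you per-forwarder peaks and averages from the same component that enforces maxKBps, rather than from the tcpout side.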
Also, when I grep splunkd.log for "ThruputProcessor" to determine whether the throughput is being exceeded, no results are found (implying it's not being exceeded).
Any ideas on how to get this setting applied correctly? It's causing me a lot of headaches.
Turns out the limit was being applied. Splunk does not apply maxKBps as a hard limit; instead, it watches the throughput and throttles traffic back once you start going above the limit. As such, you will naturally see rates above that amount for a certain period of time before it stabilises.
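If you want to see the soft throttling happen, a timechart over the thruput metrics shows it (a sketch, again assuming the default group=thruput metrics; substitute your forwarder's hostname for the <forwarder_host> placeholder):
index=_internal source=*metrics.log group=thruput name=thruput host=<forwarder_host>
| timechart span=5m avg(instantaneous_kbps) as avg_KBps, max(instantaneous_kbps) as peak_KBps
Individual 30-second samples can sit well above the configured value, but the averaged line should settle down around maxKBps once throttling kicks in.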
Hi LewisWheeler,
I have the same exact issue. What resolution did you arrive at for this?
Can you please help me with this?
Thanks,
Ramu Chittiprolu
Please try this search (from Splunk on Splunk):
index=_internal source=*metrics.log group=tcpout_connections
| eval kb=(tcp_Bps*30)/1024
| timechart sum(eval(kb/1024)) as MB
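For context, metrics.log snapshots are written every 30 seconds by default, which is why the search multiplies the per-second rate by 30: tcp_Bps*30 is the bytes moved during each sample's interval, the /1024 turns that into kilobytes, and the second /1024 inside the timechart turns the sum into megabytes. For example, a sample with tcp_Bps = 785000 would contribute 785000 × 30 / 1024 / 1024 ≈ 22.5 MB for that interval.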
Can I ask why we don't use tcp_KBps instead?
index=_internal source=*metrics.log group=tcpout_connections
| eval kb=(tcp_KBps*30)
| timechart sum(eval(kb/1024)) as MB
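For what it's worth, the two should agree if tcp_KBps is simply tcp_Bps/1024. A quick way to check (a sketch, assuming both fields appear on the same tcpout_connections events) is to compute both in one search and compare the columns:
index=_internal source=*metrics.log group=tcpout_connections
| eval mb_from_Bps=(tcp_Bps*30)/1024/1024, mb_from_KBps=(tcp_KBps*30)/1024
| timechart sum(mb_from_Bps) as MB_via_Bps, sum(mb_from_KBps) as MB_via_KBps
If the columns match, either field works for this calculation.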
This gives me a result of 23 MB for a particular forwarder during a 30-second interval (~766 KBps), or am I reading this wrong?
As others have said: what is the throughput you're seeing per forwarder?
256 KB/s × 8 = 2,048 kbit/s ≈ 2 Mbit/s.
Can you point me to the comment that asked for this, or am I missing something?
It ranges, but it has topped out at 1,700 KBps (~1.7 MBps).
I believe that 256 is the default (http://docs.splunk.com/Documentation/Splunk/6.2.5/Forwarding/Introducingtheuniversalforwarder).
Have you tried another value?
I changed the limit to maxKBps = 300 and restarted Splunk, then sent a 25 MB file through. No luck - still high thruput (842 KBps over the 25 MB file).
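As a sanity check on those numbers: if maxKBps were a hard cap, a 25 MB file at 300 KBps should take roughly (25 × 1024) / 300 ≈ 85 seconds to forward, whereas at the observed 842 KBps it would complete in about 30 seconds. So the transfer really is running well above the configured limit, at least as a short burst.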
Note that this measurement is in kilobytes per second, whereas the throughput you are recording may be in kilobits per second, which would be 8 times that rate.
The throughput I am recording is the data from metrics.log - I highly doubt Splunk would record a metric in kbps (kilobits per second) while only letting you set the limit in KBps (kilobytes per second). But thanks for the thought - it made me go and look, and at this stage any idea is a good idea!
UPDATE:
Note: In thruput lingo, "kbps" does not mean kilobits per second; it means kilobytes per second. The industry-standard term would be written something like KBps.
(http://docs.splunk.com/Documentation/Splunk/6.2.5/Troubleshooting/Aboutmetricslog)