Getting Data In

TCP data input: Why is Splunk receiving only some data? Is there a limit that needs to be configured?

jimrantoday
Explorer

Hello team,

When I send data from my CloudBees syslog Java client over a TCP data input, only some of it makes it to Splunk. Exactly 206 records arrive even though I am sending well over 1000 records. Is there a limit that needs to be configured? Please advise.

Thank you.

1 Solution

jimrantoday
Explorer

Adding

[tcp://5540]
queueSize = 5KB
persistentQueueSize = 10KB

to inputs.conf at $SPLUNK_HOME/etc/apps/"your app name"/local did the trick for me.

After adding the configuration, you can check whether the settings were picked up by running "splunk cmd btool --app=search inputs list" from a command prompt in Splunk\bin.
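
As a side note, btool's --debug flag prefixes every line of output with the path of the .conf file that supplies it, which is a quick way to confirm that your local file is the one providing queueSize and persistentQueueSize. A minimal sketch, run from a command prompt in Splunk\bin:

splunk btool inputs list --debug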

jkat54
SplunkTrust

@jimrantoday,

Btool shows the configuration that exists on disk, not what's loaded into memory.

Appreciate the credit for all the help...

It's like you came up with the answer all on your own...

gcusello
SplunkTrust

Hi jimrantoday,
do you have disk performance problems?
One of my customers had this problem: they used very slow disks, so the indexers couldn't index all the data and some of it was lost.
You can verify this with a simple search:

index=_internal source=*metrics.log sourcetype=splunkd group=queue host="your_indexer" blocked | timechart count by name

You can check disk performance with an external tool (like Bonnie++); Splunk requires at least 800 IOPS.

To avoid this problem I configured a persistent queue on my indexer:
in the /opt/splunk/etc/apps/search/local/inputs.conf file, in the udp and tcp stanzas, I added the persistentQueueSize = 10MB parameter, as sketched below.
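
A minimal sketch of those stanzas; the TCP port is the one from this thread, while the UDP port is an illustrative assumption:

# port numbers are examples; 5540 is from this thread, 514 is an assumption
[tcp://5540]
persistentQueueSize = 10MB

[udp://514]
persistentQueueSize = 10MB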

Bye.
Giuseppe

jkat54
SplunkTrust

TCP and UDP inputs have a receive buffer that doesn't flush to disk until it overflows or Splunk restarts.

Check out _rcv_buffer (I think it's called) in inputs.conf
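
For what it's worth, the inputs.conf spec does list a receive-buffer setting; my recollection is that it is spelled _rcvbuf, but treat this stanza as an assumption and confirm the name and default against the spec file for your version:

# assumed setting name and value; verify against inputs.conf.spec
[tcp://5540]
_rcvbuf = 1572864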

jkat54
SplunkTrust

Try this instead:

[tcp://5540]
queueSize = 1KB

Or

[tcp:5540]
queueSize = 1KB

Or

[tcp://*:5540]
queueSize = 1KB

jkat54
SplunkTrust

Yeah, OK, it's queueSize, but try something much smaller, like 1KB.

jimrantoday
Explorer
1. Increased the limits to see whether the number of events indexed in Splunk increases.

-> Added the following configuration in \system\local\inputs.conf:
[tcp://5540]
queueSize = 50MB
persistentQueueSize = 100MB

-> Restarted Splunk.
-> Ran the test; the number of indexed events remains 206 (whereas 6500 events were available to send via TCP).

Note: when I ran "splunk cmd btool --app=search inputs list", my output was:
[splunktcp://9997]
connection_host = ip
[tcp://5540]
connection_host = dns
disabled = 0
sourcetype = syslog

Looks like my config changes have not been picked up even after the restart.

2. Decreased the limits to 1KB; the number of events remains the same (206).

What could possibly be the issue here?
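
A hedged observation on the btool output above: as I read the btool documentation, --app=search limits the output to configuration contributed by the search app, while the new settings were written to \system\local, so they would not show up in that command even if they were loaded. A sketch of a broader check, assuming a Windows command prompt in Splunk\bin (findstr is the stock Windows filter):

splunk btool inputs list --debug | findstr 5540

The --debug flag prefixes each line with the contributing .conf file, which shows whether \system\local\inputs.conf is actually being read for the [tcp://5540] stanza.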

jimrantoday
Explorer

I have searched the inputs.conf documentation and couldn't find anything related to out_rcv_buffer. I tried changing queueSize to 50MB and restarted the Splunk instance, but even that didn't help. Could you please check the property name?
