Getting Data In

TCP Buffer Options

ezajac
Path Finder

We are using Cloud Foundry for an internal cloud implementation. As we migrate applications, we are streaming logs over TCP instead of using the Splunk forwarding agent. The Cloud Support Team is seeing errors logged because the TCP stream is backing up. I am using a vanilla TCP input configuration. Are there extra settings I can add to the stanza to increase the buffer and prevent messages from being dropped?

Sample Vanilla TCP configuration:
[tcp://3301]
connection_host = dns
index = index_name
sourcetype = rfc5424_syslog
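
For illustration, inputs.conf does document queueSize and persistentQueueSize on TCP input stanzas; below is a sketch of the same stanza with those settings added (the sizes are placeholder values for illustration, not tuned recommendations):

[tcp://3301]
connection_host = dns
index = index_name
sourcetype = rfc5424_syslog
# In-memory input queue for this port; the documented default is 500KB.
queueSize = 10MB
# Optional on-disk overflow queue; defaults to 0 (disabled). Placeholder size shown.
persistentQueueSize = 500MB

Note that the "TB: Output channel too full" warnings in the logs below are emitted by Doppler itself, so a larger receive queue on the Splunk side would only help to the extent that slow reads by Splunk are what is backing up the stream.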

Logs from the Cloud Foundry Doppler Service:
{"timestamp":1474911116.219854593,"process_id":17528,"source":"doppler","log_level":"warn","message":"TB: Output channel too full","data":{"appId":"d41c5c78-955a-4148-b575-cf868dc0b6fe","destination":"syslog://tlaloga1.dev.prod.travp.net:3303","dropped":99,"total_dropped":496},"file":"/var/vcap/data/compile/doppler/loggregator/src/truncatingbuffer/truncating_buffer.go","line":112,"method":"truncatingbuffer.(*TruncatingBuffer).forwardMessage"}
{"timestamp":1474911142.510904074,"process_id":17528,"source":"doppler","log_level":"warn","message":"TB: Output channel too full","data":{"appId":"d41c5c78-955a-4148-b575-cf868dc0b6fe","destination":"syslog://tlaloga1.dev.prod.travp.net:3303","dropped":99,"total_dropped":595},"file":"/var/vcap/data/compile/doppler/loggregator/src/truncatingbuffer/truncating_buffer.go","line":112,"method":"truncatingbuffer.(*TruncatingBuffer).forwardMessage"}
{"timestamp":1474911161.138316393,"process_id":17528,"source":"doppler","log_level":"warn","message":"TB: Output channel too


ahev
New Member

Rather than increasing the buffer size, you can scale the number of Dopplers available. Below is a link to a guide on how many Dopplers and Traffic Controllers to plan for:

https://discuss.pivotal.io/hc/en-us/articles/225564028-How-to-Calculate-the-Loggregators-Message-Thr...
