
TCP Buffer Options

ezajac
Path Finder

We are using Cloud Foundry for an internal cloud implementation. As we migrate applications, we are streaming logs over TCP rather than through the Splunk forwarder. The Cloud Support Team is observing errors logged because the TCP stream is backing up. I am using a vanilla TCP input configuration. Are there extra settings I can add to the stanza to increase the buffer and prevent messages from dropping?

Sample Vanilla TCP configuration:
[tcp://3301]
connection_host = dns
index = index_name
sourcetype = rfc5424_syslog
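
For what it's worth, here is one possible extension of that stanza, sketched against Splunk's documented inputs.conf settings. queueSize and persistentQueueSize are real inputs.conf parameters for TCP inputs, but the values below are illustrative assumptions, not tuned recommendations:

[tcp://3301]
connection_host = dns
index = index_name
sourcetype = rfc5424_syslog
# Illustrative values; size these for your burst volume.
# In-memory queue for this input (default is 500KB):
queueSize = 10MB
# Disk-backed overflow queue used when the in-memory queue fills
# (default is 0, i.e. no persistent queue):
persistentQueueSize = 500MB

Note that these queues only help when Splunk's own pipeline is the bottleneck; if Doppler is dropping because the TCP connection itself is saturated, you may also need more capacity on the sending side.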

Logs from the Cloud Foundry Doppler Service:
{"timestamp":1474911116.219854593,"process_id":17528,"source":"doppler","log_level":"warn","message":"TB: Output channel too full","data":{"appId":"d41c5c78-955a-4148-b575-cf868dc0b6fe","destination":"syslog://tlaloga1.dev.prod.travp.net:3303","dropped":99,"total_dropped":496},"file":"/var/vcap/data/compile/doppler/loggregator/src/truncatingbuffer/truncating_buffer.go","line":112,"method":"truncatingbuffer.(*TruncatingBuffer).forwardMessage"}
{"timestamp":1474911142.510904074,"process_id":17528,"source":"doppler","log_level":"warn","message":"TB: Output channel too full","data":{"appId":"d41c5c78-955a-4148-b575-cf868dc0b6fe","destination":"syslog://tlaloga1.dev.prod.travp.net:3303","dropped":99,"total_dropped":595},"file":"/var/vcap/data/compile/doppler/loggregator/src/truncatingbuffer/truncating_buffer.go","line":112,"method":"truncatingbuffer.(*TruncatingBuffer).forwardMessage"}
{"timestamp":1474911161.138316393,"process_id":17528,"source":"doppler","log_level":"warn","message":"TB: Output channel too


ahev
New Member

Rather than increasing the buffer size, you can scale the number of Dopplers available. Below is a link to a guide on how many Dopplers and Traffic Controllers to plan for.

https://discuss.pivotal.io/hc/en-us/articles/225564028-How-to-Calculate-the-Loggregators-Message-Thr...
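
For orientation, scaling Dopplers in a BOSH-managed Cloud Foundry deployment usually comes down to raising the instance count in the deployment manifest and redeploying with bosh deploy. The excerpt below is a hypothetical sketch; instance group names and counts vary by deployment and manifest version, so use the guide linked above as the authority for sizing:

# Hypothetical BOSH v2 manifest excerpt; group names vary by deployment.
instance_groups:
- name: doppler
  instances: 4   # raised from 2 to add drain capacity
- name: loggregator_trafficcontroller
  instances: 2   # scale Traffic Controllers alongside Dopplers if needed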
