Hi
I'm using the Java SDK to create a socket connection to a Splunk index for posting events from a long-running process.
The socket connection is kept open for as long as the process is running, but I'm not able to see any events in the dashboard until the process is terminated and the socket connection is thereby closed.
I have set TCP_NODELAY to true and flush after each write, but it doesn't seem to make any difference.
Is there any way to overcome this limitation without closing the socket after each write?
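For reference, here is a minimal sketch of the setup described above, assuming the Splunk Java SDK's ServiceArgs and Receiver.attach() API; the host, credentials, and index name are placeholders, not values from the original post.

```java
import com.splunk.Receiver;
import com.splunk.Service;
import com.splunk.ServiceArgs;

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StreamToIndex {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setHost("localhost");
        loginArgs.setPort(8089);
        loginArgs.setUsername("admin");
        loginArgs.setPassword("changeme");

        Service service = Service.connect(loginArgs);
        Receiver receiver = service.getReceiver();

        // attach() opens a long-lived socket to the receivers/stream endpoint
        Socket socket = receiver.attach("my_index");
        socket.setTcpNoDelay(true);

        OutputStream out = socket.getOutputStream();
        for (int i = 0; i < 100; i++) {
            String event = "event " + i + " at " + System.currentTimeMillis() + "\r\n";
            out.write(event.getBytes(StandardCharsets.UTF_8));
            // flushed per write, but events may still sit in a server-side buffer
            out.flush();
            Thread.sleep(1000);
        }
        // closing the socket flushes whatever the server has buffered
        socket.close();
    }
}
```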
If you are using the Receiver attach() method, this uses the receivers/stream REST endpoint in Splunk. There has always been some buffering on the Splunk side; in my rough tests it takes around 1MB before the events start to get indexed, or closing the socket will also flush the buffer.
If you are using the TcpInput attach() method, the events should show up in Splunk immediately. I recommend this approach. The main difference is that you'll need to set up a TCP input in Splunk first.
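For illustration, here is a hedged sketch of the TcpInput approach, assuming a TCP input has already been created in Splunk and is keyed by its port name; the port "9999", host, and credentials below are placeholders.

```java
import com.splunk.Service;
import com.splunk.ServiceArgs;
import com.splunk.TcpInput;

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StreamToTcpInput {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setHost("localhost");
        loginArgs.setPort(8089);
        loginArgs.setUsername("admin");
        loginArgs.setPassword("changeme");

        Service service = Service.connect(loginArgs);

        // Look up the existing TCP input by its port name
        TcpInput input = (TcpInput) service.getInputs().get("9999");

        // attach() returns a plain socket to the TCP input; events written
        // here should be indexed without the receivers/stream buffering
        Socket socket = input.attach();
        OutputStream out = socket.getOutputStream();
        out.write("hello from the TCP input\r\n".getBytes(StandardCharsets.UTF_8));
        out.flush();
        socket.close();
    }
}
```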
Hi Damien
Good to know. I'll use TcpInput or REST for real-time scenarios.