Splunk Search

Search hangs on "TcpOutbound" error

mh393
Loves-to-Learn

A scheduled search hangs at around 28% completion. In search.log, the following messages appear shortly before the hang:

07-06-2021 07:21:30.846 WARN  SearchResultCollator - Collector xxx-xxxx-xxxx produced chunk with startTime 1625255401.000000 when our cursor time was already 1625255234.000000, time ordering has failed!
07-06-2021 07:21:31.273 INFO  ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
07-06-2021 07:21:31.284 INFO  SearchParser - PARSING: noop 
07-06-2021 07:21:31.284 INFO  DispatchExecutor - BEGIN OPEN: Processor=noop
07-06-2021 07:21:31.284 INFO  SearchEvaluator - Searched for keyword in results without _raw field
07-06-2021 07:21:31.284 INFO  DispatchExecutor - END OPEN: Processor=noop
07-06-2021 07:21:31.284 INFO  PreviewExecutor - Finished preview generation in 0.000866621 seconds.
07-06-2021 07:21:32.373 INFO  ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
07-06-2021 07:21:43.188 INFO  TcpOutbound - Received unexpected socket close condition with unprocessed data in RX buffer. Processing remaining bytes=11014 of data in RX buffer. socket_status="Connection closed by peer" paused=1

This appears to be related to this issue, but I don't see any additional log entries mentioning the ulimit value.
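For reference, this is roughly how I looked for ulimit-related entries (a minimal sketch, assuming the default _internal index and splunkd.log location, and a Linux search head; adjust paths/host if yours differ):

# Splunk logs its effective ulimits in splunkd.log at startup; look for them via the internal index
index=_internal source=*splunkd.log ulimit

# Or check the open-file limit of the running splunkd process directly on the host
cat /proc/$(pgrep -o splunkd)/limits | grep -i "open files"

Neither turned up anything for me beyond the startup defaults, so I'm not sure the ulimit explanation applies here.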
