Here is the error: [QuartzScheduler_Worker-23] ERROR org.easybatch.core.job.BatchJob - Unable to write records java.net.SocketException: Broken pipe (Write failed)
I have an input that queries the vSMS_R_System table in SCCM. It is a simple SELECT *. The instance I am querying has only 8,300+ records, with about 50 attributes (a mix of INT, CHAR, and DATETIME). I can retrieve the records when running an ad hoc search (i.e. `| dbxquery connection= query="SELECT * FROM vSMS_R_System"`), but when I create a batch input with this query, it fails with the above error. I have the input set to retrieve a maximum of 1,000 rows, fetching 100 rows at a time. Looking through the logs, it appears that the query succeeds, but the results cannot be written to the index.
When I set the maximum rows to 100, the query worked and the data was written to the index. However, I will have tables configured as batch inputs with hundreds of thousands of rows, so capping max rows at 100 is not a workable fix.
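For reference, this is roughly how the input is configured. This is a hedged sketch of a DB Connect `db_inputs.conf` stanza, not my exact file; the stanza name and the connection value are placeholders, and parameter names may differ slightly between DB Connect versions:

```ini
# Hypothetical batch input stanza (names/values are illustrative)
[sccm_vsms_r_system]
connection = my_sccm_connection
mode = batch
query = SELECT * FROM vSMS_R_System
max_rows = 1000
fetch_size = 100
index = sccm
interval = 3600
```

With `max_rows = 100` the write succeeds; with `max_rows = 1000` it fails with the broken pipe error above.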
Have there been other reports of this happening? Is this something to do with server hardware? Does Splunk try to hold all of the query results in memory before writing? Or have I configured my input incorrectly?
I appreciate any help!
EDIT FOR CLARITY: This is an SCCM instance that I am querying, and the backend is MSSQL Server 2016.