We are having some issues finalizing the installation of our Splunk environment. We have two Linux servers: one Search Head and one Indexer acting as a search peer. We had just finished setting up the search peer under "Distributed search", so we ran the search "index=_internal sourcetype=splunkd" over the last 60 minutes, but it only returned logs from the Indexer.
We then noticed that TailReader-0 on the Search Head was in an error state: "The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data." along with related messages such as:
08-19-2020 12:46:50.607 +0200 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
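To see how full the queues actually are, we were planning to check the queue metrics in _internal with a search along these lines (a sketch; the host value is a placeholder for our Search Head):

```
index=_internal host=<search_head> source=*metrics.log group=queue
| timechart avg(current_size_kb) by name
```

On our instance this shows the parsingQueue pegged near its maximum.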
This is strange because we configured outputs.conf on the Search Head to send data to the Indexer, and inputs.conf on the Indexer to receive it, so we are not sure what's wrong.
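For reference, our forwarding configuration looks roughly like this (hostnames are placeholders):

```ini
# Search Head: $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx01.example.com:9997

# Indexer: $SPLUNK_HOME/etc/system/local/inputs.conf
[splunktcp://9997]
disabled = 0
```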
Both servers have been restarted. I suspect the queues are full because the Search Head can't send its data, but why?
Port 9997 is open and the connection from the Search Head to the Indexer is fine. We also don't have any forwarder or data input configured, so it shouldn't be caused by a sudden burst of incoming data.
We restarted the Search Head after this, and now we can't run searches at all: every search fails, and the job inspector says "This search has encountered a fatal error and has been marked as zombied".
Could it be a performance issue? Our servers only have 4 CPUs and 12 GB of RAM. Do we need more CPU power to solve these issues?