Splunk Search

Search results might be incomplete: the search process on the local peer:%s ended prematurely?

RichieH
Explorer

Hi All,

When running a search, the following error appears in the job inspector. Users get this message intermittently on searches, and no results are returned.

10-18-2022 11:00:22.349 ERROR DispatchThread [3247729 phase_1] - code=10 error=""
10-18-2022 11:00:22.349 ERROR ResultsCollationProcessor [3247729 phase_1] - SearchMessage orig_component= sid=1666090813.341131_7E89B3C6-34D5-44DA-B19C-E6A755245D39 message_key=DISPATCHCOMM:PEER_PIPE_EXCEPTION__%s message=Search results might be incomplete: the search process on the peer:pldc1splindex1 ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.

The messages.conf shows:

[DISPATCHCOMM:PEER_PIPE_EXCEPTION__S]
message = Search results might be incomplete: the search process on the local peer:%s ended prematurely.
action = Check the local peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.
severity = warn

I also have Splunk alerts that are showing false positives: the alert search is returning no results, but sourcetype=scheduler shows the alert emails being sent out successfully.
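
For what it's worth, this is roughly how I've been checking the scheduler log for one of the affected alerts (a rough sketch run from the CLI on the search head; the saved search name is just a placeholder):

$SPLUNK_HOME/bin/splunk search 'index=_internal sourcetype=scheduler savedsearch_name="My Alert" | table _time savedsearch_name status result_count'

The scheduler entries come back with status=success even though running the alert search by hand returns nothing.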

Is this related?

What does PEER_PIPE_EXCEPTION__S mean?

Splunk Enterprise (on-prem) version 9.0.1 in a distributed environment.

Thanks

1 Solution

richgalloway
SplunkTrust

It could be a memory issue.  Check /var/log/messages on the peer for OOM Killer events.

---
If this reply helps you, Karma would be appreciated.


richgalloway
SplunkTrust

Did you look at splunkd.log on the peer, as well as the search.log, as the error message suggested?  What did you find there?
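
If it helps, on the peer named in the error (pldc1splindex1) you can usually get at both files from the command line. This is only a rough sketch, assuming the default $SPLUNK_HOME layout and that the dispatch artifact for that sid has not been reaped yet:

# splunkd.log on the peer, around the time of the error
grep "10-18-2022 11:00" $SPLUNK_HOME/var/log/splunk/splunkd.log

# search.log for that specific search, keyed on the sid from your error message
find $SPLUNK_HOME/var/run/splunk/dispatch -name search.log -path "*1666090813.341131*" 2>/dev/null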

Messages.conf is not a troubleshooting aid.  It's for assigning severities to log messages.  "PEER_PIPE_EXCEPTION__S" identifies the type of error encountered.

---
If this reply helps you, Karma would be appreciated.

RichieH
Explorer

I found this in splunkd.log on one of the Splunk indexers at the time of the error message:

10-18-2022 11:00:17.141 +0000 ERROR SearchProcessRunner [2379030 PreforkedSearchesManager-0] - preforked process=0/437059 hung up
10-18-2022 11:00:17.163 +0000 WARN  SearchProcessRunner [2379030 PreforkedSearchesManager-0] - preforked process=0/437059 status=killed, signum=9, signame="Killed", coredump=0, utime_sec=1.672967, stime_sec=0.285628, max_rss_kb=207912, vm_minor=72863, fs_r_count=6352, fs_w_count=456, sched_vol=407, sched_invol=1431

Is this a Swap memory issue?


richgalloway
SplunkTrust

It could be a memory issue.  Check /var/log/messages on the peer for OOM Killer events.
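
For example, something along these lines (assuming a typical Linux host; the exact log file varies by distro):

grep -iE "out of memory|oom-killer|killed process" /var/log/messages
# or check the kernel ring buffer with readable timestamps
dmesg -T | grep -iE "out of memory|oom"

If the kernel's OOM killer took out the search process, it will show up there with the pid of the killed splunkd process.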

---
If this reply helps you, Karma would be appreciated.

RichieH
Explorer

Indeed, there were such messages in dmesg on the indexers.

I've had to disable swap memory:  swapoff -a

and performed a rolling restart across the indexers.
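
Roughly what I ran, in case it helps anyone else (this assumes an indexer cluster managed from the cluster manager):

# on each indexer: turn swap off for the running system
sudo swapoff -a
# (and comment out the swap line in /etc/fstab so it stays off after a reboot)

# from the cluster manager: rolling restart of all cluster peers
$SPLUNK_HOME/bin/splunk rolling-restart cluster-peers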

Thanks for your time on this.
