Splunk Search

Search results might be incomplete: the search process on the local peer:%s ended prematurely?

RichieH
Explorer

Hi All,

When running a search, the following error appears in the job inspector. Users get this message intermittently on searches, and no results are returned.

10-18-2022 11:00:22.349 ERROR DispatchThread [3247729 phase_1] - code=10 error=""
10-18-2022 11:00:22.349 ERROR ResultsCollationProcessor [3247729 phase_1] - SearchMessage orig_component= sid=1666090813.341131_7E89B3C6-34D5-44DA-B19C-E6A755245D39 message_key=DISPATCHCOMM:PEER_PIPE_EXCEPTION__%s message=Search results might be incomplete: the search process on the peer:pldc1splindex1 ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.

The messages.conf shows:

[DISPATCHCOMM:PEER_PIPE_EXCEPTION__S]
message = Search results might be incomplete: the search process on the local peer:%s ended prematurely.
action = Check the local peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.
severity = warn

I also have Splunk alerts that are showing false positives: the alert search is returning no results, but the scheduler (sourcetype=scheduler) shows the alert emails going out with success.

Is this related?

What does PEER_PIPE_EXCEPTION__S mean?

Splunk Enterprise (on-prem) version 9.0.1 in a distributed environment.
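For reference, the scheduler activity I'm looking at comes from a search along these lines (the saved search name is just a placeholder):

index=_internal sourcetype=scheduler savedsearch_name="My Alert" | table _time savedsearch_name status result_count alert_actions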

Thanks


richgalloway
SplunkTrust

Did you look at splunkd.log on the peer as well as search.log, as the error suggested?  What did you find there?
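If it helps, assuming shell access on the peer and the default log location, something like this narrows it down to the time of the failed search:

grep "10-18-2022 11:00" $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -E "ERROR|WARN"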

Messages.conf is not a troubleshooting aid.  It's for assigning severities to log messages.  "PEER_PIPE_EXCEPTION__S" identifies the type of error encountered.
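If you want to see where that message key is defined (just a sanity check, assuming a standard install), btool should be able to dump the stanza:

$SPLUNK_HOME/bin/splunk btool messages list DISPATCHCOMM:PEER_PIPE_EXCEPTION__S --debug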

---
If this reply helps you, Karma would be appreciated.

RichieH
Explorer

I found this in splunkd.log on one of the Splunk indexers at the time of the error message:

10-18-2022 11:00:17.141 +0000 ERROR SearchProcessRunner [2379030 PreforkedSearchesManager-0] - preforked process=0/437059 hung up
10-18-2022 11:00:17.163 +0000 WARN  SearchProcessRunner [2379030 PreforkedSearchesManager-0] - preforked process=0/437059 status=killed, signum=9, signame="Killed", coredump=0, utime_sec=1.672967, stime_sec=0.285628, max_rss_kb=207912, vm_minor=72863, fs_r_count=6352, fs_w_count=456, sched_vol=407, sched_invol=1431

Is this a swap memory issue?


richgalloway
SplunkTrust

It could be a memory issue.  Check /var/log/messages on the peer for OOM Killer events.
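For example, assuming a typical Linux setup on the indexer, either of these should surface OOM Killer events:

grep -i "out of memory" /var/log/messages
dmesg -T | grep -i "killed process"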

---
If this reply helps you, Karma would be appreciated.

RichieH
Explorer

Indeed, there were such messages in dmesg on the indexers.

I've had to disable swap memory (swapoff -a) and run a rolling restart across the indexers.
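For anyone hitting the same thing, the rough steps were (assuming an indexer cluster managed from the cluster manager; adjust for your environment):

# on each indexer: disable swap for the running system
swapoff -a
# to keep it off after a reboot, also comment out the swap entry in /etc/fstab

# from the cluster manager: roll the indexers
$SPLUNK_HOME/bin/splunk rolling-restart cluster-peers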

Thanks for your time on this.
