
Why are there errors on new Search Head Cluster member?


I recently added a new member to my search head cluster and am now receiving a continuous stream of errors from that host (example below). Any idea how I can determine what is causing these errors and how to fix them?
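In case it is relevant, the check I used to get the captain's view of each member is roughly this (the credentials below are placeholders, not my real values):

# On any member: show captain election state and each member's status
/opt/splunk/bin/splunk show shcluster-status -auth admin:changeme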

Interestingly, when I look at a count of the alerts, the number per hour has steadily decreased by about 5-10 per hour since they first started:
[screenshot: hourly count of the SHCMasterHTTPProxy warnings, trending downward]
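For reference, the hourly count above comes from a search along these lines, just a timechart over the same filter I pasted further down:

index=_internal source="/opt/splunk/var/log/splunk/splunkd.log" "SHCMasterHTTPProxy - Low Level http request failure err=Deserialization failed." | timechart span=1h count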

I also noticed that the error references two apps that don't currently show any data: NetApp and Palo Alto. I'm not sure whether they ever displayed data, as I have never used them, but I know they have not displayed data for quite some time - long before these errors started. The "skipping ... bytes" note in the error indicates the message is truncated, so I can't tell whether other apps are referenced as well.
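In case it is useful, something like this REST search should confirm whether those two apps still have scheduled searches enabled. The Palo Alto app name appears verbatim in the error, but the NetApp app directory name is my guess, so adjust as needed:

| rest /servicesNS/-/-/saved/searches splunk_server=local | search eai:acl.app="splunk_app_netapp" OR eai:acl.app="SplunkforPaloAltoNetworks" | table title eai:acl.app eai:acl.owner is_scheduled cron_schedule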

These are the steps I have tried to resolve the issue (the commands I ran are sketched after the list):

  • Performed a rolling restart of the SHC
  • Removed, cleaned, and re-added the newest member
  • I haven't seen any problems while using the newest member otherwise; searching works, dashboards work, etc.
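Roughly, this is what the remove/clean/re-add looked like; the management URIs and credentials here are placeholders rather than my real values:

# On the captain: remove the problem member and do a rolling restart
/opt/splunk/bin/splunk remove shcluster-member -mgmt_uri https://new-member.example.com:8089 -auth admin:changeme
/opt/splunk/bin/splunk rolling-restart shcluster-members -auth admin:changeme

# On the new member: stop it, clear its raft state, start it, then re-add it to the cluster
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk clean raft
/opt/splunk/bin/splunk start
/opt/splunk/bin/splunk add shcluster-member -current_member_uri https://existing-member.example.com:8089 -auth admin:changeme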

Here is one of the errors:

index=_internal source="/opt/splunk/var/log/splunk/splunkd.log" "SHCMasterHTTPProxy - Low Level http request failure err=Deserialization failed."

02-12-2018 10:50:52.843 -0800 WARN SHCMasterHTTPProxy - Low Level http request failure err=Deserialization failed. Could not find expected key 'uniqueguidsartifactids' (Reply: ConfigInfo: feedname = , {\n CC2A8F3B-A392-4C0D-8914-F611CE068DFB -> ConfigItem: name=CC2A8F3B-A392-4C0D-8914-F611CE068DFB title= atomId= owner=system app= customActions={}; ArgsList: {artifactslocationcsv -> ParamType: _dataType=unset _isMultiValue=false {values: {[0]='"artifactid","artifactlogentry",peer,"mvartifactid","mvartifactlogentry","mv_peer"\n"scheduleradminpostfixRMD504f0506f29d1e837at1518456600225083142118D-D20E-4C18-B6EC-EE7B69A5F00B",0,"3142118D-D20E-4C18-B6EC-EE7B69A5F00B",,,\n"scheduleradminpostfixRMD504f0506f29d1e837at1518456600225083142118D-D20E-4C18-B6EC-EE7B69A5F00B",0,"F6E7F7FE-DC53-456F-B8EC-B624BAF5E1B4",,,\n"scheduleradminpostfixRMD504f0506f29d1e837at1518460200253142118D-D20E-4C18-B6EC-EE7B69A5F00B",0,"3142118D-D20E-4C18-B6EC-EE7B69A5F00B",,,\n"scheduleradminpostfixRMD504f0506f29d1e837at1518460200253142118D-D20E-4C18-B6EC-EE7B69A5F00B",0,"F6E7F7FE-DC53-456F-B8EC-B624BAF5E1B4",,,\n"scheduleradminpostfixRMD51d56dd48c3688be1at151845660026467F6E7F7FE-DC53-456F-B8EC-B624BAF5E1B4",0,"3142118D-D20E-4C18-B6EC-EE7B69A5F00B",,,\n"scheduleradminpostfixRMD51d56dd48c3688be1at151845660026467F6E7F7FE-DC53-456F-B8EC-B624BAF5E1B4",0,"F6E7F7FE-DC53-456F-B8EC-B624BAF5E1B4",,,\n"scheduleradminpostfixRMD51d56dd48c3688be1at15184602000CC2A8F3B-A392-4C0D-8914-F611CE068DFB",0,"314211 ...{skipping 103210 bytes}... appnetapp","tsidx-perf-system-ontap",1,1518461700,,,,,\nnobody,SplunkforPaloAltoNetworks,"WildFire Reports - Retrieve Report",1,1518461460,,,,,\nadmin,"splunkappnetapp","tsidx-perf-disk-ontap",1,1518461700,,,,,\nadmin,"splunkappnetapp","tsidx-perf-quota-ontap",1,1518461700,,,,,\nadmin,"splunkappnetapp","tsidx-perf-qtree-ontap",1,1518461700,,,,,\n'} (size=1)}, splunkminversion -> ParamType: dataType=unset _isMultiValue=false {values: {[0]='6.5.0'} (size=1)}, } _m.size=14\n Messages:\n}\n)
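One thing I notice in the tail of that reply is splunkminversion = 6.5.0. I don't know whether a version difference between members could matter here, but as a sanity check I have been comparing what each member reports with:

# Run on each cluster member and compare the version/build strings
/opt/splunk/bin/splunk version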
