We are getting a "The search job terminated unexpectedly" error on a panel while running the dashboard. We have checked the resources on the roles, and limits do not seem to be the issue. Below are the errors we are getting in splunkd.log:
07-19-2021 18:54:38.615 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search9_1626720743.295_70D6C089-B900-44A7-887A-6EAB416257C8/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search9_1626720743.295_70D6C089-B900-44A7-887A-6EAB416257C8 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search9_1626720743.295_70D6C089-B900-44A7-887A-6EAB416257C8 not found'</msg>\n </messages>\n</response>\n"
07-19-2021 18:52:27.882 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__RMD5f81a55cc52a3ee52_1626720743.187_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__RMD5f81a55cc52a3ee52_1626720743.187_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__RMD5f81a55cc52a3ee52_1626720743.187_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 not found'</msg>\n </messages>\n</response>\n"
07-19-2021 18:52:39.832 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search7_1626720743.296_70D6C089-B900-44A7-887A-6EAB416257C8/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search7_1626720743.296_70D6C089-B900-44A7-887A-6EAB416257C8 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search7_1626720743.296_70D6C089-B900-44A7-887A-6EAB416257C8 not found'</msg>\n </messages>\n</response>\n"
07-19-2021 18:52:52.889 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search10_1626720743.186_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search10_1626720743.186_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search10_1626720743.186_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 not found'</msg>\n </messages>\n</response>\n"
07-19-2021 18:52:52.950 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search3_1626720742.182_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search3_1626720742.182_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search3_1626720742.182_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 not found'</msg>\n </messages>\n</response>\n"
Hi @bapun18
See if this link helps - Solved: Search Head Cluster SHCMasterArtifactHandler Error... - Splunk Community. It is possibly an SHC imbalance issue; if so, reach out to Splunk support.
---
An upvote would be appreciated if this reply helps!
That is not the same error. Please look at the error in the thread you shared: it is "Failed to trigger replication".
Hi @bapun18
The known issues page may help - Known issues - Splunk Documentation. Search it for 'low level HTTP'; there is a known issue that may apply if your product version is close to the affected one. It relates to the same service your logs show (the path=/services/shcluster/captain/artifacts/... endpoints).
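To compare your deployment against that page, you can confirm the exact version and build on each member. A minimal check, assuming you run it from $SPLUNK_HOME/bin:

./splunk version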
You can check the SHC status with the command below to find out whether there are any imbalances; a rolling restart of the members fixes this in some cases. If the issue still persists and there are no other options, you can reach out to Splunk support.
./splunk show shcluster-status -auth <username>:<password>
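If the status output shows members out of sync, the rolling restart can be issued from the captain. A minimal sketch, assuming it is run from $SPLUNK_HOME/bin on the current captain:

./splunk rolling-restart shcluster-members -auth <username>:<password>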
Hope this helps!
We have already checked the SH cluster status and even tried a rolling restart; nothing is working. We are opening a Splunk support case.
Issue fixed. The issue was on the load balancer side: the total load was not being distributed across all search heads.
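For anyone hitting the same symptom, one way to spot an imbalance like this is to count searches per search head over a recent window. A rough sketch in SPL, assuming default audit logging and that the _audit index is searchable from any member:

index=_audit action=search info=completed earliest=-1h | stats count by host

A roughly even count per host suggests the load balancer is distributing sessions; a heavy skew toward one member points back at the load balancer configuration.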