My org is moving away from PagerDuty to Opsgenie, and the Opsgenie documentation makes setup seem fairly quick. After doing the suggested steps (installing the app, adding the API key, and selecting the correct region) and then searching the _internal index for opsgenie, I keep seeing these errors:
WARN sendmodalert [24146 AlertNotifierWorker-0] - action=opsgenie - Alert action script returned error code=3
INFO sendmodalert [24146 AlertNotifierWorker-0] - action=opsgenie - Alert action script completed in duration=553 ms with exit code=3
ERROR sendmodalert [24146 AlertNotifierWorker-0] - action=opsgenie STDERR - Unexpected error: No credentials found. Could not get Opsgenie API Key.
The app stores the API key (encrypted) in the local directory under the app's main directory, so I know it is saving the API key. Has anyone else run into this? I can't seem to get it to work no matter what I do. I have added the relevant capability (list_storage_passwords) to the user roles, and it doesn't make a difference. Any help would be appreciated!
Thanks!
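For anyone debugging this: modular alert actions normally keep the key in Splunk secure storage, i.e. a passwords.conf under the app's local directory. A minimal sketch for confirming a credential stanza was actually written, assuming an on-prem install with a default layout; the app directory name "opsgenie" here is a guess, so adjust both for your deployment (the stored value is encrypted, the stanza name just confirms storage happened):

```shell
# Sketch: check whether an app wrote a credential stanza to passwords.conf.
# $1 = SPLUNK_HOME, $2 = app directory name (both assumptions for your install).
list_stored_credentials() {
  conf="$1/etc/apps/$2/local/passwords.conf"
  if [ -f "$conf" ]; then
    # Print stanza headers; values are encrypted and not useful to read directly.
    grep '^\[credential' "$conf"
  else
    echo "no passwords.conf at $conf"
  fi
}

# Example invocation (paths assumed; adjust for your install):
list_stored_credentials "${SPLUNK_HOME:-/opt/splunk}" "opsgenie"
```

If the stanza exists but the script still exits with code 3, the problem is likely retrieval (roles/capabilities or an app conflict) rather than storage.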
It turned out that disabling the Jamf Pro Add-on for Splunk Cloud fixed the Opsgenie plugin. I've opened a case with Atlassian support and they logged a corresponding bug: https://jira.atlassian.com/browse/OPSGENIE-1578
You can click on 'More > Add Vote' to flag that you are affected by this problem.
Having the exact same issue. Did you manage to figure this out yet?
Found it. It is the Splunk Add-on for Google Workspace.
Experiencing the exact same issue.
Reading between the lines, I suspect that the two add-ons save their tokens in the same way/same place, thus blocking each other from finding the right token. Note: I have yet to hear confirmation of this from Splunk Support.
You are saying the 'Splunk Add-On for Google Workspace' was the cause, but what was your solution? Did you disable the Google Workspace add-on, or how did you solve it?
I am working with Splunk Cloud, so I have no access to the back end of the add-ons.
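If the clashing-token-storage theory is right, one way to see which apps maintain their own credential store (on-prem only, since Splunk Cloud hides the filesystem) is to list every passwords.conf under etc/apps. A rough sketch; the SPLUNK_HOME path is an assumption:

```shell
# Sketch: list every app that keeps its own passwords.conf credential store.
# $1 = SPLUNK_HOME (assumed path; not applicable on Splunk Cloud).
list_apps_with_secrets() {
  find "$1/etc/apps" -name passwords.conf 2>/dev/null
}

list_apps_with_secrets "${SPLUNK_HOME:-/opt/splunk}"
```

Two apps showing up here does not prove a conflict by itself, but it narrows down which add-ons could be stepping on each other.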
Can you share how you managed to resolve this issue in Splunk Cloud? I ran into the same error, and Splunk Support hasn't been much help.
We did not.
Both OpsGenie Support (where a ticket was created as well) and Splunk Support don't seem to be able to help much.
As a workaround we are using an e-mail integration. Basically, Splunk sends e-mail to an inbox, which gets forwarded to OpsGenie. Obviously not ideal, but manageable until a fix is created by either Splunk or OpsGenie.
It is frustrating, that is for sure. So yes, it was the Google app that killed it, and disabling it is exactly what I did. I disabled the Google Workspace add-on on the SH/HF that has all my alerts (with Opsgenie now running on it), and then deployed a new SH/HF just to run the Google Workspace add-on so we still get those logs.
As a matter of fact, I am in the middle of it. It's another app we are using, though I'm not sure which one yet. I copied the whole /etc/apps to another search head where I had Opsgenie working BEFORE I copied everything over. As soon as I copied, I started getting the errors again. I disabled the first two pages of apps and it started firing alerts off again. Now I am going back through those first two pages, enabling one app at a time while watching the _internal logs to see when the error comes back.
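That bisection can be scripted loosely: after enabling each app and triggering a test alert, check whether the credential error reappeared in splunkd.log. A sketch, assuming the default on-prem log location:

```shell
# Sketch: count occurrences of the Opsgenie credential error in a log file.
# $1 = path to splunkd.log (default-install assumption; adjust for your host).
opsgenie_error_count() {
  # grep -c prints 0 and exits non-zero when there are no matches,
  # so swallow the exit status to keep the function usable under `set -e`.
  grep -c 'Could not get Opsgenie API Key' "$1" 2>/dev/null || true
}

opsgenie_error_count "${SPLUNK_HOME:-/opt/splunk}/var/log/splunk/splunkd.log"
```

If the count goes from 0 to non-zero right after enabling a particular app, that app is your prime suspect.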