Hi,
I have created a Splunk email alert and it seems to be triggering twice. Below are the query and alert configuration.
query:
index="liquidity" AND cf_space_name="pvs-ad00008034" AND (msg.Extended_Fields.ValueAmount = "0" OR msg.Extended_Fields.ValueAmount = "NULL" OR msg.Results.Message="EWI Load process is completed*")
| table _time, msg.Extended_Fields.DataSource, msg.Extended_Fields.ValueAmount, msg.Results.Message
| sort by _time
| rename msg.Extended_Fields.ValueAmount as ValueAmount
| rename msg.Results.Message as Message
| rename msg.Extended_Fields.DataSource as DataSource
trigger condition:
search Message = "EWI Load process is completed*" | stats count as Total | search Total > 0
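For context, here is a sketch of the base search combined with the trigger condition (same field names as above, not the saved alert itself); running it manually over the alert's scheduled window shows how many triggering rows a single run produces:
index="liquidity" AND cf_space_name="pvs-ad00008034" AND msg.Results.Message="EWI Load process is completed*"
| rename msg.Results.Message as Message
| search Message="EWI Load process is completed*"
| stats count as Total
| search Total > 0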
Hi @mm185429, were you able to find a solution? I am facing the same issue, but in my case it is a Splunk report. We have a custom alert action for mailing that pulls the mail contacts from a lookup, and the lookup contained two DLs. When I re-ran the report with my own email I received it only once, so for now I have cloned the report and asked the users to check whether they still receive it twice, since the actual report should run once a day.
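A quick sanity check on the lookup for duplicate recipients could look something like the sketch below (email_contacts.csv and the email field are placeholders for the actual lookup and column names):
| inputlookup email_contacts.csv
| stats count by email
| where count > 1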
I checked the internal logs and can see that two mails were sent out at the same time, but there is only one report, which is scheduled to run once a day.
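For reference, a sketch of the kind of scheduler check that can confirm the report itself ran only once (the savedsearch_name value here is a placeholder for the actual report name):
index=_internal sourcetype=scheduler savedsearch_name="My Daily Report"
| table _time savedsearch_name status sid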
How do you know that the alert is triggered twice? Because you get a double email? Do all the recipients get the email twice, or is it just you? Are you sure there are no redirections in your email system?
@PickleRick The issue is intermittent. The double email is sent to all the recipients. There is no redirection from the email system as far as I know.
Ok. And it's the same email - from the same scheduled search run? Not from two separate ones?
Is it a standalone SH or a cluster? Does searching for sendemail.py yield a single send per alert or a double one?
@PickleRick Yes, it is the same email from the same scheduled search run, not from two separate ones.
I'm not sure if it is a standalone SH or a cluster since this is managed by another team in my organization. How can I search for sendemail.py?
To be honest, if it's not "your" environment (I mean you're not administering it), I'd just create a ticket with your Splunk admin team, because you probably don't have enough permissions to troubleshoot it on your own.
You could try to search for
index=_internal sendemail.py
around the time your alert was triggered, but typically non-admin users don't have access to internal indexes.
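If it turns out you do have access, something like the sketch below narrows it down (the "Sending email" text is an assumption about how sendemail.py logs each send, so adjust it if it doesn't match your logs):
index=_internal sendemail.py "Sending email"
| table _time host source _raw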