Alerting

Alert triggering in Splunk due to slowness

Motivator

We have an alert set up in Splunk, but when I checked, there was no actual issue. The alert may be triggering because Splunk takes a long time to run the search query. Could anyone please give a suggestion?

0 Karma
1 Solution

Can you tell us more about what you mean by "false alert is coming that is the issue due to slowness"? Are you saying your search takes so long to return that it times out, giving the false impression that there were no results? If so, maybe a solution would be to tune the query to ensure it never times out.

Based on your comments above, it seems like you're running this query:

index=index_days sourcetype=sourcetype_name "search string" | stats count

And you want it to alert if there are 0 results returned, right? But you are getting alerts for times when you think it should have found results? If so, maybe try this:

index=index_days sourcetype=sourcetype_name "search string" | head 1

That way, if there's a single result, it will find the first one and return immediately. That could help with a timeout.
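As a rough sketch of how that tuned search might be scheduled as an alert, here is a hypothetical savedsearches.conf stanza. The stanza name, schedule, and the index/sourcetype values are placeholders, not taken from the original post:

```
# savedsearches.conf - hypothetical alert definition; the stanza name,
# index, sourcetype, and schedule are all placeholder assumptions
[my_slowness_alert]
search = index=index_days sourcetype=sourcetype_name "search string" | head 1
dispatch.earliest_time = -60m
dispatch.latest_time = now
enableSched = 1
cron_schedule = 0 * * * *
# Trigger when the (fast) head-1 search returns no events at all
counttype = number of events
relation = less than
quantity = 1
```

Because `head 1` stops as soon as it finds one matching event, the scheduled search should finish well within its window instead of timing out.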


SplunkTrust

Hi @logloganathan. It seems like maybe you need some quicker feedback. For more direct help, please join the Splunk Slack channel via the form that is linked on the accepted answer on this page -
https://answers.splunk.com/answers/443734/is-there-a-splunk-slack-channel.html

On Slack, you can ask your question on the #n00b or #general channels, and people will chime in pretty quickly to help you.

Here, you can upvote any answers that you found particularly helpful. On the Slack channel, you can do something similar by typing @somebodysname++ (where somebodysname is their slack handle).

Motivator

Thanks DalJeanis


Motivator

Wow, exactly. That's the same thing I want. Please post it in the answer box.

0 Karma

Motivator

thanks for the answer

0 Karma

Motivator

Could you please convert this command into a transforming command?

index=index_days sourcetype=sourcetype_name "search string" | head 1

0 Karma

Sure, but what's the goal of doing so? If we're just transforming for the sake of turning it into a table:

index=index_days sourcetype=sourcetype_name "search string"
| head 1
| stats values(*) AS *

or

index=index_days sourcetype=sourcetype_name "search string"
| head 1
| stats count

Influencer

Can you show what search is the base for the alert?

Motivator

Actually, it's a log event:

index=index_days sourcetype=sourcetype_name "search string" | stats count

It should trigger an alert if the table value is less than 1, but it is triggering even when there is no issue.
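One detail worth checking here (this is an assumption about how the alert is configured, not something stated in the thread): `| stats count` always returns exactly one result row, even when the count is 0. So a trigger condition of "number of results < 1" would never fire with this search; what is usually intended is a custom trigger condition on the count field itself:

```
index=index_days sourcetype=sourcetype_name "search string" | stats count
```

with the alert's custom trigger condition set to `search count < 1`, so the alert fires only when the transforming search reports zero matching events.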

0 Karma

Motivator

Waiting for someone to provide an update.

0 Karma

Path Finder

@logloganathan are you using a custom alert condition, or the condition "if number of results > 1"?
Also, have you set any throttling for the alert, and do you want it to trigger only once or once per result?

Let me know.

Motivator

If the number of results > 1, then trigger only once.

0 Karma

What's the time window the search is using? Depending on the delay for data populating through the system, a window that is too short/recent might alert even though data is in the index pipeline and shows up in later searches for the same time window.
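If indexing lag does turn out to be the cause, one common workaround is to shift the search window back so it only covers data that has had time to be indexed. The 10-minute lag here is an assumed figure for illustration:

```
index=index_days sourcetype=sourcetype_name "search string" earliest=-70m@m latest=-10m@m
| head 1
```

This still searches a 60-minute window, but ends it 10 minutes in the past, so events still in the indexing pipeline don't cause a false "no results" alert.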

Motivator

I am using the last 60 minutes.

0 Karma

Ah, then that's not likely the issue. Is the search alerting every time it runs, or just sometimes? If it's every time, maybe the search is running with the wrong permissions or in the wrong app to actually gather the data expected.

Motivator

The search is not alerting every time.

0 Karma

Motivator

Waiting for a response.
Could anyone please provide an update?

0 Karma

If the search is failing to alert every single time, have you tried the troubleshooting step of manually running the specific search as the user who is scheduled to run the alert, inside the same app? I've made the mistake before of building an alert in one app, then saving/scheduling it in another, and discovering it wasn't able to run as expected.
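As a sketch of that troubleshooting step, the saved search's app context and owner can be inspected with the `rest` command. The alert name below is a placeholder:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="your_alert_name"
| table title eai:acl.app eai:acl.owner eai:acl.sharing cron_schedule
```

If `eai:acl.app` is not the app where the underlying data and knowledge objects live, or `eai:acl.owner` is a user without access to the index, that could explain searches that intermittently return nothing.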

0 Karma

Champion

What do you mean by "it will trigger alert if table value less than 1"? Did you mean count < 1 in your search?

Motivator

The table query displays a lot of rows.
If it displays no rows, then I need an alert.

0 Karma