Splunk Search

How do I send an email alert if one or more subsearches exceed 50000 results?

LearningGuy
Builder

Hello,

How do I send an email alert if one or more subsearches exceed 50000 results?
For example, below I have 4 subsearches. If subsearches 1 and 4 exceed 50000, I would like to get an email alert stating that subsearches 1 and 4 exceeded 50000.

Please suggest. Thank you so much.

| base search
[| subsearch 1]
[| subsearch 2]
[| subsearch 3]
[| subsearch 4] 



yuanliu
SplunkTrust
SplunkTrust

Let me preface this by saying that it is rarely good practice, or necessary, to have this many subsearches in one place.

But without knowing the actual use case, such as the data involved in each, I am also not sure I understand what your desired results are. If I disregard the pseudo-code structure and just interpret your words literally, all you need to do is something like

search1
| stats count
| where count > 5000
| eval this = "search 1"
| append
    [search2
    | stats count
    | where count > 5000
    | eval this = "search 2"]
| append
    [search3
    | stats count
    | where count > 5000
    | eval this = "search 3"]
| append
    [search 4
    | stats count
    | where count > 5000
    | eval this = "search 4"]
| stats values(this) as message
| eval message = if(isnull(message), null(), mvjoin(message, ", and") . " exceed 5000")

 

LearningGuy
Builder

Hi @yuanliu 
Thank you for your suggestion. The subsearch has a max 50k limit, not 5k.
If one or more subsearches hit the 50k limitation, I'd want to get an email notification indicating which subsearch exceeded the 50k limit. 
In the example  below, an email alert will be sent indicating that 2 subsearches exceed the 50k limit: search3 = 60k rows and search4 = 70k rows. 

I can create a scheduled report that sends an email every day, but I am not sure if the report has the ability to send emails only when it meets a certain condition.

search1
| join max=0 type=left ip
    [search ip="10.1.0.0/16"  
     |eval this = "search 2"]
| join max=0 type=left ip
    [search ip="10.2.0.0/16"  
     |eval this = "search 3"]
| join max=0 type=left ip
    [search ip="10.3.0.0/16"  
     |eval this = "search 4"]

 


yuanliu
SplunkTrust
SplunkTrust

I'm afraid I still don't understand what you are trying to do.  It makes absolutely no sense to join 50000 raw events.  In fact, it generally makes zero sense to join raw events to begin with.

It is best to describe what the end goal is with zero SPL.  I posted my four commandments of asking an answerable question many times.  Here they are again.

  • Illustrate data input (in raw text, anonymize as needed), whether they are raw events or output from a search that volunteers here do not have to look at.
  • Illustrate the desired output from illustrated data.
  • Explain the logic between illustrated data and desired output without SPL.
  • If you also illustrate attempted SPL, illustrate actual output and compare with desired output, explain why they look different to you if that is not painfully obvious.

In your case, simply avoid illustrating SPL. Just illustrate what your data is, its characteristics, etc., and what your results should look like, and why the illustrated data should lead to the illustrated result, all without SPL. There is a chance that some volunteers can understand if you do NOT show SPL.

LearningGuy
Builder

Hi @yuanliu 
First, I would like to thank you for your help.
This is "partly" related to my previous post that you solved, but I will describe it better here
https://community.splunk.com/t5/Splunk-Search/How-do-I-quot-Left-join-quot-by-appending-CSV-to-an-in...

This is just an example:
I have a host table containing IP and hostname, approximately 100k rows with unique IPs.
I have a contact table containing IP and contact, more than 1 million rows with unique IPs.
Both can be accessed with a DBX query, but unfortunately they are located in different DB connections, so it's not possible to join them at the backend.
So, the workaround is to filter the contact DB by subnet and use subsearches to join the contact DB with the host DB.
Due to the 50k row limit on subsearches, I ran a separate query on the contact DB to find the number of rows for each subnet, then grouped the subnets together to make sure each group stays below 50k rows (please see the diagram below): Group 1 = 40k rows, Group 2 = 45k rows, and Group 3 = 30k rows. After that, I used a left join for each group on the contact DB with the host DB.
Since I don't control the growth of data in the contact DB, I am trying to figure out a way to get an email alert if one of the groups exceeds the 50k limit.
I think I am able to create a scheduled report to produce the stats for each subnet in a group, but going back to my original question:
I simply want to know if it's possible for Splunk to send me an email alert only if it meets certain thresholds.
The subsearch is only one of my cases. Another case is that I have multiple reports that run daily; I intend to read the reports only if there is a problem, such as empty data, meeting certain thresholds, etc.

[Diagram: subnets grouped so that each group stays below the 50k row limit]


Input:
Host table

ip          host
10.0.0.1    host1
10.0.0.2    host2
10.0.0.3    host3
10.1.0.1    host4
10.1.0.2    host5
10.1.0.3    host6
10.2.0.1    host7
10.2.0.2    host8
10.2.0.3    host9


Contact table

ip          contact
10.0.0.1    person1
10.0.0.2    person2
10.0.0.3    person3
10.1.0.1    person4
10.1.0.2    person5
10.1.0.3    person6
10.2.0.1    person7
10.2.0.2    person8
10.2.0.3    person9


Output: 
Join host and contact DB

ip          host     contact
10.0.0.1    host1    person1
10.0.0.2    host2    person2
10.0.0.3    host3    person3
10.1.0.1    host4    person4
10.1.0.2    host5    person5
10.1.0.3    host6    person6
10.2.0.1    host7    person7
10.2.0.2    host8    person8
10.2.0.3    host9    person9





yuanliu
SplunkTrust
SplunkTrust
Since I don't control the growth of data in the contact DB, I am trying to figure out a way to get an email alert if one of the groups exceeds the 50k limit.

That's exactly what my first suggestion does: print a line if and only if one of them exceeded 50k (if you substitute 5000 with 50000). All you need to do is add sendemail after that.

LearningGuy
Builder

Hi @yuanliu 
1) a) I got this when using sendemail. I think the reason is that I am not an admin:
command="sendemail", 'rootCAPath' while sending mail to:

      b)   This is the search, correct?

| search1
| join
    [search2
    | stats count
    | where count > 50000
    | eval this = "search 2"]
    | sendemail  test@testemail.com

 
2) I found that another option is to use "alerts".
I did some tests, but it didn't work. My total count is about 40k.
Under "Trigger Conditions", I set "Trigger alert when number of results is greater than 30,000".

Please suggest.

Thanks


yuanliu
SplunkTrust
SplunkTrust

      b)   This is the search, correct?

No. You keep going back to "join", which everyone advises against. Did you see that my example uses no "join"? Also, please read the sendemail syntax @inventsekar posted.

search1
| stats count
| where count > 50000
| eval this = "search 1"
| append
    [search2
    | stats count
    | where count > 50000
    | eval this = "search 2"]
| append
    [search3
    | stats count
    | where count > 50000
    | eval this = "search 3"]
| append
    [search 4
    | stats count
    | where count > 50000
    | eval this = "search 4"]
| stats values(this) as message
| sendemail to="mybox@example.com" sendresults=true message="Exceeded 50000"

If you cannot use sendemail (e.g., no MTA is defined for Splunk), just get rid of the command and use an alert.
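For the alert route, note that the trigger condition counts the rows the finished search returns, not the raw events behind them. Depending on how stats behaves on an empty result set, it can be safer to end the pipeline with an explicit filter so that a trigger of "number of results is greater than 0" only fires when a threshold was actually crossed. A minimal sketch of the tail of the search for the alert variant (everything above the final stats stays as in the example):

| stats values(this) as message
| where isnotnull(message)

With that in place, save the search as a scheduled alert and set the trigger condition to "Number of results" greater than 0.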

Bottom line: "join" will get you nowhere.

LearningGuy
Builder

@yuanliu 
Thank you for your help. I accepted your suggestion as the solution with the following notes:
- sendemail didn't work because I wasn't an admin
- Using alert worked just fine

Can you clarify what you meant by saying "join" will get me nowhere? 😊
The result using join worked just fine.
My intention is to "join" the data, not to "append" it. When I used append, the data was appended to the original data and I had to use the stats command to merge it.

Thanks


yuanliu
SplunkTrust
SplunkTrust

Can you clarify what you meant by saying "join" will get me nowhere? 😊

Based on several discussions, it is apparent that you treat data in Splunk as if it were in a SQL database. join is one of the commands that is included in SPL for good reasons, but it is often used outside of those reasons.

It is true that a left join can give you a similar effect to append. However, by using the join command in this manner, you misguide yourself into thinking that Splunk is actually performing a useful join when there is nothing to "join". This thinking is quite obvious in the initial searches you illustrated. The sooner you get out of the habit of using the join command, the easier Splunk will become for you. (Joins in NoSQL should generally be avoided because of cost penalties; left joins in NoSQL are even more expensive, although that is a lesser consideration when learning to program in SPL.)
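For illustration, here is roughly what the append-plus-stats idiom looks like when applied to the host/contact example from this thread. The connection names and SQL strings are simply reused from the examples posted above, so treat this as a sketch rather than tested SPL:

| dbxquery connection="connectionA" query="select ip, host from table host"
| append
    [| dbxquery connection="connectionB" query="select ip, contact from table contact"]
| stats values(host) as host, values(contact) as contact by ip

The final stats by ip rolls the two result sets into one row per IP, which is the "native" replacement for a left join. Note that the appended dbxquery still runs as a subsearch, so the usual subsearch limits apply; only moving the data into an index (as discussed elsewhere in this thread) removes them entirely.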

LearningGuy
Builder

@yuanliu 
Good to know.  
If I may ask again, how do you know the cost associated with each SPL command?
Thanks!!


inventsekar
SplunkTrust
SplunkTrust

Hi @LearningGuy 

The sendemail command reference:

https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchReference/Sendemail#Examples
I updated your command with "to=" and a message. Thanks.

| search1
| join
    [search2
    | stats count
    | where count > 50000
    | eval this = "search 2"]
    | sendemail to="test@testemail.com" message="50k reached"

 

thanks and best regards,
Sekar

PS - If this or any post helped you in any way, pls consider upvoting, thanks for reading !

LearningGuy
Builder

Hi @inventsekar 

I got this error when using the sendemail command; that's probably because I am not an admin:
error: command="sendemail", 'rootCAPath' while sending mail to:


Thanks


PickleRick
SplunkTrust
SplunkTrust

Ok. Several things.

1. I'm not sure why you're trying to join two SQL database search results in Splunk. I get that if you have a hammer everything looks like a nail, but don't you have a better tool for it? SQL and Splunk are fairly disjoint worlds, even though there is some form of interfacing between them.

2. The 50k result limit is not a general subsearch limit. The general subsearch limit is 10k results. 50k is the result limit for the join command only.

3. Splunk runs a search, Splunk gets results (possibly incomplete if, as in your case, a subsearch hits limits), Splunk sends results. That's it. You can try searching _internal, or even try accessing specific search job logs for some signs of anomalies, but that would have to be something completely separate from the main search. You don't get any "metadata" about your search within a saved search itself. I already wrote that to you.

4. What I would probably do if I wanted to do something like this (still keeping my first point in mind) would be to first get the data into Splunk, either with a DB Connect input instead of searching remotely on every search, or at least by summary indexing (I don't know your source data, so I don't know which option would be better). Then you can simply do stats over your data in the index instead of bending over backwards to do joins over external data.
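As a rough sketch of the summary-indexing route, a scheduled search along these lines could copy the contact table into an index once a day. The dbxquery string is the one quoted elsewhere in this thread, and contact_summary is a hypothetical index name that would have to exist first:

| dbxquery connection="connectionB" query="select ip, contact from table contact"
| collect index=contact_summary

A similar scheduled search could do the same for the host table over connection A; afterwards, ordinary searches over the two indexes can be combined with stats, with no subsearches and therefore no subsearch limits involved.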

LearningGuy
Builder

Hi @PickleRick 
Thanks for your help
1. Like I mentioned, the DB is on a different connection; if it's possible, it will take a while until the DB team works on this. So, as a workaround, I need to do this at least to get the data for now.

2. Yes 50k is for the join

3. Thanks. Let me look into _internal. The alerting that I am looking for is not only for cases where certain data hits one of Splunk's internal thresholds; I also need it for other cases (non-Splunk internal thresholds), for example, if my scheduled report contains empty data or if data hits a certain threshold (max/min).

4. Sorry, perhaps my explanation in the example is not clear enough because it's difficult to lay it out without a real example in SPL. Both tables (the host table and the contact table) in the example are already in Splunk and can be accessed via a DBX query. Like I mentioned before, the problem is that we cannot join them in the DB; they are on different connections: the host table is in connection A, and the contact table is in connection B.

| dbxquery connection ="connectionA"    query="select ip, host from table host"
| dbxquery connection ="connectionB"   query="select ip, contact from table contact"



I did not search remotely on every search; instead, I ran this command for each subnet to find the number of rows. For example, 10.0.1.0/16 => 20k rows, and so on.

| dbxquery connection ="connectionB"   query="select ip, contact from table contact where ip::inet<'10.0.0.0/16'"
| dbxquery connection ="connectionB"   query="select ip, contact from table contact where ip::inet<'10.1.0.0/16'"
| dbxquery connection ="connectionB"   query="select ip, contact from table contact where ip::inet<'10.2.0.0/16'"
| dbxquery connection ="connectionB"   query="select ip, contact from table contact where ip::inet<'10.3.0.0/16'"


Once I figure out the number of rows, I group the subnets until the total is just below 50k, so I save subsearches. If one subnet is above 50k, I will need to split it. I hope this makes sense. Note that this is only a workaround.

join max=0 type=left ip
[| dbxquery connection ="connectionB"   query="select ip, contact from table contact where ip::inet<'10.0.0.0/16' OR ip::inet<'10.1.0.0/16'"
|eval source="group1"
]
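To monitor these groups, the append/stats pattern suggested earlier in this thread could be pointed at the same dbxquery strings. A rough sketch, reusing the connection name and SQL fragments from above (group 1 uses the SQL shown above; the second group is just an illustrative placeholder):

| dbxquery connection="connectionB" query="select ip, contact from table contact where ip::inet<'10.0.0.0/16' OR ip::inet<'10.1.0.0/16'"
| stats count
| where count > 50000
| eval this = "group 1"
| append
    [| dbxquery connection="connectionB" query="select ip, contact from table contact where ip::inet<'10.2.0.0/16'"
    | stats count
    | where count > 50000
    | eval this = "group 2"]
| stats values(this) as message
| where isnotnull(message)

Saved as an alert that triggers when the number of results is greater than 0, this only produces output (and can send an email) when a group has crossed the 50k boundary.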




PickleRick
SplunkTrust
SplunkTrust

No. Your data is not "in Splunk". You're fetching the results from the remote data source on every single search.

I would ingest the data into Splunk's index and simply do stats-based "join" on that data.

LearningGuy
Builder

Hi @PickleRick 
The data is actually also available in Splunk in index=contact, but it is time-based and combined with other data, which makes the data set even larger. It is derived from the original DB, so it's better to obtain the data directly from the DB.
Either way, both cases (pulling data with dbxquery and using the index) face the same problem (see below).
We are aware that the permanent solution is to join the data in the backend, but for now, as a workaround, I need to pull the data using an SPL join subsearch. I only need to find a way to alert me if it exceeds 50k.

Thanks

Same problem 50k:

| search1
| join
    [search index=contact ip="10.0.0.0/16" | eval source="search2"]
| join
    [search index=contact ip="10.1.0.0/16" | eval source="search3"]
| join
    [search index=contact ip="10.2.0.0/16" | eval source="search4"]
| join
    [search index=contact ip="10.3.0.0/16" | eval source="search5"]

 


PickleRick
SplunkTrust
SplunkTrust

The difference is that if you have the data in Splunk, you probably don't have to either append or join any data sets. You can probably just search for all your events and manipulate them using "native" Splunk ways (i.e., using stats instead of join).
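A minimal sketch of that, assuming both tables were ingested into indexes (host_tbl is a hypothetical index name; index=contact is the one mentioned earlier in this thread):

(index=host_tbl) OR (index=contact)
| stats values(host) as host, values(contact) as contact by ip

Because this is a single search over indexed events, there is no subsearch involved, and neither the 10k nor the 50k limit applies.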

PickleRick
SplunkTrust
SplunkTrust

The general answer is no - you have no indication in the main search whatsoever that your subsearches (regardless of whether it is a "straight" subsearch, an append, or a join) were finalized before full completion due to hitting limits. They are simply silently finalized, the results yielded so far are returned, and that's it.

This is why using subsearches is tricky and why they're best avoided unless you can make strong assumptions about their execution time and the size of their result set.

Maybe, just maybe (I haven't checked), you could retroactively find that information in _internal, but to be honest, I doubt it. The search itself doesn't return such metadata.
