Reporting

How can I schedule a search to throttle repeat results but still supply any new results?

Guardian452
Explorer

I have a regularly scheduled search in Splunk that is producing a large volume of repeat events. I attempted to throttle these using the "once per result" option with per-result throttling fields. I have two fields in the throttling: user and dest. This was in an effort to reduce volume for repeat events, while still showing any event that has changed so it is not missed.

I noticed during my testing (new throttled rule running alongside the old un-throttled one) that one search returned two new unique results and one result that had appeared before. As a result, Splunk did not show ANY of the results in the throttled search, even the new hits, because one event was repeated. Is this how throttling is intended to function? Is there a way around this? I need the search to throttle repeat results but still supply any new results.


dperre_splunk
Splunk Employee

So this may work for your scenario and I will put the caveats at the end.

Step 1
Create a CSV file with three columns: src_ip, dest_ip, user. The column headers need to be named exactly as the fields appear in your search. Then create a lookup table from the file.

http://docs.splunk.com/Documentation/Splunk/6.5.0/PivotTutorial/AddlookupfilestoSplunk
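
If you'd rather seed the lookup from Splunk instead of uploading a CSV by hand, a one-time search along these lines should do it (untested sketch; the lookup name is a placeholder):

...your search | dedup src_ip dest_ip user | table src_ip, dest_ip, user | outputlookup yourlookupname.csv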

Step 2
Remove the throttling on your search.

Step 3
Change your search to the following
...your search NOT [| inputlookup yourlookupname.csv] | table src_ip, dest_ip, user, count | outputlookup yourlookupname.csv append=true

Caveats
This will show you only new results where the combination of source IP, dest IP, and user is not already present in your lookup table. Hopefully this helps you. I haven't tested the above search, but it should be good.
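
In case it helps, here is the same search written out with the subsearch limited to the three match fields, so the filter always keys on src_ip, dest_ip, and user regardless of what else ends up in the lookup (again untested, names are placeholders):

...your search NOT [| inputlookup yourlookupname.csv | fields src_ip, dest_ip, user]
| table src_ip, dest_ip, user, count
| outputlookup yourlookupname.csv append=true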

Guardian452
Explorer

Hi dperre,

I think this solution will work in this instance; we actually have another rule that functions off a lookup table. We are going to do some testing on our side to see if this works. Thanks for the help so far!


aaraneta_splunk
Splunk Employee

Hi @Guardian452 - Were you able to test out dperre's solution? Did it work? If yes, please don't forget to resolve this post by clicking on "Accept". If you still need more help, please provide a comment with some feedback. Thanks!


Guardian452
Explorer

Hi Aaraneta,

Unfortunately, I ran into some more pressing issues that put this on hold. I'll update once I've had a chance to test.


dperre_splunk
Splunk Employee

Hey Guardian,

Have a look at the config options below from savedsearches.conf. They may help you; if not, can you give pseudocode of your log data (or the log data itself) and detail what you expect to happen and what should not happen? That will help us answer your question.

counttype = number of events | number of hosts | number of sources | always
* Set the type of count for alerting.
* Used with relation and quantity (below).
* NOTE: If you specify "always," do not set relation or quantity (below).
* Defaults to always.

relation = greater than | less than | equal to | not equal to | drops by | rises by
* Specifies how to compare against counttype.
* Defaults to empty string.

quantity = <integer>
* Specifies a value for the counttype and relation, to determine the condition
under which an alert is triggered by a saved search.
* You can think of it as a sentence constructed like this: <counttype> <relation> <quantity>.
* For example, "number of events [is] greater than 10" sends an alert when the
count of events is larger than 10.
* For example, "number of events drops by 10%" sends an alert when the count of
events drops by 10%.
* Defaults to an empty string.

alert_condition = <search string>
* Contains a conditional search that is evaluated against the results of the
saved search. Alerts are triggered if the specified search yields a
non-empty search result list.
* NOTE: If you specify an alert_condition, do not set counttype, relation, or
quantity.
* Defaults to an empty string.
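
For example, a saved-search stanza using these settings might look roughly like this (just a sketch, not tested; the stanza name, search, and schedule values are placeholders):

[Throttled New Results Alert]
search = ...your search NOT [| inputlookup yourlookupname.csv] | table src_ip, dest_ip, user, count | outputlookup yourlookupname.csv append=true
cron_schedule = */5 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0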


Guardian452
Explorer

Hi dperre

Unfortunately I'm not familiar with the .conf files and how they relate to functionality in Splunk, at least not yet. I can't post screenshots, but here is the configuration:

Schedule
Schedule Type = Basic
Run Every = 5 minutes
Schedule Window = 0

Alert
Condition = if number of events is greater than 0
Alert Mode = once per result
Throttling = after triggering the alert, don't trigger it again for 4 hours; per-result throttling fields: user, dest

What it does:

First run - alert sends, throttling begins:
Username | Source IP | Destination IP | Count
John Doe | 1.1.1.1 | 2.2.2.2 | 33

Second run - no alert sent due to throttle:
Username | Source IP | Destination IP | Count
Jane Doe | 1.1.1.2 | 2.2.2.2 | 54
John Doe | 1.1.1.1 | 2.2.2.2 | 35
Result: everything lost to oblivion

What I need it to do:

First run - throttling begins:
Username | Source IP | Destination IP | Count
John Doe | 1.1.1.1 | 2.2.2.2 | 33

Second run - alert on the new unique event, repeat data removed:
Username | Source IP | Destination IP | Count
Jane Doe | 1.1.1.2 | 2.2.2.2 | 54
John Doe | 1.1.1.1 | 2.2.2.2 | 35
Result: alert contains only Jane Doe | 1.1.1.2 | 2.2.2.2 | 54

Also, if the destination value changes at all, I need it to still provide results. The idea is to reduce events where there is a consistent issue that is already being remediated, but to catch anything new in case the issue evolves or a new event occurs. Right now it is stripping everything it finds because one entry of data matched the previous event that was throttled. Hopefully that helps.
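
Based on your lookup suggestion, I'm thinking something along these lines might give the behavior above (untested; user and dest are my throttling fields, and the other field names and lookup name are just placeholders):

...my search NOT [| inputlookup alert_seen.csv | fields user, dest]
| table user, src_ip, dest, count
| outputlookup alert_seen.csv append=true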
