Hello Splunk experts,
I would like to simplify some complex SPL queries that search for certain events and apply tags to them according to various business rules based on both keyword searching and pattern matching. The events come from a ticketing system with many attributes, but I will simplify it thus:
| Ticket # | Ticket submit date/time | Assigned group | Details of ticket | Affected server |
I need to add a new field, called BUCKET, to calculate and store the type of ticket, based on a matrix like this:
| BUCKET | Groups | Keywords | Server patterns |
| AAA | g1, g2 | f1, f2 | abc* |
| BBB | g3, g4 | f3 | server123, xyz* |
For example, BUCKET should be set to AAA when a ticket event arrives for which group is g1 or g2 and the notes contain keywords f1 or f2 and the server name begins with abc.
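For reference, my matrix in CSV form looks roughly like this (the column names are from my simplified example, not my real data), with comma-separated lists inside quoted cells:

```
BUCKET,GROUP,NOTES,SERVER
AAA,"g1,g2","f1,f2",abc*
BBB,"g3,g4",f3,"server123,xyz*"
```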
My current queries are nasty: they have a pile of where/if/case operations to do the tagging, and for each bucket the query has to scan the same set of ticket events again. I would really like to move this business logic into a simple lookup-type CSV file so it can be updated without modifying any savedsearches or dashboards, and so the tagging can be done in a single pass for all buckets by my current scheduled savedsearch, which processes the raw ticket data ingested from the ticketing system via DBX.
In reality, I have a dozen different buckets, ~50 different groups, and a similar number of keywords. Only one bucket actually needs pattern matching on the server name, but it would be nice to support full pattern matching. Our operators ultimately have a trellis-type scorecard dashboard with a box for each bucket that shows the current number of tickets and changes color when the number exceeds certain thresholds. When an operator clicks a bucket's number, they are sent to a drilldown that shows a table of the ticket details, and that drilldown has dynamic hyperlinks directly into the ticketing system.
I am imagining this tagging could be done with a fancy lookup somehow. I already have the bucket matrix in a lookup CSV file. I have played with using the format command to generate the appropriate nested boolean AND/OR search logic with a foreach loop, but foreach doesn't seem to know how to iterate down a column in a CSV.
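For what it's worth, the closest I got was a subsearch plus format, which can at least build the OR'd group clause for a single bucket (a sketch only, using my simplified field names and a hypothetical bucket_matrix.csv):

```
index=tickets
    [ | inputlookup bucket_matrix.csv
      | where BUCKET="AAA"
      | eval GROUP=split(GROUP,",")
      | mvexpand GROUP
      | fields GROUP
      | format ]
```

Here format returns the subsearch rows as ( ( GROUP="g1" ) OR ( GROUP="g2" ) ), but I couldn't see how to extend this to all buckets, the keyword ANDs, and the wildcard server match in a single pass.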
Does my challenge seem doable? Can anyone share or point me to some example code to use multiple patterns stored in a lookup csv file?
FYI using Splunk Enterprise 8.1.9 and I DO NOT have any CLI access to either the SHC or indexers.
Thanks for any tips.
Have you tried defining your lookup like this?
NOTES and SERVER would have to be listed as WILDCARD in the lookup definition.
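Assuming a lookup definition named bucket_matrix over that file, with the match type set to WILDCARD(NOTES) and WILDCARD(SERVER) under the definition's advanced options, and with the keyword cells stored as *f1* so they match anywhere in the notes, the tagging collapses to a single lookup in your scheduled search, along these lines (your event field names will differ):

```
index=tickets
| lookup bucket_matrix GROUP AS assigned_group, NOTES AS details, SERVER AS server OUTPUT BUCKET
```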
Thanks for the suggestion. Your proposal would make the lookup simple: I could look up based on GROUP, NOTES, and SERVER, and the lookup would return the BUCKET.
But I would need a lookup entry for each permutation/combination of my groups/keywords/servers. That is, for one of my real cases with 7 groups and 3 keywords, that would be 7x3=21 rows in the lookup file, right?
The CSV file comes from an Excel spreadsheet ultimately. I suppose I could use an Excel function or VB macro to do the combination/permutation calculation. But would it be possible to have Splunk do this instead?
Use mvexpand twice, as in this run-anywhere example, which generates 7 group values and 3 keyword values and expands them into all 21 combinations:

| makeresults
| fields - _time
| eval group=mvrange(0,7)
| eval group=mvmap(group, mvindex(split("ABCDEFG",""), group))
| eval keyword=mvrange(0,3)
| eval keyword=mvmap(keyword, mvindex(split("AAA,BBB,CCC",","), keyword))
| mvexpand group
| mvexpand keyword
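If your CSV keeps the comma-separated lists from your matrix, the same trick can expand it into the one-row-per-combination file the lookup needs, so Excel never has to compute the permutations (a sketch; the mvmap/trim lines strip the spaces after the commas, and the file names are placeholders):

```
| inputlookup bucket_matrix.csv
| eval GROUP=split(GROUP,","), NOTES=split(NOTES,",")
| eval GROUP=mvmap(GROUP, trim(GROUP)), NOTES=mvmap(NOTES, trim(NOTES))
| mvexpand GROUP
| mvexpand NOTES
| outputlookup bucket_matrix_expanded.csv
```

You could run this as its own small scheduled search so the expanded lookup is rebuilt whenever the source CSV changes.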