You can use the regex command to filter by a regular expression, but it's slower and more cumbersome than just combining TERM() functions in a search predicate. As alternatives, you can extract and normalize a mac field at index time with a combination of transforms, or you can create a single-field data model that acts as a secondary time series index. For the latter, create a search-time field extraction using a transform with MV_ADD = true to capture strings that look like MAC addresses matching your 48-bit patterns (xx-xx-xx-xx-xx-xx, xx:xx:xx:xx:xx:xx, and xxxx.xxxx.xxxx). For example, using source type mac_addr:

# props.conf
[mac_addr]
REPORT-raw_mac = raw_mac

# transforms.conf
[raw_mac]
CLEAN_KEYS = 0
MV_ADD = 1
REGEX = (?<raw_mac>(?<![-.:])\b(?:[0-9A-Fa-f]{2}(?:(?(2)(?:\2)|([-:]?))[0-9A-Fa-f]{2}){5}|[0-9A-Fa-f]{4}(?:(\.)[0-9A-Fa-f]{4}){2})\b(?!\2|\3))

Create a subsequent calculated (eval) field that removes separators:

# props.conf
[mac_addr]
REPORT-raw_mac = raw_mac
EVAL-mac = mvdedup(mvmap(raw_mac, replace(raw_mac, "[-.:]", "")))

Then, define and accelerate a data model with a single dataset and field:

# datamodels.conf
[my_mac_datamodel]
acceleration = true
# 1 month, for example
acceleration.earliest_time = -1mon
acceleration.hunk.dfs_block_size = 0

# data/models/my_mac_datamodel.json
{
  "modelName": "my_mac_datamodel",
  "displayName": "my_mac_datamodel",
  "description": "",
  "objectSummary": {
    "Event-Based": 0,
    "Transaction-Based": 0,
    "Search-Based": 1
  },
  "objects": [
    {
      "objectName": "my_mac_dataset",
      "displayName": "my_mac_dataset",
      "parentName": "BaseSearch",
      "comment": "",
      "fields": [
        {
          "fieldName": "mac",
          "owner": "my_mac_dataset",
          "type": "string",
          "fieldSearch": "mac=*",
          "required": true,
          "multivalue": false,
          "hidden": false,
          "editable": true,
          "displayName": "mac",
          "comment": ""
        }
      ],
      "calculations": [],
      "constraints": [],
      "lineage": "my_mac_dataset",
      "baseSearch": "index=main sourcetype=mac_addr"
    }
  ],
  "objectNameList": [ "my_mac_dataset" ]
}

All of the above can be added to a search head using SplunkWeb settings in the following order:

1. Define the shared field transformation.
2. Define the shared field extraction.
3. Define the shared calculated field.
4. Define the shared data model.

Finally, use the datamodel command to optimize the search:

| datamodel summariesonly=t my_mac_datamodel my_mac_dataset flat
| search mac=12EA5F7211AB

Note that some undocumented conditions (source type renaming?) may force Splunk to disable the optimizations used by the datamodel command when distributing the search, in which case it will be no faster than a regular search of the extracted mac field. If it's working correctly, the search log should include an optimized search with a READ_SUMMARY directive as well as various ReadSummaryDirective log entries. The datamodel command with the flat argument returns the raw events and the undecorated mac field values, but no other extractions will be performed.
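The EVAL-mac normalization can be sanity-checked outside Splunk. This is a minimal Python sketch of the same separator-stripping logic (the sample addresses are hypothetical), confirming that all three 48-bit notations collapse to the same 12-digit value that the accelerated data model would store:

```python
import re

def normalize_mac(raw_mac: str) -> str:
    """Mimic EVAL-mac: strip the -, :, and . separator characters."""
    return re.sub(r"[-.:]", "", raw_mac)

# All three supported notations collapse to the same normalized value.
variants = ["12-EA-5F-72-11-AB", "12:EA:5F:72:11:AB", "12EA.5F72.11AB"]
normalized = {normalize_mac(v) for v in variants}
print(normalized)  # → {'12EA5F7211AB'}
```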
Good catch. I remembered that dedup was needlessly used in both "compound searches" but didn't notice that it was transferred to the "composite search". Indeed, it leaves us with only one of the two or more "joinable" events.
Be aware though that it's not _searching_ for a particular MAC address - it's extraction. So if you want to find a specific MAC, you'll first have to extract it with rex _from every event_ and then compare the extracted value with what you're looking for. It's not very efficient performance-wise.
The overall architecture is OK. There might be some issues with the configuration. If the delay is consistent and constant, it might be a problem with timestamps. If the data is arriving in batches, you're probably ingesting from already-rotated files.
Remove the dedup NB command! It is reducing your events to one event per NB, which is why you are only getting half your data!
That is very strange, because it suggests that you are creating many more fields in this search. Just for testing, replace the last stats command with

| head 1000
| stats values(*) as * by NB
| fillnull
| head 1
| transpose 0

Oh, and since you're doing stats values anyway, the dedup command is not needed. In fact, it can give you a performance penalty, because dedup is a centralized command while all preceding ones are distributed streaming commands, and stats can be distributed to some extent.
Thank you! While not the solution I was hoping for, this'll get the job done easily enough. I'd actually already considered using the rex command, but wasn't able to get my regex to look neat enough for me to be happy with it.
index=sky sourcetype=sky_trade_murex_timestamp OR sourcetype=mx_to_sky
``` Parse sky_trade_murex_timestamp events (note that trade_id is put directly into the NB field) ```
| rex field=_raw "trade_id=\"(?<NB>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
``` Parse mx_to_sky events ```
| rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
``` Reduce to just the fields of interest ```
| fields sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
``` "Join" events by NB using stats ```
| dedup NB
| stats values(*) as * by NB
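The final "join via stats values(*) by NB" step can be illustrated outside Splunk. This is a rough Python sketch (the field names and sample records are hypothetical) of grouping events by a key and collecting the distinct values each other field takes, which is how stats lets events from two source types land on one row:

```python
from collections import defaultdict

def stats_values_by(events, key):
    """Rough analogue of `| stats values(*) as * by <key>`:
    group events by key, collect distinct values of every other field."""
    grouped = defaultdict(lambda: defaultdict(set))
    for event in events:
        k = event[key]
        for field, value in event.items():
            if field != key:
                grouped[k][field].add(value)
    # Sort each value set for a stable, display-friendly result.
    return {k: {f: sorted(v) for f, v in fields.items()}
            for k, fields in grouped.items()}

events = [
    {"NB": "1001", "mx_status": "LIVE"},   # from sky_trade_murex_timestamp
    {"NB": "1001", "TRN_STATUS": "DONE"},  # from mx_to_sky
]
joined = stats_values_by(events, "NB")
print(joined["1001"])  # → {'mx_status': ['LIVE'], 'TRN_STATUS': ['DONE']}
```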
You don't need to list all the variations, just allow any non-hex character (or nothing) as the separator:

| rex "(?<mac>([0-9A-F]{2}[^0-9A-F]?){5}[0-9A-F]{2})"
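The same pattern can be tried quickly in Python against the three notations; note that Python spells the named group (?P<mac>...) where Splunk's rex uses (?<mac>...), and that, as written, the pattern only matches uppercase hex digits:

```python
import re

# Same pattern as the rex suggestion, in Python's named-group syntax.
MAC_RE = re.compile(r"(?P<mac>([0-9A-F]{2}[^0-9A-F]?){5}[0-9A-F]{2})")

samples = ["12-EA-5F-72-11-AB", "12:EA:5F:72:11:AB",
           "12EA.5F72.11AB", "12EA5F7211AB"]
for s in samples:
    # One pattern covers dash, colon, dot, and no separator at all.
    print(s, "->", MAC_RE.search(s).group("mac"))
```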
Please share the search you are using for these results (my mind-reading abilities have been dulled by over-indulgence!)
The desired result is to send alerts for all events > 0, and only once for an alert that has 0 for the first time.

Scheduled alerts:

Time:        0  5 10 15 20 25 30 35 40 45 50 55  0  5 10 15 20 25 30
# of Events: 3  4  0  0  0  8 15  2  0  5 55 66  0  0  0  0  0  8  9

Desired output, i.e. the alerts that I want to receive:

Time:        0  5 10 25 30 35 40 45 50 55  0 25 30
# of Events: 3  4  0  8 15  2  0  5 55 66  0  8  9

Is there a configuration that I need to activate in the alerts box? Or is there something else that I'm missing?
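The suppression rule implied by the tables above can be prototyped before wiring it into an alert. This is a sketch of just the filtering logic (not a built-in Splunk alert option): fire on every non-zero count, and fire on a zero only when the previous interval's count was non-zero, so repeated zeros are suppressed:

```python
def alerts_to_send(counts):
    """Fire on every non-zero count; fire on a zero only the first
    time it follows a non-zero count (suppress repeated zeros)."""
    fired = []
    previous = None
    for count in counts:
        if count > 0 or (previous is not None and previous > 0):
            fired.append(count)
        previous = count
    return fired

# The scheduled-alert counts from the tables above.
counts = [3, 4, 0, 0, 0, 8, 15, 2, 0, 5, 55, 66, 0, 0, 0, 0, 0, 8, 9]
print(alerts_to_send(counts))
# → [3, 4, 0, 8, 15, 2, 0, 5, 55, 66, 0, 8, 9]  (the desired output row)
```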
Wow, it worked!! Thank you so much for your kind help. I can't believe this is so simple and yet no other forum knew what the solution is. Just a quick clarification, instead of $click.value2$ it should be $trellis.value$ since I used trellis in this panel. I am writing this for future people with the same problem. Have a great day!
Oh, also, I don't get these in the results: TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO. I only see the first half of the fields command I included.
Thanks! Yep, I tried that for just an hour range. I got one single row of data with everything in there, it seems. I also couldn't scroll the page to confirm, as the page became unresponsive.
Hello All, I have set up a syslog server to collect all the network device logs. From the syslog server, I am forwarding these logs to the Splunk platform via a UF. The network component logs arriving in Splunk from the syslog server are delayed 14+ hours behind the actual logs; however, on the same host, system audit logs arrive in near-real time. I have 50+ network components to collect syslog from for security monitoring.

My current architecture: All network syslog ----> syslog server (UF installed) --> UF forwards logs to Splunk Cloud

Kindly suggest an alternative approach to get near-real-time network logs.
Hello Splunk SOAR family, hope each of you is doing well. Does anyone have some tips when it comes to installing and configuring the new version of Splunk SOAR?
If you always know what the upper max list size will be, then you can put foreach numbers 0 1 ... 999999 if you really need to, as nothing will happen for those outside the actual size of the MV. If you're doing this in a dashboard, you could technically create a token with the numbered steps and use the token in the foreach, e.g.

| foreach $steps_to_iterate$

where steps_to_iterate is calculated in a post-process search of the list, simply

| stats max(eval(mvcount(list))) as max_list
| eval r=mvjoin(mvrange(1, max_list + 1, 1), " ")

with this <done> clause in the dashboard search:

<done>
  <set token="steps_to_iterate">$result.max_list$</set>
</done>
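The mvrange/mvjoin step just builds a space-separated list of step numbers for the token. A quick Python equivalent (the max_list value here is a hypothetical example) shows the string it produces:

```python
def steps_token(max_list: int) -> str:
    """Equivalent of mvjoin(mvrange(1, max_list + 1, 1), " ")."""
    return " ".join(str(i) for i in range(1, max_list + 1))

print(steps_token(5))  # → "1 2 3 4 5"
```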
Try replacing the table command with fields
Since they are indexed as terms split by major and minor breakers, the best you can do is search for all the "minor terms" and use regex to match the particular sequence. Unfortunately, it won't work if the original sequence was not split at all or was split into larger chunks.
I'm gonna try this, thanks!! I think I got something like all the results in one row, and performance is very bad as there are many events. I did not manage to get a proper search result.