All Posts

Thank you both. I am thinking of using appendcols, but the results cannot be joined like this. Is there any other workaround? Both searches amass quite a huge number of events (>50k and >10k respectively), and I need to search over today, which would be a lot.
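For what it's worth, the usual way around both limits is to drop join/appendcols entirely and combine the two searches with stats on the shared key. A rough sketch against the sourcetypes and NB key that appear later in this thread - the semicolon rex is abbreviated from the full one shown there, and the values(...) list is just illustrative, not a tested search:

index=sky (sourcetype=sky_trade_murex_timestamp OR sourcetype=mx_to_sky)
| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "^(?<mx_nb>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+)"
| eval NB=coalesce(trade_id, mx_nb)
| stats values(mx_status) as mx_status, values(TRN_STATUS) as TRN_STATUS, values(NOMINAL) as NOMINAL, values(CURRENCY) as CURRENCY by NB

Because stats is a transforming command rather than a subsearch, the 10k/50k caps don't apply, so this scales to millions of events.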
That's a very smart way to do it! I'm going to need some time to dissect how that works, but I know off the bat that the biggest problem is going to be that the max size of the list is going to be variable each time this runs. Still, I think this is super clever and will keep this in mind.
My guy, that's a super smart solution! Thank you very much, I just tried this out and it works beautifully. I'm going to have to keep this kind of approach in mind as I go forward with this project. Very creative thinking!
If this is practical, you can do it with replace, i.e.

| eval elements = mvjoin(mv_to_search, "####")
| eval non_matches = replace(elements, search_value, "")
| eval non_matches = split(non_matches, "####")
| eval non_matches = mvfilter(isnotnull(non_matches))

which joins the elements with a known string, gets rid of all the matches, then splits again and removes nulls.
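Wired into the makeresults setup from the question, the whole thing looks like this. A sketch: the extra non_matches!="" guard is my own addition in case empty strings survive the split, and note that replace() treats search_value as a regex, which is fine for plain alphanumeric IDs like these:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval search_value = mvindex(base,0)
| eval mv_to_search = mvindex(base,1,mvcount(base)-1)
| eval elements = mvjoin(mv_to_search, "####")
| eval non_matches = replace(elements, search_value, "")
| eval non_matches = split(non_matches, "####")
| eval non_matches = mvfilter(isnotnull(non_matches) AND non_matches!="")
| table base, search_value, non_matches

non_matches should come out as A75AB, A75BA, A75DE.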
@BG_Splunk If you don't have mvmap, you won't have foreach mode=multivalue either, but you can use foreach like this

| foreach 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [ | eval e=mvindex(base, <<FIELD>>), filtered_mv=mvappend(filtered_mv, if(!match(e, search_value), e, null())) ]
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

where you just give incrementing numbers, which are templated as <<FIELD>> so you can mvindex using that. mvfilter can't handle more than one field, so the mvindex(base, 0) won't work inside the filter expression. I'm still using 7.3 in one environment, so the above works there; I used this technique before mvmap came along. It does require you to know the max size of the list in advance, but I haven't come up against any limits with it. It may also be possible to collapse the MV to a single value and then use some kind of rex/replace to get the matches out, but I've not tried that.
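For completeness, the same trick against the makeresults setup from the question (a sketch; 1-9 assumes the list never has more than ten elements, so extend the number list if it can):

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval search_value = mvindex(base,0)
| foreach 1 2 3 4 5 6 7 8 9 [ | eval e=mvindex(base, <<FIELD>>), filtered_mv=mvappend(filtered_mv, if(!match(e, search_value), e, null())) ]
| table base, search_value, filtered_mv

Out-of-range mvindex calls return null, and mvappend silently skips nulls, so filtered_mv should end up as A75AB, A75BA, A75DE.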
Thank you! Works well. I'm struggling to get the last date value to calculate the percentage deviation. Could you please help?
Hello, I just installed the Tenable WAS Add-On for Splunk in my test instance. When combing through the data, I noticed we were not able to see the State and VPR fields. Both of these fields are needed to stay consistent with our current Vulnerability Management Program. The State field is an absolute must to report on active and fixed vulnerabilities. Thanks in advance for any assistance provided.
I'm a bit stumped on this problem. Before I jump into the issue, there are a couple of restrictions: I'm working in an environment that is running an old version of Splunk which does not have access to the mvmap() function. I'm working on getting that updated, but until then I still need to get a solution to this problem figured out. This operation is not the only piece of logic I'm trying to accomplish here; assume there are other unnamed fields which are already sorted in a specific way which we do not want to disturb.

I'm attempting to filter out all elements of a list which match the first element, leaving only the elements which are not a match. Here is an example which does work:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| eval filtered_mv=mvfilter(!match(base, "A75CD"))
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

However, when I attempt to switch it out for something like the following, it does not work:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| eval filtered_mv=mvfilter(!match(base, mvindex(base,0)))
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

I have even attempted to solve it using a foreach command, but was also unsuccessful:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| foreach mode=multivalue base [eval filtered_mv = if('<<ITEM>>'!=mvindex(base,0), mvappend(filtered_mv,'<<ITEM>>'), filtered_mv)]
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

I'm open to any other ideas which might accomplish this better or more efficiently. I'm not sure where I'm going wrong with this one, or whether this idea is even possible.
Persistent queues are not available for file monitoring inputs.
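To expand on that: persistent queues are configured per input stanza with persistentQueueSize in inputs.conf, and they only apply to network, scripted, and FIFO inputs, not to [monitor://...] stanzas. A minimal sketch (the port and sizes here are made up):

# inputs.conf
[tcp://:5514]
index = main
sourcetype = syslog
# in-memory buffer in front of the on-disk queue
queueSize = 10MB
# spills to disk when the in-memory queue fills
persistentQueueSize = 100MB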
This looks like it would work. If you're not quite sure and you want to make sure it is correct before the data goes into the index, then you could set up a sandbox index and use crcSalt to stop the logs from being registered as indexed already. In terms of billing, you would be paying for all logs, sandboxed or not, but it would avoid the annoyance of deleting wrongly-indexed data in your production indexes. E.g.

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpReceive]
whitelist = \.log$|\.LOG$
time_before_close = 0
sourcetype = MSExchange:2019:SmtpReceive
queue = parsingQueue
index = sandbox
disabled = false
crcSalt = "testing"

(Then remove or modify the crcSalt when the logs look good in the sandbox and are ready for production.)
Indeed, I also cannot find a direct statement in the docs about this. I would assume that SOAR falls back to the community license, but I have never seen a SOAR license expire on a machine. You could submit this question as feedback at the bottom of the docs page for the SOAR license; then they may add this information in a future version.
To be precise, because it's often missed: the 50k limit for subsearches only applies to the join command. The general limit for subsearch results in other contexts is 10k by default.
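Both numbers are defaults in limits.conf, so you can confirm (or, with care, raise) them there. The relevant stanzas, as I understand the defaults:

# limits.conf
[subsearch]
# general cap on subsearch results (e.g. appendcols, return)
maxout = 10000

[join]
# cap on results the join subsearch may return
subsearch_maxout = 50000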
Hi, we are a SaaS customer, but use private synthetic agents to run synthetic tests within our organization. We updated to PSA 24.10 for Docker last week, and now we see many Chrome containers with 100% CPU utilization for quite some time. Checking "docker container logs" for the Chrome containers, we find this error message:

Failed trying to initialize chrome webdriver: Message: timeout: Timed out receiving message from renderer: 600.000
(Session info: chrome-headless-shell=127.0.6533.88)

Any ideas of what might be causing this issue? Thanks, Roberto
Subsearches are limited to 50,000 events, which could account for the missing matches. Try reducing the timeframes - do the matches appear then?
It's quite random, but here is one:

2024-12-27 23:05:09.917, system="murex", id="645437844", sky_id="645437844", trade_id="31791027", event_id="100038914", mx_status="live", operation="nooperation", action="fixing", tradebooking_sgp="2024/12/27 08:42:39.0000", eventtime_sgp="2024/12/27 08:42:33.6400", sky_to_mq_latency="-5.-360", portfolio_name="A ZZ AUD LTR", portfolio_entity="ANZBG MELB", trade_type="BasisSwap"

31791027;LIVE;17500000.00000000;AUD;IRD;IRS;;A ZZ AUD LTR;X_GCM_IRD_AUCNZ

The search:

index=sky sourcetype=sky_trade_murex_timestamp
| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
| eval trade_id = replace(trade_id, "\"", "")
| rename trade_id as NB
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| join type=left NB
    [ search index=sky sourcetype=mx_to_sky
      | rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
      | table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO ]
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO

For that NB I get all the columns, but TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO are empty.
Ahhhh... The "table" command is a transforming command. So you can't use it in either search. Use "fields" instead.
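In other words, something like this: a sketch of the search from the post above, with fields swapped in and the intermediate table (which was discarding the joined columns) removed; untested against your data:

index=sky sourcetype=sky_trade_murex_timestamp
| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
| rename trade_id as NB
| fields sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| join type=left NB
    [ search index=sky sourcetype=mx_to_sky
      | rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
      | fields TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO ]
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO

fields just selects columns without forcing a transformation, so the joined fields survive to the final table.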
If there are results for the trade_id/NB in both searches, then it is possible that the rex has not extracted the fields as you expect. Please share the two events which don't match