All Posts


Hello, I just installed the Tenable WAS Add-On for Splunk in my test instance. When combing through the data I noticed we were not able to see the State and VPR fields. Both of these fields are needed to stay consistent with our current Vulnerability Management Program. The State field is an absolute must for reporting on active and fixed vulnerabilities. Thanks in advance for any assistance provided.
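(If it helps to narrow things down, one way to tell whether the fields are missing from the raw events or simply not being extracted by the add-on is a quick field summary; the index and sourcetype names below are placeholders for whatever your Tenable WAS input actually uses:

index=YOUR_TENABLE_INDEX sourcetype=YOUR_TENABLE_WAS_SOURCETYPE
| fieldsummary
| search field IN ("state", "State", "vpr", "VPR")

If nothing comes back, check the raw events themselves for the values before blaming the extractions.)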
I'm a bit stumped on this problem. Before I jump into the issue, there are a couple of restrictions: I'm working in an environment that is running an old version of Splunk which does not have access to the mvmap() function. I'm working on getting that updated, but until then I still need to find a solution to this problem. This operation is not the only piece of logic I'm trying to accomplish here, so assume there are other unnamed fields which are already sorted in a specific way that we do not want to disturb.

I'm attempting to filter out all elements of a multivalue list which match the first element, leaving only the elements which are not a match. Here is an example which does work:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| eval filtered_mv=mvfilter(!match(base, "A75CD"))
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

However, when I swap the hard-coded string for something like the following, it does not work:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| eval filtered_mv=mvfilter(!match(base, mvindex(base,0)))
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

I have even attempted to solve it using a foreach command, but was also unsuccessful:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| foreach mode=multivalue base [eval filtered_mv = if('<<ITEM>>'!=mvindex(base,0), mvappend(filtered_mv,'<<ITEM>>'), filtered_mv)]
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

I'm open to any other ideas which might accomplish this better or more efficiently. I'm not sure where I'm going wrong with this one, or whether this idea is even possible.
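(One thing worth checking in the foreach attempt: foreach substitutes <<ITEM>> textually, so '<<ITEM>>' in single quotes becomes a field-name reference after substitution - 'A75CD' then looks up a nonexistent field named A75CD and evaluates to null. Wrapping the token in double quotes should make it a string literal instead. A minimal, untested sketch along those lines, assuming your version supports mode=multivalue:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval search_value=mvindex(base,0)
| foreach mode=multivalue base [eval filtered_mv = if("<<ITEM>>"!=search_value, mvappend(filtered_mv,"<<ITEM>>"), filtered_mv)]
| table base, search_value, filtered_mv
)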
Persistent queues are not available for file monitoring inputs.
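(For reference, persistent queues are configured on network, scripted, and FIFO inputs in inputs.conf; a minimal sketch with illustrative values:

[tcp://:5514]
sourcetype = my_syslog
queueSize = 1MB
persistentQueueSize = 100MB

A [monitor://...] stanza takes no persistentQueueSize setting - the monitored file itself already acts as the buffer.)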
This looks like it would work. If you're not quite sure and you want to make sure it is correct before the data goes into the index, then you could set up a sandbox index and use crcSalt to stop the logs from being registered as already indexed. In terms of billing, you would be paying for all logs, sandboxed or not, but it would avoid the annoyance of deleting wrongly-indexed data in your production indexes. E.g.

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpReceive]
whitelist=\.log$|\.LOG$
time_before_close = 0
sourcetype=MSExchange:2019:SmtpReceive
queue=parsingQueue
index=sandbox
disabled=false
crcSalt = "testing"

(Then remove or modify the crcSalt when the logs look good in the sandbox and are ready for production.)
Indeed, I also cannot find a direct statement in the docs about this. I would assume that SOAR falls back to the community license, but I have never seen a SOAR license expire on a machine. You could submit this question as feedback at the bottom of the docs page for the SOAR license; they may then add this information in a future version of the docs.
n/a
To be precise, because it's often missed: the 50k limit for subsearches only applies to the join command. The general limit for subsearch results in other uses is 10k by default.
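(These defaults live in limits.conf; the values below are the shipped defaults in recent versions, but check your own installation:

[subsearch]
# maximum number of results a subsearch returns in general use
maxout = 10000

[join]
# maximum number of results pulled from the join subsearch
subsearch_maxout = 50000
)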
Hi, we are a SaaS customer, but we use private synthetic agents to run synthetic tests within our organization. We updated to PSA 24.10 for Docker last week, and now we see many Chrome containers sitting at 100% CPU utilization for quite some time. Checking "docker container logs" for the Chrome containers, we find this error message:

Failed trying to initialize chrome webdriver: Message: timeout: Timed out receiving message from renderer: 600.000
(Session info: chrome-headless-shell=127.0.6533.88)

Any ideas of what might be causing this issue? Thanks, Roberto
Subsearches are limited to 50,000 events, which could account for the missing matches. Try reducing the timeframes - do the matches appear then?
It's quite random, but here is one:

2024-12-27 23:05:09.917, system="murex", id="645437844", sky_id="645437844", trade_id="31791027", event_id="100038914", mx_status="live", operation="nooperation", action="fixing", tradebooking_sgp="2024/12/27 08:42:39.0000", eventtime_sgp="2024/12/27 08:42:33.6400", sky_to_mq_latency="-5.-360", portfolio_name="A ZZ AUD LTR", portfolio_entity="ANZBG MELB", trade_type="BasisSwap"

31791027;LIVE;17500000.00000000;AUD;IRD;IRS;;A ZZ AUD LTR;X_GCM_IRD_AUCNZ

index=sky sourcetype=sky_trade_murex_timestamp
| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
| eval trade_id = replace(trade_id, "\"", "")
| rename trade_id as NB
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| join type=left NB
    [ search index=sky sourcetype=mx_to_sky
    | rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
    | table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO]
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO

For that NB I get all the left-hand columns populated, but TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO come back empty.
Ahhhh... The "table" command is a transforming command, so you can't use it in either search. Use "fields" instead.
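(A trimmed sketch of what that would look like with the searches above - each multisearch branch ends in fields rather than table:

| multisearch
    [ search index=sky sourcetype=sky_trade_murex_timestamp
    | rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
    | rename trade_id as NB
    | fields sky_id, NB, event_id, mx_status ]
    [ search index=sky sourcetype=mx_to_sky
    | rex field=_raw "(?<NB>\d+);(?<TRN_STATUS>[^;]+)"
    | fields NB, TRN_STATUS ]
| stats values(*) as * by NB

fields is a distributable streaming command, so multisearch accepts it where table fails.)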
If there are results for the trade_id/NB in both searches, then it is possible that the rex has not extracted the fields as you expect. Please share the two events which don't match
Thank you for your quick responses. I have increased the frozen time period on my indexer machine, and I am able to set it according to my requirement.
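(For anyone finding this later, the retention setting in question is frozenTimePeriodInSecs in indexes.conf; a sketch with an illustrative index name and a one-year retention:

[my_index]
# events older than this many seconds roll to frozen (deleted unless coldToFrozenDir is set)
frozenTimePeriodInSecs = 31536000

Note that buckets can still freeze earlier than this if the index hits its maxTotalDataSizeMB limit.)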
Thanks!! I get this error though:

Error in 'multisearch' command: Multisearch subsearches might only contain purely streaming operations (subsearch 1 contains a non-streaming command).
It doesn't seem to include any "start" or "end" time - it's just one timestamp. So you need to define the logic behind your request more precisely first.
No problem. But why is it that some rows are populated and some are not then? I.e. some match and some don't. I renamed trade_id as NB and then did join type=left NB, so it should join the two together - but why does it not work for some rows or columns even though the NB clearly matches in both searches?
Assuming both your joined searches produce proper results (it's up to you to check that - we don't know), the easiest and most straightforward way to avoid join altogether is to use multisearch to run both searches in parallel and then stats the results together. This way you're not prone to hitting join's limits, and since your searches are streaming ones you can use multisearch rather than being bound by the subsearch limits you might hit when using append.

| multisearch
    [ search index=sky sourcetype=sky_trade_murex_timestamp
    | rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
    | rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
    | rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
    | rex field=_raw "event_id=\"(?<event_id>\d+)\""
    | rex field=_raw "operation=\"(?<operation>[^\"]+)\""
    | rex field=_raw "action=\"(?<action>[^\"]+)\""
    | rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
    | rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
    | rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
    | rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
    | rename trade_id as NB
    | dedup NB
    | eval NB = tostring(trim(NB))
    | table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type ]
    [ search index=sky sourcetype=mx_to_sky
    | rex field=_raw "(?<NB>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
    | eval NB = tostring(trim(NB))
    | table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO ]
| stats values(*) as * by NB
Positive lookahead doesn't perform well in Splunk and, generally, is unnecessary.
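(A hypothetical example of dropping a lookahead: these two rex calls capture the same NB value, since the literal semicolon simply gets consumed instead of asserted:

| rex field=_raw "(?<NB>[^;]+)(?=;)"
| rex field=_raw "(?<NB>[^;]+);"
)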