Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Ah yes, I removed those and the search continued. However, it was so laggy with that many events that I could not get a proper search result without it hanging.
To summarize and finish this post: after installing Splunk Enterprise v9.3.2, the initial problems were solved. After that, it appeared that Splunk Secure Gateway no longer connected (Splunk Mobile). I had contact with Splunk about this. After some back and forth copying backup directories, running repair, changing app.conf files, etc., SSG became connected again. The main reason I wanted to upgrade was some new features that were mentioned as available in Dashboard Studio. Unfortunately, the present Dashboard Studio (version 1.15.3 under v9.3.2) has some other issues, e.g.: the background color shows black instead of transparent (on mobile/tablet), text at font size 22+ renders unreadably small in Markdown (on mobile/tablet), and using a title color does not work as I expected either. N.b. the new tab feature was also already noted not to work in this version, and in the next version v9.4.0 as well, as I understand it. So I close this post here and will create a new post next year about Dashboard Studio, in the meantime waiting for an update to v9.4.x. @all, have a great new year 2025! AshleyP
I've been attempting to see if it's possible to search for a term while ignoring all minor breakers that may or may not be in it. For example, in my case I'm trying to search for the MAC address 12:EA:5F:72:11:AB, but I'd also like to find all instances of 12-EA-5F-72-11-AB or 12EA.5F72.11AB, or even just 12EA5F7211AB, without needing to deliberately specify each of these variations. I thought I could do it using TERM(), but so far I haven't had any luck, and after reading the docs I can see I may have misunderstood that command. Is there any way to do this simply?
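One possible sketch (not from the original thread; the index name is a placeholder): because minor breakers split the address into separate index-time terms, a single TERM() cannot cover every variant, but a regex post-filter with optional separators can match all four forms, at the cost of scanning more events:

```
index=network_logs
| regex _raw="(?i)12[:.\-]?EA[:.\-]?5F[:.\-]?72[:.\-]?11[:.\-]?AB"
```

To keep the scanned event set small, it can help to add any other restricting terms (sourcetype, host, etc.) before the regex.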
Hi @saiKiran1570 , I suppose that you're trying to download some apps from Splunkbase and weren't able to download them. Usually there are two possible issues: you're using the wrong account, or there's an issue with the network connection. In the first case, which account did you use to access Splunkbase? Remember that you must use your Splunk account, not the account on the Splunk system. In the second case, check the firewall you are passing through. Ciao. Giuseppe
As has been said here many times, it is best to avoid using join - this is a classic case of why join should be avoided. Try something like this:

index=sky sourcetype=sky_trade_murex_timestamp OR sourcetype=mx_to_sky
``` Parse sky_trade_murex_timestamp events (note that trade_id is put directly into the NB field) ```
| rex field=_raw "trade_id=\"(?<NB>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
``` Parse mx_to_sky events ```
| rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
``` Reduce to just the fields of interest ```
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
``` "Join" events by NB using stats ```
| stats values(*) as * by NB
Hello, I have installed Splunk on AlmaLinux following a course and am facing this error. Thanks.
Hi all, this is what I get for about 4,003,400 of 4,003,400 events matched in a day:

| join type=left NB
    [ search index=sky sourcetype=mx_to_sky
    | rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
    | table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO]
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO

And this below is what I get for about 1,065,810 events in a day:

index=sky sourcetype=mx_to_sky
| rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
| table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
Thank you both. I need this for >50k/>10k events. I am thinking of using appendcols, but the searches cannot be joined like this. Is there any other workaround? Both searches amass quite a huge number of events (>50k and >10k), and I need to search over today, which would be a lot.
That's a very smart way to do it! I'm going to need some time to dissect how that works, but I know off the bat that the biggest problem is going to be that the max size of the list will vary each time this runs. Still, I think this is super clever and will keep it in mind.
My guy, that's a super smart solution! Thank you very much, I just tried this out and it works beautifully. I'm going to have to keep this kind of approach in mind as I go forward with this project. Very creative thinking!
If this is practical, you can do it with replace, i.e.

| eval elements = mvjoin(mv_to_search, "####")
| eval non_matches = replace(elements, search_value, "")
| eval non_matches = split(non_matches, "####")
| eval non_matches = mvfilter(isnotnull(non_matches))

which joins the elements with a known string, gets rid of all the matches, then splits again and removes nulls.
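Pieced together with the sample values from the original question, the whole pipeline might look like this (a sketch; note that replace() treats search_value as a regex, which is fine for plain alphanumeric IDs like these):

```
| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE", ",")
| eval mv_to_search = mvindex(base, 1, mvcount(base)-1)
| eval search_value = mvindex(base, 0)
| eval elements = mvjoin(mv_to_search, "####")
| eval non_matches = replace(elements, search_value, "")
| eval non_matches = split(non_matches, "####")
| eval non_matches = mvfilter(isnotnull(non_matches))
| table base, search_value, non_matches
```

If the list values could ever contain regex metacharacters, the search_value would need escaping before being passed to replace().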
@BG_Splunk If you don't have mvmap, you won't have foreach mode=multivalue either, but you can use foreach like this:

| foreach 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
    [ | eval e=mvindex(base, <<FIELD>>), filtered_mv=mvappend(filtered_mv, if(!match(e, search_value), e, null())) ]
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

where you just give incrementing numbers, which are templated as <<FIELD>>, so you can mvindex using them. mvfilter can't handle more than one field, so the mvindex(base, 0) won't work inside the filter expression. I'm still using 7.3 in one environment, so the above works there; I used this technique before mvmap came along. It does require you to know the max size of the list in advance, but it doesn't have limits I have come up against. It may also be possible to collapse the MV to a single value and then use some kind of rex/replace to get the matches out, but I've not tried that.
Thank you! It works well. I'm struggling to get the last date value to calculate the percentage deviation. Could you please help?
Hello, I just installed the Tenable WAS Add-On for Splunk in my test instance. When combing through the data, I noticed we were not able to see the State and VPR fields. Both of these fields are needed to stay consistent with our current Vulnerability Management Program; the State field is an absolute must to report on active and fixed vulnerabilities. Thanks in advance for any assistance provided.
I'm a bit stumped on this problem. Before I jump into the issue, there are a couple of restrictions:

I'm working in an environment that is running an old version of Splunk which does not have access to the mvmap() function. I'm working on getting that updated, but until then I still need to find a solution to this problem.
This operation is not the only piece of logic I'm trying to accomplish here. Assume there are other unnamed fields which are already sorted in a specific way which we do not want to disturb.

I'm attempting to filter out all elements of a list which match the first element, leaving only the elements which are not a match. Here is an example which does work:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| eval filtered_mv=mvfilter(!match(base, "A75CD"))
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

However, when I attempt to switch it out for something like the following, it does not work:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| eval filtered_mv=mvfilter(!match(base, mvindex(base,0)))
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

I have even attempted to solve it using a foreach command, but was also unsuccessful:

| makeresults
| eval base = split("A75CD,A75AB,A75CD,A75BA,A75DE",",")
| eval mv_to_search=mvindex(base,1,mvcount(base)-1)
| eval search_value=mvindex(base,0)
| eval COMMENT = "ABOVE IS ALL SETUP, BELOW IS ATTEMPTED SOLUTIONS"
| foreach mode=multivalue base
    [ eval filtered_mv = if('<<ITEM>>'!=mvindex(base,0), mvappend(filtered_mv,'<<ITEM>>'), filtered_mv) ]
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

I'm open to any other ideas which might accomplish this better or more efficiently. I'm not sure where I'm going wrong with this one, or whether this idea is even possible.
Persistent queues are not available for file monitoring inputs.
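For context, persistentQueueSize is a setting for network, scripted, and FIFO input stanzas in inputs.conf, e.g. (port and sizes here are illustrative, not from the original post):

```
# inputs.conf - persistent queue on a network input (illustrative values)
[tcp://9997]
queueSize = 1MB
persistentQueueSize = 50MB

# A [monitor://...] stanza does not take persistentQueueSize;
# file monitoring already resumes from the tracked file offsets.
```

The monitor input effectively gets durability for free, since the source files themselves remain on disk until Splunk has read them.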
This looks like it would work. If you're not quite sure and you want to make sure it is correct before the data goes into the index, you could set up a sandbox index and use crcSalt to stop the logs from being registered as already indexed. In terms of billing, you would be paying for all logs, sandboxed or not, but it would avoid the annoyance of deleting wrongly-indexed data in your production indexes. E.g.

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpReceive]
whitelist=\.log$|\.LOG$
time_before_close = 0
sourcetype=MSExchange:2019:SmtpReceive
queue=parsingQueue
index=sandbox
disabled=false
crcSalt = "testing"

(Then remove or modify the crcSalt when the logs look good in the sandbox and are ready for production.)
Indeed, I also cannot find a direct statement in the docs about this. I would assume that SOAR falls back to the community license, but I have never seen a SOAR license expire on a machine. You could submit this question as feedback at the bottom of the docs page for the SOAR license; then they may add this information in a future version.