All Posts

Thanks! Yep, I tried that for just an hour range. I got one single row of data with everything in there, it seems. I also couldn't scroll the page to confirm, as the page became unresponsive.
Hello All, I have set up a syslog server to collect all the network device logs; from the syslog server, via a UF, I am forwarding these logs to the Splunk platform. The network component logs from the syslog server are arriving in Splunk 14+ hours later than the actual logs; however, on the same host the system audit logs are in near-real time. I have 50+ network components to collect syslog from for security monitoring.
My current architecture: All network syslog ----> syslog server (UF installed) --> UF forwards the logs to Splunk Cloud
Kindly suggest an alternative approach to get near-real-time network logs.
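As a first step in narrowing down where the delay comes in, a sketch along these lines (the index and sourcetype names here are placeholders) shows how far behind each host's data is being indexed:

index=network sourcetype=syslog earliest=-4h
``` compare the extracted event timestamp with the time Splunk actually indexed the event ```
| eval lag_minutes = round((_indextime - _time) / 60, 1)
| stats min(lag_minutes) as min_lag, max(lag_minutes) as max_lag, avg(lag_minutes) as avg_lag by host

If the lag is roughly constant across the delayed hosts, timestamp/timezone parsing on the syslog sourcetype is worth checking as well as the forwarding path.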
Hello Splunk SOAR family, hope each of you is doing well. Does anyone have some tips when it comes to installing and configuring the new version of Splunk SOAR?
If you always know what the upper max list size will be, then you can put foreach numbers 0 1...999999 if you really need to, as nothing will happen for those outside the actual size of the MV. If you're doing this in a dashboard, you could technically create a token with the numbered steps and use the token in the foreach, e.g.

| foreach $steps_to_iterate$

where steps_to_iterate is calculated in a post-process search of the list, simply

| stats max(eval(mvcount(list))) as max_list
| eval r=mvjoin(mvrange(1, max_list + 1, 1), " ")

with this <done> clause in the dashboard search

<done>
  <set token="steps_to_iterate">$result.r$</set>
</done>
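A minimal Simple XML sketch of how those pieces could fit together (the base search, field names, and panel query here are hypothetical, not from the thread):

<dashboard>
  <label>foreach via token (sketch)</label>
  <!-- hypothetical base search producing a multivalue field called list -->
  <search id="base">
    <query>index=my_index sourcetype=my_sourcetype | stats list(step) as list by txn_id</query>
  </search>
  <!-- post-process search: compute the range and set the token when done -->
  <search base="base">
    <query>| stats max(eval(mvcount(list))) as max_list | eval r=mvjoin(mvrange(1, max_list + 1, 1), " ")</query>
    <done>
      <set token="steps_to_iterate">$result.r$</set>
    </done>
  </search>
  <row>
    <panel>
      <table>
        <search base="base">
          <query><![CDATA[
| foreach $steps_to_iterate$
    [ | eval out=mvappend(out, mvindex(list, <<FIELD>> - 1)) ]
``` the range is 1-based, so subtract 1 for mvindex ```
| table txn_id, out
          ]]></query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

Because the panel query references $steps_to_iterate$, it only runs once the post-process search has finished and set the token.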
Try replacing the table command with fields
Since they are indexed as terms split by major and minor breakers, the best you can do is search for all the "minor terms" and use regex to match the particular sequence. Unfortunately it won't work if the original sequence was not split at all or split into larger chunks.
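For instance, a rough sketch of that idea using the MAC address from this thread (the index name is a placeholder):

index=your_index 12 EA 5F 72 11 AB
``` the bare terms only retrieve events where the address was split on minor breakers such as : - . ```
| regex _raw="(?i)12[:.-]?EA[:.-]?5F[:.-]?72[:.-]?11[:.-]?AB"

The regex then makes sure the fragments actually occur as one contiguous address rather than scattered through the event.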
I'm gonna try this, thanks!! I think I got something like all the results in one row, and performance is very bad as there are many events; I did not manage to get a proper search result.
Ah yes, I removed those and the search continued. But it was so laggy, as there were many events, that I did not get a proper search result without it hanging.
Summarizing and finishing this post: after installing Splunk Ent v9.3.2 the initial problems were solved. After that it appeared that Splunk Secure Gateway did not connect anymore (Splunk Mobile). Had contact with Splunk about this; after some back-and-forth copying of backup directories, running repair, changing app.conf files etc., SSG became connected again. The main reason I wanted to upgrade was some new features that were mentioned as being available in Dashboard Studio. Unfortunately, the present Dashboard Studio (version 1.15.3 under v9.3.2) has some other issues, e.g. the background color shows black instead of transparent (on mobile/tablet), big font-size 22+ text is unreadably small in Markdown (on mobile/tablet), and using title color does not work as I expected either... NB: besides that, the new tab feature was already noted not to work in this version, including the next version v9.4.0, so I understand. So here I close this post and will create a new post next year about Dashboard Studio. In the meantime, waiting for an update to v9.4.X. @all, have a great new year 2025! AshleyP
I've been attempting to see if it's possible to search for a term while ignoring all minor breakers that may or may not be in it. For example, in my case, I'm trying to search for a MAC address 12:EA:5F:72:11:AB, but I'd also like to find all instances of 12-EA-5F-72-11-AB or 12EA.5F72.11AB or even just 12EA5F7211AB, without needing to deliberately specify each of these variations. I thought I could do it using TERM(), but so far I haven't had any luck, and after reading the docs, I can see I may have misunderstood that command. Is there any way to do this simply?
Hi @saiKiran1570, I suppose that you're trying to download some apps from Splunkbase and you weren't able to download them. Usually there are two possible issues: you're using a wrong account, or there's an issue with the network connection. In the first case, which account did you use to access Splunkbase? Remember that you must use your Splunk account, not the account on the Splunk system. In the second case, check the firewall you are passing through. Ciao. Giuseppe
As has been said here many times, it is best to avoid using join - this is a classic case of why join should be avoided. Try something like this:

index=sky sourcetype=sky_trade_murex_timestamp OR sourcetype=mx_to_sky
``` Parse sky_trade_murex_timestamp events (note that trade_id is put directly into the NB field) ```
| rex field=_raw "trade_id=\"(?<NB>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
``` Parse mx_to_sky events ```
| rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
``` Reduce to just the fields of interest ```
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
``` "Join" events by NB using stats ```
| stats values(*) as * by NB
Hello, I have installed Splunk on AlmaLinux following a course and am facing this error. Thanks
Hi all, with this I get about 4,003,400 of 4,003,400 events matched in a day:

| join type=left NB
    [ search index=sky sourcetype=mx_to_sky
      | rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
      | table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO]
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO

And this below gets about 1,065,810 events in a day:

index=sky sourcetype=mx_to_sky
| rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
| table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
Thank you both. I need it for >50k/>10k events. I am thinking of using appendcols, but they are not able to join like this. Any other workaround? Both searches amass quite a huge number of events (>50k and >10k) and I need to search for today, which would be a lot.
That's a very smart way to do it! I'm going to need some time to dissect how that works, but I know off the bat that the biggest problem is going to be that the max size of the list is going to be variable each time this runs. Still, I think this is super clever and will keep this in mind.
My guy, that's a super smart solution! Thank you very much, I just tried this out and it works beautifully. I'm going to have to keep this kind of approach in mind as I go forward with this project. Very creative thinking!
If this is practical, you can do it with replace, i.e.

| eval elements = mvjoin(mv_to_search, "####")
| eval non_matches = replace(elements, search_value, ""),
       non_matches = split(non_matches, "####"),
       non_matches = mvfilter(isnotnull(non_matches))

which joins the elements with a known string, gets rid of all the matches, then splits again and removes nulls.
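A run-anywhere illustration with made-up values (the final mvfilter here drops the emptied slots by checking their length):

| makeresults
| eval mv_to_search = split("alpha beta gamma beta delta", " "), search_value = "beta"
| eval elements = mvjoin(mv_to_search, "####")
| eval non_matches = replace(elements, search_value, "")
| eval non_matches = split(non_matches, "####")
| eval non_matches = mvfilter(len(non_matches) > 0)
| table mv_to_search, search_value, non_matches

This should leave alpha, gamma and delta in non_matches.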
@BG_Splunk If you don't have mvmap, you won't have foreach mode=multivalue, but you can use foreach like this

| foreach 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
    [ | eval e=mvindex(base, <<FIELD>>),
           filtered_mv=mvappend(filtered_mv, if(!match(e, search_value), e, null())) ]
| eval var_type = typeof(search_value)
| table base, mv_to_search, search_value, filtered_mv, var_type

where you just give incrementing numbers which are templated as <<FIELD>>, so you can mvindex using them. mvfilter can't handle more than one field, so the mvindex(base, 0) won't work inside the filter expression. I'm still using 7.3 in one environment, so the above works in that, and I used this technique before mvmap came along. It does require you to know the max size of the list in advance, but it doesn't have limits I have come up against. It may also be possible to collapse the MV to a single value and then use some kind of rex/replace to get the matches out, but I've not tried that.
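A run-anywhere illustration with made-up values (only four elements, so only four index numbers are templated):

| makeresults
| eval base = split("red green blue green", " "), search_value = "green"
| foreach 0 1 2 3
    [ | eval e = mvindex(base, <<FIELD>>),
           filtered_mv = mvappend(filtered_mv, if(!match(e, search_value), e, null())) ]
| table base, search_value, filtered_mv

filtered_mv should come out as red and blue.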
Thank you! Works well. I'm struggling to get the last date value to calculate the percentage deviation. Could you please help?