All Posts

Thanks for the reply, but I want the changes to show on the dashboard whenever I modify the script and restart Splunk, rather than having to clear the browser cache every time to see them. Is it possible to do that?
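One workaround that is often suggested, as a minimal sketch (assumes Splunk Web on its default port; the host is a placeholder): Splunk's _bump endpoint increments the static-asset version, so browsers re-fetch changed JavaScript without a manual cache clear.

    # after editing the dashboard JavaScript, open this page in the browser
    # and click "Bump version", then reload the dashboard
    http://<splunk_host>:8000/en-US/_bump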
Hello @Ryan.Paredez, It seems that the solution I posted does not apply to most cases. I have faced the same issue twice, and the solution was to allow some policies related to the CORS error on the F5 load balancer. It worked in one case, but I now have about three cases, one of which F5 has not been able to resolve so far. I will update the post once I resolve it. Regards, Khalid
This works fine:

<condition match="$row.Services$ == &quot;s3-bucket&quot;">
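For context, a minimal sketch of how such a match condition might sit inside a Simple XML drilldown (the token name service_tok is illustrative, not from the original post):

    <drilldown>
      <condition match="$row.Services$ == &quot;s3-bucket&quot;">
        <set token="service_tok">$row.Services$</set>
      </condition>
      <condition>
        <unset token="service_tok"></unset>
      </condition>
    </drilldown>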
Hello all, I see that SOAR sends an email every time a container is re-assigned. I want to stop SOAR from sending that email, but under Administration -> Email Settings I can only change the template of the email. Is there a way to stop it? Thank you in advance.
Hi @lukasmecir , I'm not sure that an index will continue to run correctly after only part of its buckets is copied: in theory it should, but I'd prefer the approach I described. Also, in this way you separate the data so that both indexers hold more or less the same amount, and new data will then be distributed between them. You could try it, waiting to delete the copied buckets from the first indexer until after a successful test. Ciao. Giuseppe
I don't think you can do this at ingest time (but happy to be proved wrong!), but you can parse and split out the elements of the data collection into separate events at search time using spath and mvexpand:

| spath data{} output=data
| mvexpand data
| table data
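A self-contained sketch you can paste into a search bar to see the behaviour (the sample _raw is made up):

    | makeresults
    | eval _raw="{\"data\": [{\"id\": 1}, {\"id\": 2}, {\"id\": 3}]}"
    | spath path=data{} output=data
    | mvexpand data
    | table data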
Hi, thank you for the reply. I understand what you mean, but my goal is not only to have the same amount of data on both instances, but the same amount of data on both instances per index, so that the load is shared between the instances when data in any one index is searched. I see I did not mention this in the original post; sorry, my fault.
Apart from what @gcusello says, rex will only extract contiguous characters into a field, so what you are asking for is not possible in a single rex command.
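A minimal sketch of the usual workaround, assuming the date lives in a field called your_field: extract two contiguous captures, then concatenate them with eval.

    | rex field=your_field "(?<month>\d{2})/\d{2}/(?<year>\d{4})"
    | eval date=month."/".year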
Hi @lukasmecir , don't copy part of the buckets; instead, divide your indexes between the two indexers so that each holds more or less the same amount of data, moving entire indexes rather than parts of them. Remember to add the same indexes.conf to the new indexer. Ciao. Giuseppe
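As a sketch, the stanza on the new indexer just has to match the original definition (my_index and the paths below are illustrative defaults, not from the thread):

    [my_index]
    homePath   = $SPLUNK_DB/my_index/db
    coldPath   = $SPLUNK_DB/my_index/colddb
    thawedPath = $SPLUNK_DB/my_index/thaweddb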
For good reasons, that is the way browsers work!
Hello everyone, I need your support to parse the sample JSON below. What I want is:

1. Only the fields from "activity_type" to "user_email"
2. Remove the first lines before "activity_type" and the last lines after "user_email"
3. The line should break at "activity_type"
4. TIME_PREFIX = event_time

I added the props below, but it doesn't work for removing the lines or for TIME_PREFIX:

[sample_json]
BREAK_ONLY_BEFORE=\"activity_type":\s.+,
CHARSET=UTF-8
SHOULD_LINEMERGE=true
disabled=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX=event_time
SEDCMD-remove=s/^\{/g

Sample data:

{
  "status": 0,
  "message": "Request completed successfully",
  "data": [
    {
      "activity_type": "login",
      "associated_items": null,
      "changed_values": null,
      "event_time": 1733907370512,
      "id": "XcDutJMBNXQ_Xwfn2wgV",
      "ip_address": "x.x.x.x",
      "is_impersonated_user": false,
      "item": {
        "user_email": "xyz@example.com"
      },
      "message": "User xyz@example.com logged in",
      "object_id": 0,
      "object_name": "",
      "object_type": "session",
      "source": "",
      "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.1.1 Safari/605.1.15",
      "user_email": "xyz@example.com"
    },
    {
      "activity_type": "export",
      "associated_items": null,
      "changed_values": null,
      "event_time": 1732634960475,
      "id": "bd0XaZMBH5U9RA7biWrq",
      "ip_address": "",
      "is_impersonated_user": false,
      "item": null,
      "message": "Incident Detail Report generated successfully",
      "object_id": 0,
      "object_name": "",
      "object_type": "breach incident",
      "source": "",
      "user_agent": "",
      "user_email": ""
    },
    {
      "activity_type": "logout",
      "associated_items": null,
      "changed_values": null,
      "event_time": 1732625563087,
      "id": "jaGHaJMB-qVJqBPy_3IG",
      "ip_address": "87.200.106.98",
      "is_impersonated_user": false,
      "item": {
        "user_email": "xyz@example.com"
      },
      "message": "User xyz@example.com logged out",
      "object_id": 0,
      "object_name": "",
      "object_type": "session",
      "source": "",
      "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
      "user_email": "xyz@example.com"
    }
  ],
  "count": 33830,
  "meta_info": {
    "total_rows": 33830,
    "row_count": 200,
    "pagination": {
      "pagination_id": ""
    }
  }
}
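A hedged props.conf sketch that may get closer (untested against this exact feed; the line-breaking regex and the SEDCMDs are assumptions you would need to tune to the real whitespace in the stream):

    [sample_json]
    SHOULD_LINEMERGE = false
    # attempt to break before each record object; the captured separator is discarded
    LINE_BREAKER = ([,\[]\s*)(?=\{\s*"activity_type")
    # event_time is epoch milliseconds
    TIME_PREFIX = "event_time":\s*
    TIME_FORMAT = %s%3N
    MAX_TIMESTAMP_LOOKAHEAD = 20
    # illustrative attempts to strip the wrapper before the first record
    # and the trailer after the last one
    SEDCMD-strip_head = s/^\{\s*"status".*"data":\s*\[\s*//
    SEDCMD-strip_tail = s/\]\s*,\s*"count".*$//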
Hello, I have an all-in-one Splunk instance with data already indexed. Now I want to add a new indexer (not clustered, clean installation) and move part of the indexed data to it, so that both instances hold roughly the same amount of data. My idea of the process is as follows (see the sketch after this list):

1. Stop the all-in-one instance
2. Create the new index(es) on the new indexer
3. Stop the new indexer
4. Copy (what is best - rsync?) part of the buckets in the given index(es) from the all-in-one instance to the new indexer
5. Start the new indexer and the all-in-one instance
6. Configure outputs.conf on the forwarders - add the new indexer
7. Add the new indexer as a search peer to the all-in-one instance

Would it work, or have I missed something? Thank you for your help. Best regards, Lukas Mecir
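A minimal sketch of the copy step, assuming default paths and that both instances are stopped (my_index and the bucket glob are placeholders):

    # illustrative: copy a subset of warm/cold buckets for one index
    # to the same index directory on the new indexer
    rsync -av /opt/splunk/var/lib/splunk/my_index/db/db_17336* \
          newindexer:/opt/splunk/var/lib/splunk/my_index/db/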
Hi @SN1 , if this is a field (your_field), the easiest way is to use the eval functions, not a regex:

| eval date=strftime(strptime(your_field,"%m/%d/%Y"),"%m/%Y")

Ciao. Giuseppe
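A self-contained way to try it (the sample value is taken from the question):

    | makeresults
    | eval your_field="12/11/2024"
    | eval date=strftime(strptime(your_field,"%m/%d/%Y"),"%m/%Y")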
Hi, I want to extract from this date 12/11/2024; the result should be 12/2024.
Hi, I have a dashboard which uses JavaScript. Whenever I make changes to the script and restart Splunk, I am not able to see those changes; the only way I can see them is by clearing my browser cache.
While I wholeheartedly agree with the "don't fiddle with structured data using regexes" point, it's worth noting that spath is not feasible for search-time extractions on which you'd want to base your searches. spath has to parse the whole event (or a whole given field) as JSON and has no notion of fields before that, so you cannot write a condition like spath(whatever)=some_value. In other words, while for "first-order" JSON you can do the normal initial search filtering based on field=value conditions, that won't work with more deeply embedded JSON structures (regardless of whether they are included as strings within an "outer" JSON or are simply part of a syslog-headered event). Splunk still has to process all events from the preceding pipeline and push them through spath, and only then can you filter the data further. One possible way around it is to limit the data entering the pipeline by searching for the literal value as a term in the initial search. That won't help much with fields of low cardinality or with terms common across many fields (as in this case - true/false is not a very selective search term), but when you're searching for a fairly unique term it can mean loads of speedup.
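A sketch of that prefiltering pattern, with made-up index, sourcetype, and field names (TERM matches the literal indexed token before any JSON parsing happens):

    index=web sourcetype=vendor:json TERM(s3-bucket)
    | spath path=items{}.type output=item_type
    | search item_type="s3-bucket"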
So you want to match any "string" in any event against any other event and count the number of matches? Apart from this being extremely vague, what is it that you are attempting to determine? What are the boundary conditions for deciding which strings to try to match? And if an event has more than one "string" that matches strings in other events, do you double-count the events?
Hi Splunkers, Per this documentation - https://docs.splunk.com/Documentation/Splunk/latest/DashStudio/tokens - setting a default value is done by navigating to the Interactions section of the Configuration panel. This is simple with the given example, with the token set as $method$:

"tokens": {
    "default": {
        "method": {
            "value": "GET"
        }
    }
}

Would anyone be able to advise how I can set the default tokens of a dashboard (created using Dashboard Studio) when the value of the panel points to a data source whose query depends on another data source's results?

Panel A:
Data Source: 'Alpha status'
'Alpha status' query: | eval status=$Beta status:result._statusNumber$

e.g. I need to set a default token value for $Beta status:result._statusNumber$. Thanks in advance for the response.
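One thing that might be worth trying, as a hedged sketch only: nesting the data source's name as the token namespace under the dashboard definition's defaults, mirroring the shape of the documented default-namespace example (the key names below are assumptions inferred from the question, untested):

    "defaults": {
        "tokens": {
            "Beta status": {
                "result._statusNumber": {
                    "value": "0"
                }
            }
        }
    }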
@ITWhisperer Cool, your proposal does exactly what I was looking for. Thank you. 
Hi @anooshac , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated