All Posts

Hi all, I need to create an alert that will be triggered when a latency threshold is breached for a sustained 30 minutes. I am doing my research on how to incorporate streamstats into my query, and so far I have come up with this:

index="x" source="y" EndtoEnd
| rex "(?<e2e_p>\d+)ms" ``` extracts the numerical latency value into e2e_p ```
| where isnotnull(e2e_p)
| streamstats time_window=30m avg(e2e_p) as avg_e2e_p
| where avg_e2e_p > 500

The condition doesn't happen often, but I'll work with the team that supports the app to simulate the condition once the query is finalized. I have never used streamstats before, but that's what has come up in my search for a means to incorporate a sliding window into an SPL query. Thank you in advance for taking the time to help with this.
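A note on semantics: a 30-minute average above 500 is not quite the same as latency staying above 500 for a full 30 minutes. A minimal, untested sketch of the stricter "sustained breach" check, keeping the placeholder index/source/field names from the post and using min() over the time window instead of avg():

index="x" source="y" EndtoEnd
| rex "(?<e2e_p>\d+)ms"
| eval e2e_p = tonumber(e2e_p) ``` force a numeric comparison ```
| streamstats time_window=30m min(e2e_p) as min_e2e_p
| where min_e2e_p > 500 ``` every sample in the window exceeded the threshold ```

streamstats with time_window expects events ordered by _time (ascending or descending), which a plain event search already provides.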
Hi @Real_captain , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @PickleRick, it is still not working.
@LowAnt - Splunk does not assign a unique ID to each event, so your best bet is the following:

Specify index and sourcetype
Specify earliest and latest (both the same time as your event time)
Specify some keyword unique to that event, to avoid duplicates if other events have the exact same time

Example:

index=<your_index> sourcetype=<your_sourcetype> earliest="10/5/2024:20:00:00" latest="10/5/2024:20:00:00" "this-keyword-specific-to-this-event"

Once you run this search, Splunk will generate a URL, and you should be able to use that URL.

I hope this helps!!! Kindly upvote if it does!!!
Before we know it, this post is going to be able to vote
Happy 13th Birthday, posting #107735! Why, with everything going on, I missed your birthday again -- but you've changed your name! I know you're now "admin-security message-id 3439", and I'll try to remember, but if you want to go back to being #107735, I support you there as well. You be you! Wow. 13 years. I know we said 12 would be your year to leverage technology so well established that it's gone from 'new' to 'common' to 'unmentioned' to 'forgotten' to 'new again', but it's never too late to get on board the 'trivially easy to configure and use' train. My, but it's easy. So much of this trail has been blazed before you by so, so many others - even in your own field - that it's almost a slam-dunk. Let's get you back on your feet, #10773--uh, 3439, and get you climbing up that hill back to the mainstream. Don't worry about the bright lights of the projects whizzing by on that mainstream. Your parents will be worried sick at where we found you, and they miss you and they just want to see you succeed. This is your year, 3439. Party with proper authentication and easy authorization like it's 1999!
Hello, my team has a search that uses a field called regex, containing a load of different regex expressions to match against a field called string, to identify key words we are looking for. Example:

| eval regex=split(regex, "|")
| mvexpand regex
| where match(string, regex)

The regex field contains 80+ different regex patterns to match on certain key words. The mvexpand splits one event into 80+ different events, just to potentially match on one field. Because of this mvexpand, we hit mvexpand's memory limitations, causing events to be dropped.

I'm trying to find out whether it is possible to match the patterns in the regex field against the string field without having to use mvexpand to break it apart. Previously recommended solutions, such as the following, did not work:

| eval true = mvmap(regex, if(match(regex, query),regex,"0"))
| eval true = mvfilter(true="0")
| where ISNOTNULL(true)
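One mvexpand-free approach that may be worth trying: mvmap drops iterations that return null(), so it can filter the multivalue regex field in place. A minimal, untested sketch using the field names from the post, with the match() arguments in subject-then-pattern order (and note that splitting on "|" will break any pattern that itself contains a pipe):

| eval regex = split(regex, "|")
| eval hits = mvmap(regex, if(match(string, regex), regex, null())) ``` keep only the patterns that match; null() iterations are dropped ```
| where isnotnull(hits)

The final where keeps only events where at least one pattern matched, and hits also tells you which patterns fired.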
Try looking for the messages below: "rolling restart finished" and "My GUID is"
See my other comment. You will need another input method. I suggest you google Azure Functions "unzip" and see if they can just use Azure to do that. Otherwise you would need custom code or a scripted input to pull in the zip and pass it to something like `unarchive_cmd`:

unarchive_cmd = <string>
* Only called if invalid_cause is set to "archive".
* This field is only valid on [source::<source>] stanzas.
* <string> specifies the shell command to run to extract an archived source.
* Must be a shell command that takes input on stdin and produces output on stdout.
* Use _auto for Splunk software's automatic handling of archive files (tar, tar.gz, tgz, tbz, tbz2, zip).
* This setting applies at input time, when data is first read by Splunk software, such as on a forwarder that has configured inputs acquiring the data.
* Default: empty string

Azure Functions is likely the more scalable/flexible option, but if this is not a large amount of data, you might be able to hack together HF(s) to do this. Please accept my original comment as the solution to your post and review the options I mentioned in it. Also be sure to check out internal Azure SME channels to learn more, or holler at Pro Serv.
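If the HF route wins, a props.conf stanza pair along these lines might be a starting point. This is an untested sketch based only on the spec text above (invalid_cause belongs to a sourcetype stanza, unarchive_cmd to a source stanza); the sourcetype name and path are placeholders, and since zip is in the _auto list, a plain monitor input may handle it with no config at all:

# props.conf on the heavy forwarder -- placeholder names throughout
# hand files of this (placeholder) sourcetype to the archive processor
[azure:blob:zip]
invalid_cause = archive

# let Splunk's built-in archive handling unpack the zips
[source::/data/azure-drops/*.zip]
unarchive_cmd = _auto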
By the way, is there any workaround to unzip it? It would be really appreciated!
Yeah, maybe investigate Azure Functions: pick up the blob, unzip it, post to a new blob, or send it to HEC. Or use a HF and investigate a custom input to feed `unarchive_cmd`. Make sure to accept the answer to the original post if it was helpful. Thanks!
Thanks, understood! I will have to somehow unzip it first...
Question for Omega Core Audit: will I (as the app developer) get notified?
+1 on @ITWhisperer's question. Is this all just a huge chunk of data ingested as a single event, in fact containing multiple separate intertwined "streams" of data, or are those separate events?
Hi @Real_captain, when you use a base search, you have to call it in the search tag:

<search base="your_base_search">

Ciao. Giuseppe
Sorry, but this is untrue. There is no change in bucket structure between warm and cold. It's just that the bucket is moved from one storage location to another. I suppose that from a technical point of view buckets could go from hot "directly" to cold, but it would be a bit more complicated from the Splunk internals point of view. When a hot bucket is rolled to warm, its indexing ends and it gets renamed (which is an atomic operation) within the same storage unit. Additionally, hot and warm buckets are rolled on a different basis. So technically, a bucket could roll from hot to warm because of the hot bucket lifecycle parameters (especially maxDataSize) and immediately after (on the next housekeeping thread pass) get rolled to cold because it exceeds the maximum warm bucket count.
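For reference, the parameters driving those rolls live in indexes.conf. An illustrative sketch, not a recommendation; the index name is a placeholder and the values shown are the usual defaults:

# indexes.conf -- illustrative sketch only
[myindex]
# hot and warm buckets live under homePath; cold can be cheaper storage
homePath   = $SPLUNK_DB/myindex/db
coldPath   = $SPLUNK_DB/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb
# a hot bucket rolls to warm when it reaches this size
maxDataSize = auto
# the oldest warm bucket rolls to cold once this count is exceeded
maxWarmDBCount = 300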
At least as of the time of this comment, the docs say "No":

The Azure Storage Blob modular input for Splunk Add-on for Microsoft Cloud Services does not support the ingestion of gzip files. Only plaintext files are supported.
Hi all, I am trying to build a query to monitor my indexer rolling restart. I would like to know how much time it has taken, when it started, and when it ended. I can only see when it started, but cannot find messages indicating when it completed:

INFO CMMaster [3340464 TcpChannelThread] - Starting a rolling restart of the peers.
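A rough, untested sketch of the duration search, pairing the start message above with the "rolling restart finished" phrase suggested in a reply elsewhere on this page; the exact completion message varies by Splunk version, so verify both strings in your own _internal data first:

index=_internal sourcetype=splunkd component=CMMaster
    ("Starting a rolling restart" OR "rolling restart finished")
| stats min(_time) as start_time max(_time) as end_time
| eval duration = tostring(end_time - start_time, "duration") ``` human-readable HH:MM:SS ```
| convert ctime(start_time) ctime(end_time)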
Hi @gcusello, after using the correct fieldForLabel, I am still not able to fetch the result in the dropdown using the dynamic query. The field POH_Group1 is produced by the base query at the top of the dashboard, with <search id="base">:

<input type="dropdown" token="POH_token" depends="$POH_input$" searchWhenChanged="true">
  <label>POH_Group</label>
  <fieldForLabel>POH_Group1</fieldForLabel>
  <fieldForValue>POH_Group1</fieldForValue>
  <choice value="*">All</choice>
  <default>*</default>
  <search>
    <query>| dedup POH_Group1 | table POH_Group1</query>
    <earliest>-30d@d</earliest>
    <latest>now</latest>
  </search>
</input>
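Following @gcusello's earlier advice on this page: the dropdown's inner search never references the base search, so the post-process query has nothing to run against. A sketch of the corrected input, assuming the base search actually emits POH_Group1 (post-process searches inherit the base search's time range, so the earliest/latest tags are dropped):

<input type="dropdown" token="POH_token" depends="$POH_input$" searchWhenChanged="true">
  <label>POH_Group</label>
  <fieldForLabel>POH_Group1</fieldForLabel>
  <fieldForValue>POH_Group1</fieldForValue>
  <choice value="*">All</choice>
  <default>*</default>
  <search base="base">
    <query>| dedup POH_Group1 | table POH_Group1</query>
  </search>
</input>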
Well, it's actually _not_ disaster recovery. It's an HA solution with some assumed level of fault tolerance. @jiaminyun There is no such thing as "0 RPO" unless you set some boundary conditions and prepare accordingly. An HA infrastructure like the ones in the SVAs can protect you from some disasters but will not protect you from others (like misconfiguration or deliberate data destruction). If you're OK with that - be my guest. Just be aware of it. RTO actually depends on your equipment, storage, and the resources (including personnel) you can allocate to the recovery task.