All Posts

@LowAnt - Splunk does not assign a unique ID to each event, so your best bet is the following:
- Specify index and sourcetype
- Specify earliest and latest (both the same time as your event time)
- Specify some keyword unique to that event, to avoid matching duplicate events that share the exact same time

Example:

index=<your_index> sourcetype=<your_sourcetype> earliest="10/5/2024:20:00:00" latest="10/5/2024:20:00:00" "this-keyword-specific-to-this-event"

Once you run this search, Splunk will generate a URL, and you should be able to use that URL to reference the event.

I hope this helps!!! Kindly upvote if it does!!!
Before we know it, this post is going to be able to vote
Happy 13th Birthday, posting #107735! Why, with everything going on, I missed your birthday again -- but you've changed your name! I know you're now "admin-security message-id 3439", and I'll try to remember, but if you want to go back to being #107735, I support you there as well. You be you! Wow. 13 years. I know we said 12 would be your year to leverage technology so well established that it's gone from 'new' to 'common' to 'unmentioned' to 'forgotten' to 'new again', but it's never too late to get on board the 'trivially easy to configure and use' train. My, but it's easy. So much of this trail has been blazed before you by so, so many others - even in your own field - that it's almost a slam-dunk. Let's get you back on your feet, #10773--uh, 3439, and get you climbing up that hill back to the mainstream. Don't worry about the bright lights of the projects whizzing by in that mainstream. Your parents are worried sick at where we found you; they miss you and they just want to see you succeed. This is your year, 3439. Party with proper authentication and easy authorization like it's 1999!
Hello, my team has a search that uses a field called regex, containing a load of different regular expressions to match against a field called string, to identify key words we are looking for. Example:

| eval regex=split(regex, "|")
| mvexpand regex
| where match(string, regex)

The regex field contains 80+ different regular expressions to match on certain key words. The mvexpand causes one event to be split into 80+ different events, just to potentially match on one field. Because of this mvexpand, we hit mvexpand's memory limitations, causing events to be dropped.

I'm trying to see whether it is possible to match the expressions in the "regex" field against the string field without having to use mvexpand to break it apart. Previously recommended solutions did not work, such as:

| eval true = mvmap(regex, if(match(regex, query),regex,"0"))
| eval true = mvfilter(true="0")
| where ISNOTNULL(true)
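For what it's worth, one pattern that sometimes avoids mvexpand entirely (a sketch only, untested against your data) is to keep regex as a multivalue field and use mvmap to test each expression against string, keeping only the expressions that match; note the argument order is match(<subject>, <regex>):

| eval regex=split(regex, "|")
| eval matched=mvmap(regex, if(match(string, regex), regex, null()))
| where isnotnull(matched)

Since mvmap drops null results, matched ends up as the subset of expressions that hit (or null if none did), so no per-regex fan-out of events is needed.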
Try to look out for the matches below: "rolling restart finished" and "My GUID is"
See my other comment. You will need another input method. I suggest you google Azure Functions "unzip" and see if you can just use Azure to do that. Otherwise you would need custom code or a scripted input to pull in the zip and pass it to something like the `unarchive_cmd`:

unarchive_cmd = <string>
* Only called if invalid_cause is set to "archive".
* This field is only valid on [source::<source>] stanzas.
* <string> specifies the shell command to run to extract an archived source.
* Must be a shell command that takes input on stdin and produces output on stdout.
* Use _auto for Splunk software's automatic handling of archive files (tar, tar.gz, tgz, tbz, tbz2, zip).
* This setting applies at input time, when data is first read by Splunk software, such as on a forwarder that has configured inputs acquiring the data.
* Default: empty string

Azure Functions is likely the more scalable/flexible option, but if this is not a large amount of data, you might be able to hack together HF(s) to do this. Please accept my original comment as the solution to your post and review the options I mentioned in my comment. Also be sure to check out internal Azure SME channels to learn more, or holler at Pro Serv.
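For reference, a minimal sketch of what wiring this up on an HF might look like, loosely modeled on Splunk's built-in archive handling; the staging directory and sourcetype name below are hypothetical and untested, so adjust to your environment:

# inputs.conf (hypothetical directory where the downloaded zips land)
[monitor:///opt/staging/azure_blobs]
disabled = false

# props.conf
[source::/opt/staging/azure_blobs/*.zip]
# hypothetical sourcetype name; unarchive_cmd is only valid on source:: stanzas
sourcetype = preprocess-azure-zip
NO_BINARY_CHECK = true
unarchive_cmd = _auto

[preprocess-azure-zip]
# invalid_cause = archive routes the file to the archive processor,
# which then calls unarchive_cmd
invalid_cause = archive

You would still need something (custom code, an Azure Function, azcopy, etc.) to get the zips from the blob container onto the HF's filesystem in the first place.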
By the way, is there any workaround to unzip it? Will be really appreciated! 
Yeah, maybe investigate Azure Functions: pick up the zip, unzip it, post to a new blob, or send to HEC. Or use an HF and investigate a custom input to feed the `unarchive_cmd`. Make sure to accept the answer to the original post if it was helpful. Thanks!
Thanks, understood! I will have to somehow unzip it first...
Question for Omega Core Audit: Will I (as the app developer) get notified?
+1 on @ITWhisperer 's question. Is this all just a huge chunk of data ingested as a single event and containing in fact multiple separate intertwined "streams" of data or are those separate events?
Hi @Real_captain , when you use a base search, you have to call it in the search tag: <search base="your_base_search"> Ciao. Giuseppe
Sorry, but this is untrue. There is no change in bucket structure between warm and cold. It's just that the bucket is moved from one storage location to another. I suppose from a technical point of view the buckets could go from hot "directly" to cold, but it would be a bit more complicated from the Splunk internals point of view. When a hot bucket is rolled to warm, its indexing ends and it gets renamed (which is an atomic operation) within the same storage unit. Additionally, hot and warm buckets are rolled on a different basis. So technically, a bucket could roll from hot to warm because of hot bucket lifecycle parameters (especially maxDataSize) and immediately after (on the next housekeeping thread pass) get rolled to cold because it reached the maximum warm bucket count.
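To make the parameters concrete, here is a minimal indexes.conf sketch of the settings that drive these rolls; the index name and values are illustrative only, not recommendations:

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# hot buckets roll to warm when they reach maxDataSize (among other triggers);
# hot and warm buckets both live under homePath, so the roll is a rename, not a copy
maxDataSize = auto
# warm buckets roll to cold when the warm count exceeds maxWarmDBCount;
# the bucket is then moved from homePath to coldPath
maxWarmDBCount = 300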
At least as of the time of this comment, the docs say "No": The Azure Storage Blob modular input for the Splunk Add-on for Microsoft Cloud Services does not support the ingestion of gzip files. Only plaintext files are supported.
Hi all, I am trying to build a query to monitor my indexer rolling restart. I would like to know how much time it took, when it started, and when it ended. I can only see when it started, but I cannot see messages indicating when it completed.

INFO CMMaster [3340464 TcpChannelThread] - Starting a rolling restart of the peers.
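If it helps, a rough SPL sketch for measuring the restart window from the cluster manager's internal logs; the exact completion message text can differ between Splunk versions, so treat the search terms as placeholders to adjust (for example the "rolling restart finished" phrase mentioned in another reply):

index=_internal sourcetype=splunkd component=CMMaster ("Starting a rolling restart" OR "rolling restart finished")
| stats earliest(_time) AS start latest(_time) AS end
| eval duration_minutes=round((end-start)/60,2)
| eval start=strftime(start,"%F %T"), end=strftime(end,"%F %T")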
Hi @gcusello,
After using the correct fieldForLabel, I am not able to fetch the result in the dropdown using the dynamic query.
Query: the field POH_Group1 is fetched by the base search at the top of the dashboard, defined with <search id="base">.

<input type="dropdown" token="POH_token" depends="$POH_input$" searchWhenChanged="true">
<label>POH_Group</label>
<fieldForLabel>POH_Group1</fieldForLabel>
<fieldForValue>POH_Group1</fieldForValue>
<choice value="*">All</choice>
<default>*</default>
<search>
<query>| dedup POH_Group1 | table POH_Group1</query>
<earliest>-30d@d</earliest>
<latest>now</latest>
</search>
</input>
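As suggested in @gcusello's reply, a sketch of the input rewritten to actually reference the base search (field and token names copied from the post above; untested):

<input type="dropdown" token="POH_token" depends="$POH_input$" searchWhenChanged="true">
  <label>POH_Group</label>
  <fieldForLabel>POH_Group1</fieldForLabel>
  <fieldForValue>POH_Group1</fieldForValue>
  <choice value="*">All</choice>
  <default>*</default>
  <!-- post-process search referencing the id of the base search -->
  <search base="base">
    <query>| dedup POH_Group1 | table POH_Group1</query>
  </search>
</input>

Note that a post-process search inherits its time range from the base search, which is why the earliest/latest elements are dropped here.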
Well, it's actually _not_ disaster recovery. It's an HA solution with some assumed level of fault tolerance. @jiaminyun There is no such thing as "0 RPO" unless you make some boundary assumptions and prepare accordingly. An HA infrastructure like the ones from the SVAs can protect you in case of some disasters but will not protect you from others (like misconfiguration or deliberate data destruction). If you're OK with that - be my guest. Just be aware of it. RTO actually depends on your equipment, storage, and the resources (including personnel) you can allocate to the recovery task.
OK. Let's back up a little. You have static CSV files which you uploaded to Splunk. Now you're trying to get them from Splunk with PowerBI. Why the extra step? As far as I remember, you can just set up an ODBC connection to a CSV file (in case PowerBI can't handle a raw file on its own). What's the point of dropping Splunk in as the middleman?
In Splunk's indexing process, buckets move through different stages: hot, warm, cold, and eventually frozen. The movement from hot to cold is a managed and intentional process due to the roles these buckets play and their interaction with Splunk's underlying data architecture.

1. Hot Buckets: Actively Written
Hot buckets are where Splunk is actively writing data. This makes them volatile because they are still receiving real-time events and may be indexed (compressed and organized) as part of ongoing ingestion.
Technical Limitation: Because of their active state, they can't directly roll into cold storage, which is designed for more static, read-only data.

2. Warm Buckets: Transition to Stability
Once a hot bucket reaches a certain size or the active indexing period ends, it is closed and then rolled into a warm bucket. This transition is important because warm buckets are no longer being written to, making them stable but still optimized for searching.
Reason for the Warm Stage: The warm stage allows for efficient search and retrieval of data without impacting the performance of the write operations happening in hot buckets.

Why Hot Can't Skip Directly to Cold
Active Writing: Hot buckets are being actively written to. If they were to move directly to cold, it would require Splunk to freeze and finalize the data too early, disrupting ongoing indexing operations.
Search and Performance Impact: Splunk optimizes warm buckets for active searches and allows warm data to remain in a searchable, performant state. Cold buckets, being long-term storage, are not indexed for real-time or high-performance search, making it impractical to move hot data directly into cold without this intermediary warm phase.

Conclusion: The design of the bucket lifecycle (hot → warm → cold) in Splunk ensures that data remains both accessible and efficiently stored based on its usage pattern. The warm bucket stage is crucial because it marks the end of write operations while maintaining search performance before the data is pushed into more permanent, slower storage in cold buckets. Skipping this stage could cause inefficiencies and performance issues in both data ingestion and retrieval processes.
The Splunk Validated Architectures manual should help.  You may be interested in the M3/M13 or M4/M14 models.  See https://docs.splunk.com/Documentation/SVA/current/Architectures/M4M14