All Posts


Thanks a lot for your answer. You are right; however, I feel this doesn't scale up well when you have too many values, since it will generate "number of Web.Site unique values" * 2 columns... which pretty much breaks the browser. Additionally, now that I have columns with this pattern: WebSiteExample1.event_count_1week_before, ... etc., how could I write an expression like the one I had, so that I only keep the values where the difference between this week and the previous one is more than X percent? I don't see an easy way to compare chunks of X minutes of data against the same time but 1 week ago. Timewrap seemed perfect for that use case, but it looks like it's designed only for graphical representation.
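One way to sketch that comparison without a pair of columns per site is to keep the results long and shift last week's events forward by seven days so both weeks share the same time slot; the index, the site field (standing in for Web.Site), the 10-minute span and the 20% threshold below are all placeholders, not values from this thread:
index=web earliest=-14d latest=now
| bin _time span=10m
| eval week=if(_time >= relative_time(now(), "-7d"), "this_week", "last_week")
| eval slot=if(week="last_week", _time + 604800, _time)
| stats count(eval(week="this_week")) AS this_week count(eval(week="last_week")) AS last_week BY slot site
| eval pct_change=round((this_week - last_week) / last_week * 100, 1)
| where abs(pct_change) > 20
Slots with no events in the previous week produce a null pct_change and drop out of the where clause, so handle that case explicitly if it matters.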
Something is not quite right here:
- Your regex string is missing some question marks (although they do appear to be in your error message!)
- Your error message says you have hit a limit with max_match, but your rex command doesn't appear to be using max_match, and your sample log is a single line, so even if you were using max_match there would only be one set of results.
Please can you clarify / expand your question?
Let Splunk manage the buckets for you: https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Automatearchiving
Currently your script doesn't seem to have any filter on cold and would appear to copy all files. Is this a one-time execution? Run from cron, it would seem to copy the same data over and over again, creating a storage issue.
Benefits of letting Splunk manage it:
1) Frozen reduces storage, as only the raw data in compressed format is kept; the IDX files are stripped away
2) Easy methods to reintroduce frozen data back to thawed
3) Expands automatically if you have IDX clustering
4) You're not duplicating storage of cold-qualified time spans in a manual frozen folder
5) Folder management and creation is automatic
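For reference, a minimal indexes.conf sketch of what letting Splunk manage the roll to frozen can look like; the index name, retention period and archive path are placeholders, not values from this thread:
# indexes.conf on the indexers (illustrative values only)
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# freeze buckets once they are older than ~90 days
frozenTimePeriodInSecs = 7776000
# Splunk copies the stripped-down bucket here before removing it from cold
coldToFrozenDir = /archive/splunk/my_index
With coldToFrozenDir set, Splunk itself decides when a bucket leaves cold, so no external cron job is needed.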
Hey All, Can anybody help me with optimization of this rex:
| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>.*?),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>.*?),\s*EXCID:\s*(?P<EXCID>[a-zA-Z_]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#"
Example log:
"#HLS# IID: EB_FILE_S, STEP: SEND_TOF, PKEY: Ids:100063604006, 1000653604006, 6000125104001, 6000135104001, 6000145104001, 6000155104001, STATE: IN_PROGRESS, MSG0: Sending request to K, EXCID: dcd, PROPS: EVENT_TYPE: SEND_TO_S, asd: asd #HLE#
ERROR:
"Streamed search execute failed because: Error in 'rex' command: regex="#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>.*?),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>.*?),\s*EXCID:\s*(?P<EXCID>[a-zA-Z_]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#" has exceeded configured match_limit, consider raising the value in limits.conf."
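One possible direction, sketched under two assumptions about the data (that PKEY is always "Ids:" followed only by digits, commas and spaces, and that MSG0 never contains a comma): replacing the two lazy .*? groups with bounded character classes usually removes the backtracking that trips match_limit.
| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>Ids:[\d,\s]+),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>[^,]+),\s*EXCID:\s*(?P<EXCID>[a-zA-Z_]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#"
If those assumptions don't hold for real events, the character classes need to be widened accordingly.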
Thank you for your response! Yes, what I am referring to is no longer using Splunk at all, not just one indexer. The question was more about any scripts or data that were not removed from Splunk specifically before decommissioning the server.
Splunkbase automatically archives apps that have not been updated in 24 months.  That does not mean the app no longer works or cannot be used.  However, as noted on the cited page, this app has been replaced by another. The functionality of this add-on has been incorporated into the supported Splunk Add-on for Microsoft Office 365 https://splunkbase.splunk.com/app/4055
It depends on what is meant by "decommissioning Splunk".  There is a process for removing a member from a search head or indexer cluster.  Decommissioning an independent indexer typically would mean moving that indexer's data to another indexer. If you will not be running Splunk anywhere then just stop Splunk and retire the server.
IIRC you only have access to the first row of the results so in order to send all the data you would have to contrive to have all the data in a single event.
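As a rough sketch of that approach (host, status and count are just placeholder field names - the point is to collapse everything onto one result row so the alert token can reach it):
<your alert search>
| eval row=host." | ".status." | ".count
| stats list(row) AS all_rows
| eval summary=mvjoin(all_rows, " ;; ")
The SNS action can then reference $result.summary$ rather than $result._raw$.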
I am using AWS SNS to send notifications, but I am not able to find a way to send all the results that triggered the query. I see the $result._raw$ option, but it does not contain any data in the notification. Can anyone please confirm how to send all query results to SNS? Thanks in advance.
Hi @Rhidian , it seems correct, even if I'm not a script developer; you only have to test it. Ciao. Giuseppe
Hi @Girish , using the Free license you cannot use a distributed search architecture; in other words, you can use Splunk only on a stand-alone server and with some limitations (as you can read at https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/MoreaboutSplunkFree ). Ciao. Giuseppe
Hi @mohsplunking , If you use the Add-on Builder, it also gives you the metadata file, and you don't need other folders or files. That applies only to a normal input, though: if you are using a script, you also need the bin folder. If you are speaking of an add-on to install on a Search Head for parsing activities, you also need props.conf and transforms.conf, and, if you want CIM compliance, eventtypes.conf and tags.conf as well. Ciao. Giuseppe
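As an illustration only (the app name, index, sourcetype and paths below are made up), a bare-bones TA for a syslog-ng feed could look like this:
TA-cisco-syslog/
    default/
        app.conf          app label and version
        inputs.conf       the monitor stanza(s)
        props.conf        line breaking / timestamp settings
        transforms.conf   optional routing or field extractions
    metadata/
        default.meta      export settings for the knowledge objects
    bin/                  only needed for scripted or modular inputs
and inputs.conf could hold something like:
[monitor:///var/log/syslog-ng/cisco/.../*.log]
index = network
sourcetype = cisco:syslog
disabled = 0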
Thanks. So something simple like this should work?
#!/bin/bash
# coldToFrozen script for Splunk
# Arguments:
#   $1 - Path to the cold bucket
#   $2 - Path to the frozen bucket

COLD_BUCKET_PATH="$1"
FROZEN_BUCKET_PATH="$2"

echo "Starting coldToFrozen transition..."

# Log paths for debugging
echo "Cold Bucket Path: $COLD_BUCKET_PATH"
echo "Frozen Bucket Path: $FROZEN_BUCKET_PATH"

# Ensure paths are not empty
if [ -z "$COLD_BUCKET_PATH" ] || [ -z "$FROZEN_BUCKET_PATH" ]; then
    echo "Error: Cold or Frozen bucket path is not provided."
    exit 1
fi

# Check if the cold bucket directory exists
if [ ! -d "$COLD_BUCKET_PATH" ]; then
    echo "Error: Cold bucket path does not exist."
    exit 1
fi

# Create frozen bucket directory if it does not exist
if [ ! -d "$FROZEN_BUCKET_PATH" ]; then
    echo "Creating frozen bucket directory at: $FROZEN_BUCKET_PATH"
    mkdir -p "$FROZEN_BUCKET_PATH"
fi

# Move files prefixed with 'db_' from cold to frozen
echo "Moving 'db_' files from cold to frozen..."
for file in "$COLD_BUCKET_PATH"/db_*; do
    if [ -f "$file" ]; then
        mv "$file" "$FROZEN_BUCKET_PATH"
        if [ $? -ne 0 ]; then
            echo "Error: Failed to move file $file to frozen storage."
            exit 1
        fi
    fi
done

echo "Data successfully moved to frozen storage."
exit 0
Hi @iamtheclient20 , you could try to revert the search using the same approach:
| inputlookup testLookup.csv
| rex field=severity "^(?<my_severity>\w+)"
| rex field=location "^(?<my_location>\w+)"
| rex field=vehicle "^(?<my_vehicle>\w+)"
| search [ search index=test | eval my_severity=severity, my_location=location, my_vehicle=vehicle | fields my_severity my_location my_vehicle ]
| table severity location vehicle
Remember that there's a limit of 50,000 results in the subsearch.
Let me know if I can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated
It is not usually good to start a search with a wildcard, so assuming aggr_id always starts with ldt:, you could do something like this:
| makeresults
| eval aggr_id="ldt:1234567890:09821"
| search [| makeresults | eval session_ID=1234567890 | eval aggr_id="ldt:".session_ID."*" | table aggr_id | dedup aggr_id]
The makeresults commands just set up dummy data and should be replaced by your index searches:
<index search> [ search <index search> | eval aggr_id="ldt:".session_ID."*" | table aggr_id | dedup aggr_id ]
Hi @deepakmr8 , if the rule in the second field is fixed, you could use a regex to extract the relevant part of the match:
<your_search>
| rex field=aggr_id "^\w+:(?<extract>[^:]+)"
| where session_ID=extract
Ciao. Giuseppe
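A quick self-contained way to check that extraction (the sample values are made up):
| makeresults
| eval aggr_id="ldt:1234567890:09821", session_ID="1234567890"
| rex field=aggr_id "^\w+:(?<extract>[^:]+)"
| where session_ID=extract
Here extract comes out as 1234567890, so the row survives the where clause.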
Hello Splunkers, I'm trying to push data to the indexers from HFs where I have syslog-ng receiving the logs. This is from a non-supported device, therefore a TA is not available on Splunkbase. My concern is: when I'm writing inputs.conf, can I just create one directory, call it cisco_TA, and inside that create a directory called local and place my inputs.conf there? Is that sufficient to create a custom TA and transport the logs, or should I create other directories such as default, metadata, licenses etc.? Please advise on the above. Thank you, regards, Moh.
Hello Dural, I think the issue is related to pass4SymmKey. Have you ever changed that key? If so, please share which files should be changed; if you have any guideline for that, it would be very helpful.
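For reference (only a sketch; which stanza applies depends on the deployment), pass4SymmKey normally lives in server.conf, e.g. $SPLUNK_HOME/etc/system/local/server.conf:
[general]
pass4SymmKey = <your key>

[clustering]
# must match the key set on the cluster manager
pass4SymmKey = <your key>
The key is entered in plain text and Splunk encrypts it on the next restart.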
     
Hello Rick, I tried Live Capture, but it gave the same error. I think the issue is related to pass4SymmKey.