All Posts

It depends on what is meant by "decommissioning Splunk".  There is a process for removing a member from a search head or indexer cluster.  Decommissioning an independent indexer typically would mean moving that indexer's data to another indexer. If you will not be running Splunk anywhere then just stop Splunk and retire the server.
IIRC you only have access to the first row of the results so in order to send all the data you would have to contrive to have all the data in a single event.
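One way to contrive that is to collapse all rows into a single result with stats, so the alert token reads everything from the first (and only) row. This is a sketch only — the field name all_events and the separator are illustrative, not from the original search:

| stats list(_raw) as all_events
| eval all_events=mvjoin(all_events, " ||| ")

You could then reference $result.all_events$ in the notification payload. Note that list() keeps at most 100 values, so very large result sets would still be truncated.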
I am using AWS SNS to send notifications, but I am not able to find a way to send all the results that triggered the query. I see $result._raw$ option but it does not contain any data in the notification.  Can anyone please help to confirm how to send all query results to SNS? Thanks in advance.    
Hi @Rhidian , it seems correct, even if I'm not a script developer, you have only to test it. Ciao. Giuseppe
Hi @Girish , using the Free license you cannot use a distributed search architecture; in other words, you can use Splunk only on a stand-alone server and with some limitations (as you can read at https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/MoreaboutSplunkFree ). Ciao. Giuseppe
Hi @mohsplunking , If you use the Add-on Builder, it also gives you the metadata folder; you don't need other folders or files. That applies only if you're speaking of a normal input: if you are using a script, you also need the bin folder. If you are speaking of an add-on to install on a Search Head for parsing activities, you also need props.conf and transforms.conf, and, if you want CIM compliance, also eventtypes.conf and tags.conf. Ciao. Giuseppe
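As a concrete sketch of the minimal layout being described — all names below (the TA name, monitored path, index, and sourcetype) are illustrative assumptions, not from the thread:

```shell
#!/bin/sh
# Create a minimal custom TA: just local/inputs.conf plus a metadata folder
TA_HOME="./TA-cisco-syslog"   # hypothetical TA name

mkdir -p "$TA_HOME/local" "$TA_HOME/metadata"

# A bare-bones file monitor input (path, index, and sourcetype are assumptions)
cat > "$TA_HOME/local/inputs.conf" <<'EOF'
[monitor:///var/log/syslog-ng/cisco]
index = network
sourcetype = cisco:syslog
disabled = false
EOF

# Permissions so the app's objects are shared system-wide
cat > "$TA_HOME/metadata/local.meta" <<'EOF'
[]
access = read : [ * ], write : [ admin ]
export = system
EOF
```

You would then copy the folder to $SPLUNK_HOME/etc/apps/ on the HF and restart Splunk; default/, licenses/, etc. are not required for a simple input-only TA.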
Thanks. So something simple like this should work?

#!/bin/bash
# coldToFrozen script for Splunk
# Arguments:
#   $1 - Path to the cold bucket
#   $2 - Path to the frozen bucket

COLD_BUCKET_PATH="$1"
FROZEN_BUCKET_PATH="$2"

echo "Starting coldToFrozen transition..."

# Log paths for debugging
echo "Cold Bucket Path: $COLD_BUCKET_PATH"
echo "Frozen Bucket Path: $FROZEN_BUCKET_PATH"

# Ensure paths are not empty
if [ -z "$COLD_BUCKET_PATH" ] || [ -z "$FROZEN_BUCKET_PATH" ]; then
  echo "Error: Cold or Frozen bucket path is not provided."
  exit 1
fi

# Check if the cold bucket directory exists
if [ ! -d "$COLD_BUCKET_PATH" ]; then
  echo "Error: Cold bucket path does not exist."
  exit 1
fi

# Create the frozen bucket directory if it does not exist
if [ ! -d "$FROZEN_BUCKET_PATH" ]; then
  echo "Creating frozen bucket directory at: $FROZEN_BUCKET_PATH"
  mkdir -p "$FROZEN_BUCKET_PATH"
fi

# Move files prefixed with 'db_' from cold to frozen
echo "Moving 'db_' files from cold to frozen..."
for file in "$COLD_BUCKET_PATH"/db_*; do
  if [ -f "$file" ]; then
    if ! mv "$file" "$FROZEN_BUCKET_PATH"; then
      echo "Error: Failed to move file $file to frozen storage."
      exit 1
    fi
  fi
done

echo "Data successfully moved to frozen storage."
exit 0
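For reference, a script like this is wired up per index in indexes.conf via the coldToFrozenScript setting — the stanza name and script path below are placeholders, not values from this thread. It's worth double-checking the script against the documented calling convention before relying on it, since Splunk invokes the script with the frozen-candidate bucket path as an argument and deletes the bucket itself once the script exits 0:

[your_index]
coldToFrozenScript = "$SPLUNK_HOME/bin/coldToFrozen.sh"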
Hi @iamtheclient20 , you could try to revert the search using the same approach:

| inputlookup testLookup.csv
| rex field=severity "^(?<my_severity>\w+)"
| rex field=location "^(?<my_location>\w+)"
| rex field=vehicle "^(?<my_vehicle>\w+)"
| search [ search index=test
    | eval my_severity=severity, my_location=location, my_vehicle=vehicle
    | fields my_severity my_location my_vehicle ]
| table severity location vehicle

Remember that there's the limit of 50,000 results in the subsearch. Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
It is not usually good to start a search with a wildcard, so assuming aggr_id always starts with "ldt:", you could do something like this:

| makeresults
| eval aggr_id="ldt:1234567890:09821"
| search [| makeresults
    | eval session_ID=1234567890
    | eval aggr_id="ldt:".session_ID."*"
    | table aggr_id
    | dedup aggr_id]

The makeresults commands just set up dummy data and should be replaced by your index search:

<index search> [search index
    | eval aggr_id="ldt:".session_ID."*"
    | table aggr_id
    | dedup aggr_id]
Hi @deepakmr8 , if the rule in the second field is fixed, you could use a regex to extract the relevant part of the match:

<your_search>
| rex field=aggr_id "^\w+:(?<extract>[^:]+)"
| where session_ID=extract

(Note: where is used here rather than search, so that session_ID is compared against the extract field instead of the literal string "extract".) Ciao. Giuseppe
Hello Splunkers, I'm trying to push data to the indexers from HFs where I have syslog-ng receiving the logs. This is from a non-supported device, therefore a TA is not available on Splunkbase. My concern is: when I'm writing inputs.conf, can I just create one directory, call it cisco_TA, and inside that create a directory called local and place my inputs.conf there? Is that sufficient to create a custom TA and transport the logs? Or should I create other directories such as default, metadata, licenses, etc.? Please, if someone can advise on the above. Thank you, regards, Moh.
Hello Dural, I think the issue is related to pass4SymmKey. Have you ever changed that key? If so, please share which files should be changed; and if you have any guideline for that, it would be very helpful.
Hello Rick, I tried Live Capture, but it gave the same error, I think the issue is related to pass4SymmKey.
Hi, I have two fields, and these fields will be in two different events. Now I want to search for events where aggr_id=*session_ID*; basically I'm looking to search for field1=*field2*.
field1: session_ID= 1234567890
field2: aggr_id= ldt:1234567890:09821
My free 60-day trial expired and I have now updated the license to a free trial, but I'm unable to use the Splunk search head. Each time I get this: "Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK." I have already updated the license and restarted the application. Please help me with this.
Hello @gcusello  Based on your query, it will return the values from the lookup; once the values are returned, the index search will use them to find events. What I am looking for is to check only whether the values of the event fields severity, location, and vehicle are present inside the lookup field values. Anyway, I appreciate your idea. Thank you.
haha, so you are the same one, awesome!
Hi @skramp, if you are talking about this post, I am the post writer; I thought to post the question in the Splunk community to get support on this. Below is one of the CSVs I have tried to import:

ServiceTitle;ServiceDescription;DependentServices;
Splunk;;SHC | IND;
SHC;;Server1;
IND;;server2;
server1;;;
server2;;;

In the 1st environment it gives: File preview: 0 total lines. In the 2nd environment, it gets imported successfully, displaying all the rows.
I have an alert which I am trying to throttle based on a few fields from the alert, with the condition that if it triggers once, it shouldn't trigger for the next 3 days unless it has different results. But the alert runs every 15 minutes and I can see the same results from the alert every 15 minutes. My alert outputs its results to another index. Example:

blah blah , ,  , , ,  , , ,  | collect index=testindex sourcetype=testsourcetype

Based on my research, I came across a post which says: Since a pipe command is still part of the search, throttling would have no effect, because the search hasn't completed yet and can't be throttled. I think this is because the front end says "After an alert is triggered, subsequent alerts will not be triggered until after the throttle period", but that doesn't say they aren't run. Is that the case? If so, how can I stop writing duplicate values to my index?
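One way to avoid re-collecting the same rows regardless of throttling is to exclude results already present in the summary index before the collect runs. This is only a sketch — key_field stands in for whatever uniquely identifies a result, and the 3-day lookback mirrors the intended throttle window:

blah blah , ,  , , ,  , , ,
| search NOT [ search index=testindex sourcetype=testsourcetype earliest=-3d
    | fields key_field
    | format ]
| collect index=testindex sourcetype=testsourcetype

The subsearch builds an OR-ed filter of keys already collected in the last 3 days, so only genuinely new results reach collect; keep in mind the usual subsearch result limits apply.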