A while ago, I had to enable a number of alerts (saved searches) for an app, so I created a simple bash script (assuming you're Linux-based) which used the API and ran through them. Take note of what @PickleRick said: you could end up with a performance issue if you enable too many. This worked for me. You need to create a Splunk token and get a list of your target alerts (saved searches) in your app, then add them to the bash script. A bit of homework, yes, but it got the job done in the end for me.

Here is an example bash script:

#!/bin/bash
# Define your variables
TOKEN="MY SPLUNK TOKEN"
SERVER="https://MY_SPLUNK_SERVER_SH:8089"
APP="MY_APP"

# Define alerts
ALERTS=("my_alert1" "my_alert2")

# Loop through each alert and enable it
for ALERT in "${ALERTS[@]}"; do
  echo "Enabling alert: $ALERT"
  curl -X POST -k -H "Authorization: Bearer $TOKEN" "$SERVER/servicesNS/nobody/$APP/saved/searches/$ALERT" -d disabled=0
  if [ $? -eq 0 ]; then
    echo "Alert $ALERT enabled successfully."
    sleep 10
  else
    echo "Failed to enable alert $ALERT."
  fi
done

You can use the search below to find your alert (saved search) names:

| rest splunk_server=local /services/saved/searches
| fields splunk_server, author, title, disabled, eai:acl.app, eai:acl.owner, eai:acl.sharing, id, search
| rename title AS saved_search_name eai:acl.app AS app eai:acl.owner AS owner eai:acl.sharing AS sharing search AS spl_code
| eval is_enabled = case(disabled >= 1, "disabled", 1=1, "enabled")
```| search app=YOUR APP NAME ```
| table splunk_server, author, saved_search_name, disabled, is_enabled, app, owner, sharing, spl_code
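If you want to sanity-check which alerts are still disabled before (or after) running the script, the same endpoint can be queried read-only. This is a sketch under the same assumptions as above (TOKEN, SERVER and APP are placeholders); output_mode=json is a standard splunkd REST parameter:

```shell
#!/bin/bash
# Placeholder values -- replace with your own token, server and app.
TOKEN="MY_SPLUNK_TOKEN"
SERVER="https://MY_SPLUNK_SERVER_SH:8089"
APP="MY_APP"

# List every saved search in the app together with its disabled flag.
# count=0 means "no paging limit"; python3 filters the JSON response.
curl -s -k -H "Authorization: Bearer $TOKEN" \
  "$SERVER/servicesNS/nobody/$APP/saved/searches?output_mode=json&count=0" \
  | python3 -c '
import json, sys
for entry in json.load(sys.stdin)["entry"]:
    print(entry["name"], "disabled" if entry["content"]["disabled"] else "enabled")
'
```

Running it after the enable loop is a quick way to confirm every alert in your list actually flipped to enabled.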
Hi, one additional hint: look at where this action field is defined. You can use btool for that; you will need CLI access to check this. Another option would be some additional app from Splunkbase.

splunk btool props list --debug | egrep action

The above command (run it as the splunk service user) should show where it is defined (i.e. which conf file contains that definition) and how. Especially if you have some TAs etc. which also use the same name for a field, that could be the reason.

There are some other parameters which you may need if you have to run this in a user + app context. You should check those in the docs or use the help option on the CLI.

r. Ismo
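A minimal sketch of the btool variants mentioned above; the app and user names are placeholders, not from the original post:

```shell
# Run as the splunk service user. --debug prefixes each line with the
# conf file it came from, which is what reveals a conflicting TA.
splunk btool props list --debug | egrep 'action'

# Narrow the merged view to a single app context (e.g. a suspect TA):
splunk btool props list --debug --app=Splunk_TA_windows | egrep 'EXTRACT|FIELDALIAS'

# Or evaluate in a specific user + app context:
splunk btool props list --debug --app=my_app --user=admin | egrep 'action'
```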
If I have understood right, this is doable also on an IHF + DS combination, but it could be tricky as those functions are different. Also, if you have more than 50 clients then you should/must have a separate DS server for those.

Since 9.2.x the DS expects that there are some local indexes where it stores information about DS actions. If you don't have those, or you are sending all events to your real indexers, then this won't work. If I recall right there are some instructions for how this can be done, but I would prefer that you install one new dedicated server for the DS and use those local indexes for the DS function. That way it will be much easier to get it to work.

The other option is to look in the community, the docs and the Splunk Usergroups Slack for how this can be done on a combined IHF + DS. It needs some additional tuning of outputs.conf at least; maybe some other conf files too?
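For what it's worth, the outputs.conf tuning usually discussed for keeping the DS's local indexes on a forwarding host is along these lines. Treat it as a sketch from memory, not a verified recipe, and check the stanza names against the docs for your Splunk version:

```
# outputs.conf on the combined IHF + DS -- sketch only, verify in docs

[indexAndForward]
# Keep a local indexing pipeline available so the DS can write to its
# own local _ds* indexes instead of forwarding them to the indexers.
index = true
selectiveIndexing = true
```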
Ok, then you have that role. How have you defined this alert? Can you give a screenshot of it?
That is an interesting issue, since everything looks OK configuration-wise. The best guess is that you are hitting the maximum limit of knowledge bundle replication and max_content_length.

Below is the recommended setting, as per the Splunk documentation, in distsearch.conf on the search head:

[replicationSettings]
maxBundleSize = 4096

You must also increase max_content_length in server.conf on the indexers (search peers) to at least 4 GB, and also on the search head and deployer:

[httpServer]
max_content_length = 4294967296
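A quick way to check whether bundle size really is the problem: on the search head, look at how large the most recent knowledge bundles are. The var/run location is the usual bundle directory; adjust SPLUNK_HOME for your install:

```shell
# Show the sizes of the most recent knowledge bundles on the search head.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
ls -lh "$SPLUNK_HOME"/var/run/*.bundle 2>/dev/null | tail -5
```

If the newest bundle is near your maxBundleSize (in MB), that supports the replication-limit theory.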
There may be something in splunkd.log (not sure); you'll find it in $SPLUNK_HOME\var\log\splunk.

What's the output of this? (I'm starting to think the root cacert.pem has something to do with this.)

openssl x509 -in "c:\Program Files\Splunk\etc\auth\cacert.pem" -text -noout

Does it show it's expired? Maybe that has something to do with it. Try renaming that file, cacert.pem (or it could be ca.pem), and do a restart.
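If you just want a yes/no on expiry rather than reading the full -text dump, openssl has flags for that. A small sketch (the Linux default path below is an assumption; substitute your actual cert path, e.g. the Windows one from the post):

```shell
# On Linux the CA cert is typically $SPLUNK_HOME/etc/auth/cacert.pem.
CERT="/opt/splunk/etc/auth/cacert.pem"

# Print only the expiry date:
openssl x509 -in "$CERT" -enddate -noout

# Exit status check: 0 = still valid now, non-zero = already expired.
openssl x509 -in "$CERT" -checkend 0 >/dev/null \
  && echo "certificate still valid" \
  || echo "certificate has expired"
```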
Hi @joe06031990, it's a request from many of us: go to Splunk Ideas and vote for it. Maybe someone on the Splunk project will consider the request! Ciao. Giuseppe
Hi @BB_MW,

the Interesting Fields list only shows fields contained in at least 20% of the events. If you add action=* to your search, do you see the field?

Then, did you check whether the sourcetype that you associated with the field extraction is present in the events that you're searching? The new field extraction applies only to the declared sourcetype.

Ciao.

Giuseppe
We have Splunk installed on a Linux machine under /opt/splunk. We have created an add-on and added Python code, which is saved in modalert_test_webhook_helper.py under "/opt/splunk/etc/apps/splunk_app_addon-builder/local/validation/TA-splunk-webhook-final/bin/ta_splunk_webhook_final".

We want to create a parameter in a config file whose value is a list of REST API endpoints, and read that in the Python code. If the REST API endpoint entered by the user while adding the action to an alert is present in the list in the config file, only then should the process_data action proceed in Python; otherwise it should display a message saying the REST API endpoint is not present.

So now we want to know: in which .conf file should we define the parameter, what changes should we make in the Python code, and which Python file should be used, as there are many Python files under the /bin directory?

Also, after making changes in any conf or Python files and restarting, the changes are not getting saved. How do we get them to persist after restarting Splunk?

PFA screenshots of conf and Python files. Kindly help with any solution.
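Since the concrete file and stanza names have to come from your add-on, everything below is hypothetical. One common pattern is to put a comma-separated allow-list in a custom conf file in the add-on's local/ directory (e.g. an [endpoints] stanza with an allowed_endpoints key, readable at runtime through splunkd's /servicesNS/nobody/&lt;app&gt;/configs/conf-&lt;file&gt; REST endpoint), and have the alert-action helper do a membership test before calling process_data. The shell sketch below mirrors the check the Python helper would perform. On the persistence question: files under splunk_app_addon-builder/local/validation/ are commonly reported to be a temporary build area that Add-on Builder regenerates; exporting the TA and editing its own local/ directory is the usual way to make changes survive (worth verifying for your version).

```shell
# Hypothetical allow-list value, as it might be read from the conf stanza:
CONF_VALUE="https://a.example.com/hook,https://b.example.com/hook"

# Membership test the Python helper would mirror: is the user-supplied
# endpoint present in the comma-separated allow-list?
is_allowed() {
  local endpoint=$1
  echo ",$CONF_VALUE," | grep -qF ",$endpoint,"
}

is_allowed "https://a.example.com/hook" && echo "allowed" || echo "rest api endpoint is not present"   # -> allowed
is_allowed "https://evil.example.com/x" && echo "allowed" || echo "rest api endpoint is not present"   # -> rest api endpoint is not present
```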
Yes @shocko, I manually set the executable permissions for the ones that were missing, and it worked.
Hi there, thank you for your idea, but unfortunately it is not working: the path is correct. Is there any way to find out why the generation is failing? I checked some logs, but couldn't find anything helpful...
Hello @VatsalJagani,

I have the lookup table file and definition with permissions set to app level. When I run the search inside the app it fetches results, but when I run it in the main Search & Reporting app I get the error "this lookup table requires a .csv or KV store lookup definition". How do we fix this error? Any idea?

Thanks
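For what it's worth, that error usually means the lookup definition (and its backing table file) are only shared at app level, so Search & Reporting can't resolve them. Sharing them globally can be done in the UI (Settings → Lookups → Permissions → All apps) or, as a sketch with hypothetical names, via the REST ACL endpoint:

```shell
# Hypothetical names throughout -- substitute your server, app, lookup
# definition name and credentials.
SERVER="https://localhost:8089"
APP="my_app"
DEF="my_lookup_def"

ACL_URL="$SERVER/servicesNS/nobody/$APP/data/transforms/lookups/$DEF/acl"

# Share the lookup definition to all apps ("global"); this typically
# needs an admin-level role.
curl -k -u admin:changeme "$ACL_URL" -d owner=nobody -d sharing=global
```

Repeat the same POST against .../data/lookup-table-files/&lt;filename&gt;/acl so the backing CSV is shared too.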
Hi Giuseppe, I am not talking about XML tags, but HTML tags. HTML tags are used to format the text and do not give any information about fields. Text between <b> and </b> will be formatted in bold and <br> is a line break. I would like to remove these unnecessary characters from my inputs.   Ciao! Tommaso
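If the goal is just to drop the HTML tags before indexing, a SEDCMD in props.conf is a common approach; the sourcetype name below is a placeholder:

```
# props.conf -- sourcetype name is a placeholder
[my:html:sourcetype]
# Strip HTML tags such as <b>, </b> and <br> at index time
SEDCMD-strip_html = s/<\/?[a-zA-Z][^>]*>//g
```

SEDCMD runs at index time on the parsing tier, so it only affects newly indexed events; already-indexed data is unchanged.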
You could try to do it using the REST API, but I'd say it's not the best idea. If you enable too many searches, you're gonna kill your servers. So it's best to enable only those you need, not just all there are.
Hi there, Sorry, I should also have added that I'm searching in Smart Mode. The results, though, are the same for Verbose Mode. I hadn't thought of doing a stats on the fields but I can confirm that count(action) is still 0 and the count(change_type) has a positive value.
I also experienced the same issue. Does anyone have a solution to this?
Don't bother with the Interesting Fields sidebar. It contains only _extracted fields_ (so if you're searching in fast mode you'll get just the basic metadata fields or the ones explicitly used) which are present in at least 20% of the results. So this is not the way to verify if the field is properly extracted. Also remember that when using fast mode only the fields explicitly used are extracted. BTW, try your search with | stats count count(action) count(change_type)  
Hi,

I appreciate that there are numerous questions on here about similar problems but, after reading quite a few of them, nothing seems to quite fit my scenario / issue.

I am trying to extract a field from an event and call it 'action'. The entry in props.conf looks like:

EXTRACT-pam_action = (Action\: (?P<action>\[[^:\]]+]) )

I know that the extraction is working, as there is a field alias later in props.conf:

FIELDALIAS-aob_gen_syslog_alias_32 = action AS change_type

When I run a basic generating search on the index & sourcetype, the field 'action' does not appear in the 'Interesting Fields' but the 'change_type' alias does appear! The regex is fine, as I can create the 'action' field OK if I add the rex to the search. I have also added the exact same regex to the props.conf file but called the field 'action1', and that field is displayed OK.

Another test I tried was to create a field alias for the action1 field called 'action':

FIELDALIAS-aob_gen_syslog_alias_30 = action1 AS action
FIELDALIAS-aob_gen_syslog_alias_32 = action1 AS change_type

'change_type' is visible but, again, 'action' is not. Finally, my search "index=my_index action=*" produces 0 results, whereas "index=my_index change_type=*" produces accurate output.

I have looked in the props and transforms configs across my search head and can't see anything that might be 'removing' my field extraction. I guess my question is: how can I debug the creation (or not) of a field name? I have a deep suspicion that it is something to do with one of the Windows TA apps we have installed, but I am struggling to locate the offending configuration.

Many thanks for any help.

Mark
Hi,   Is there a way of bulk enabling alerts in Splunk enterprise?   Thanks,   Joe
@splunkreal: Thanks. I tried the command, but no luck.