All Posts


Thank you! 
I have checked the log; there is nothing there. In fact, only one log has new entries. These are the last entries from splunkd-utility.log:

05-17-2024 16:44:40.570 +0200 INFO ServerConfig - Found no hostname options in server.conf. Will attempt to use default for now.
05-17-2024 16:44:40.570 +0200 INFO ServerConfig - Host name option is "".
05-17-2024 16:44:40.570 +0200 INFO ServerConfig - TLS Sidecar disabled
05-17-2024 16:44:40.570 +0200 WARN SSLOptions - server.conf/[sslConfig]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
05-17-2024 16:44:40.570 +0200 INFO ServerConfig - No 'C:\Program Files\Splunk\etc\auth\server.pem' certificate found. Splunkd communication will not work without this. If this is a fresh installation, this should be OK.
05-17-2024 16:44:40.586 +0200 INFO ServerConfig - disableSSLShutdown=0
05-17-2024 16:44:40.586 +0200 INFO ServerConfig - Setting search process to have long life span: enable_search_process_long_lifespan=1
05-17-2024 16:44:40.586 +0200 INFO ServerConfig - enableTeleportSupervisor=0, scsEvironment=production
05-17-2024 16:44:40.586 +0200 INFO ServerConfig - certificateStatusValidationMethod is not set, defaulting to none.
05-17-2024 16:44:40.586 +0200 INFO ServerConfig - Splunk is starting with EC-SSC disabled

cacert.pem is valid until 2027, and I have checked server.conf, which has no entry for hostname. But this seems to be normal; I have checked against another installation.
This has been fixed by Splunk. It works as expected in versions 9.1.3 and later. _meta = foo::bar
Update for an old post, as Splunk has fixed this. Currently (at least in 9.1.3+) you can also use _meta in HEC's inputs.conf.
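For reference, a minimal sketch of what that could look like in a HEC token stanza (the token name, token value, and index below are made-up examples, not from the original post):

```
# inputs.conf on the instance hosting HEC (hypothetical token stanza)
[http://my_hec_token]
disabled = 0
index = main
_meta = foo::bar
```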
Hello Splunk Community, To combine two search results where you are interested in the last x/y events from each subquery, you can use streaming commands effectively by piping the output of the first search into the second. For instance, you can use command-line tools like grep, awk, or sed to filter and merge the results. If you're dealing with more complex data, consider using a programming language like Python with libraries such as pandas for better manipulation and merging of search results. Best Regards!
Hi, are you fulfilling these requirements? https://docs.splunk.com/Documentation/Splunk/latest/Data/DataIngest r. Ismo
Hi, it shouldn't be too much. Could you show your inputs.conf inside a </> block? Also, which UF version and OS do you have? Have you also checked that your UF user has access to this new (?) or truncated file? What do the following commands output?

splunk list inputstatus
splunk list monitor

Can you find this individual file in their output, and what status does it have? r. Ismo
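For comparison, here is a minimal monitor stanza sketch (the path, index, and sourcetype below are placeholders for illustration, not taken from your setup):

```
# inputs.conf on the UF (hypothetical file monitor)
[monitor:///var/log/myapp/app.log]
disabled = 0
index = main
sourcetype = myapp:log
```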
Could you give some more information about your issue and what you have already tried?
A while ago, I had to enable a number of alerts (saved searches) for an app. I created a simple bash script (assuming you're Linux-based) which used the API, and this ran through them. Take note of what @PickleRick said: you could end up with a performance issue if you enable too many. This worked for me. You need to create a Splunk token and get a list of your target alerts (saved searches) in your app, then add them to the bash script. A bit of homework, yes, but it got the job done in the end for me. Here is an example bash script:

#!/bin/bash
# Define your variables
TOKEN="MY SPLUNK TOKEN"
SERVER="https://MY_SPLUNK_SERVER_SH:8089"
APP="MY_APP"

# Define alerts
ALERTS=("my_alert1" "my_alert2")

# Loop through each alert and enable it
for ALERT in "${ALERTS[@]}"; do
  echo "Enabling alert: $ALERT"
  curl -X POST -k -H "Authorization: Bearer $TOKEN" "$SERVER/servicesNS/nobody/$APP/saved/searches/$ALERT" -d disabled=0
  if [ $? -eq 0 ]; then
    echo "Alert $ALERT enabled successfully."
    sleep 10
  else
    echo "Failed to enable alert $ALERT."
  fi
done

You can use the search below to find your alert search names:

| rest splunk_server=local /services/saved/searches
| fields splunk_server, author, title, disabled, eai:acl.app, eai:acl.owner, eai:acl.sharing, id, search
| rename title AS saved_search_name eai:acl.app AS app eai:acl.owner AS owner eai:acl.sharing AS sharing search AS spl_code
| eval is_enabled = case(disabled >= 1, "disabled", 1=1, "enabled")
```| search app=YOUR APP NAME ```
| table splunk_server, author, saved_search_name, disabled, is_enabled, app, owner, sharing, spl_code
Hi, one additional hint on finding where this action field is defined: you could use btool to check. You will need CLI access for this. Another option is some additional app from Splunkbase.

splunk btool props list --debug | egrep action

The above command (run it as the splunk service user) should show where this field is defined (which conf file contains the definition) and how. Especially if you have some TAs etc. which also use the same name for a field, that could be the reason. There are some other parameters which you may need if you have to run this in a specific user + app context. You should check those in the docs or use the help option on the CLI. r. Ismo
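To illustrate what that pipeline does, here is a simulated run: the sample lines below are made-up btool output (the paths, sourcetype, and regex are hypothetical), but the filtering works the same against real `splunk btool props list --debug` output:

```shell
# Fake "btool props list --debug" output piped through grep; the surviving line
# shows which props.conf file defines the field (all sample lines are made up)
grep -E 'action' <<'EOF'
/opt/splunk/etc/apps/TA-foo/default/props.conf    [my_sourcetype]
/opt/splunk/etc/apps/TA-foo/default/props.conf    EXTRACT-action = action=(?<action>\S+)
/opt/splunk/etc/system/default/props.conf         SHOULD_LINEMERGE = true
EOF
```

Only the EXTRACT-action line survives the filter, telling you which conf file and app to look at.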
If I have understood correctly, this is also doable on a combined IHF + DS, but it could be tricky as those functions are different. Also, if you have more than 50 clients, then you should/must have a separate DS server for them. Since 9.2.x, the DS expects some local indexes where it stores information about DS actions. If you don't have those, or you are sending all events to your real indexers, then this won't work. If I recall correctly, there are some instructions for how this can be done, but I would prefer that you install one new dedicated server for the DS and use those local indexes for the DS function. That way it will be much easier to get it to work. The other option is to look in the community, the docs, and the Splunk Usergroups Slack for how this can be done on a combined IHF + DS. It needs some additional tuning in outputs.conf at least; maybe there were some other conf files too?
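As a very rough sketch of the outputs.conf side on a combined IHF + DS (these settings are only a starting point for selectively keeping events local while still forwarding the rest; verify against the current deployment server docs before using):

```
# outputs.conf on the combined IHF + DS (sketch only)
[indexAndForward]
index = true
selectiveIndexing = true
```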
Ok, then you have that role. How have you defined this alert? Can you give a screenshot of it?
That is an interesting issue, since everything looks OK configuration-wise. My best guess is that you are hitting the maximum limits for knowledge bundle replication and max_content_length.

Below is the recommended setting per the Splunk documentation, in distsearch.conf on the search head:

[replicationSettings]
maxBundleSize = 4096

You must also increase max_content_length in server.conf on the indexers (search peers) to at least 4 GB, and also on the search head and deployer:

[httpServer]
max_content_length = 4294967296
There may be something in splunkd.log (not sure); find it in $SPLUNK_HOME\var\log\splunk. What's the output of this? (I'm starting to think the root cacert.pem has something to do with this.)

openssl x509 -in "c:\Program Files\Splunk\etc\auth\cacert.pem" -text -noout

Does it show it's expired? Maybe that has something to do with it. Try renaming that file, cacert.pem (or it could be ca.pem), and do a restart.
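If you want to see what those openssl expiry checks look like, here is a self-contained run against a throwaway self-signed certificate (all paths and the subject are temporary demo values; this only illustrates the commands and does not touch your Splunk certs):

```shell
# Create a short-lived self-signed cert, then inspect its expiry the same way
# you would inspect cacert.pem (demo files under /tmp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo_key.pem \
  -out /tmp/demo_cert.pem -days 365 -subj "/CN=demo" 2>/dev/null

# Human-readable expiry date
openssl x509 -in /tmp/demo_cert.pem -noout -enddate

# -checkend 0 exits 0 if the cert is still valid right now
openssl x509 -in /tmp/demo_cert.pem -noout -checkend 0 && echo "cert still valid"
```

On a real cacert.pem, a non-zero exit from `-checkend 0` (or a past notAfter date) confirms the expiry theory.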
Hi @joe06031990, it's a request from many of us. Go to Splunk Ideas and vote for it: maybe someone on the Splunk project will consider the request! Ciao. Giuseppe
Hi @BB_MW, the interesting fields list only shows fields contained in at least 20% of the events; if you add action=* to your search, do you see the field? Then, did you check whether the sourcetype you associated with the field extraction is present in the events you're searching? The new field extraction applies only to the declared sourcetype. Ciao. Giuseppe
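As a sketch, a search-time extraction in props.conf looks like this (the sourcetype name and regex are hypothetical; the key point is that the stanza name must match the sourcetype of the events you are searching):

```
# props.conf in the app that owns the extraction
[my_sourcetype]
EXTRACT-action = action=(?<action>\S+)
```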
We have Splunk installed on a Linux machine under /opt/splunk. We have created an add-on and added Python code, which is saved in modalert_test_webhook_helper.py under "/opt/splunk/etc/apps/splunk_app_addon-builder/local/validation/TA-splunk-webhook-final/bin/ta_splunk_webhook_final".

We want to create one parameter in a config file whose value is a list of REST API endpoints, and read it in the Python code. If the REST API endpoint entered by the user while adding the action to an alert is present in the list in the config file, only then should the process_data action proceed in Python; otherwise, display a message saying the REST API endpoint is not present.

So now we want to know: in which conf file should we define the parameter, what changes should we make in the Python file, and which Python file should be used, as there are many Python files under the /bin directory? Also, after making changes in any conf or Python files and restarting, the changes are not getting saved. How do we get them to persist after restarting Splunk? PFA screenshots of the conf and Python files. Kindly help with any solution.
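One common pattern for the question above is a custom conf file in the add-on's local directory; the helper script can then read it (for example via Splunk's configs REST interface) and compare the user-supplied endpoint against the list. The file name, stanza, and setting below are all made-up examples for illustration, not an existing Splunk convention:

```
# $SPLUNK_HOME/etc/apps/TA-splunk-webhook-final/local/ta_webhook_settings.conf
# (hypothetical file name, stanza, and key)
[allowed_endpoints]
endpoints = https://api.example.com/hook1,https://api.example.com/hook2
```

Note that hand edits under the Add-on Builder's validation workspace may be overwritten by the builder itself, which could explain changes not persisting; editing the packaged add-on's own local/ directory is the safer place.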
Yes @shocko, I manually set the executable permissions for the ones that were missing and it worked.
Hi there, thank you for your idea, but unfortunately it did not work. The path is correct. Is there any way to find out why the generation is failing? I checked some logs but couldn't find anything helpful...
Hello @VatsalJagani, I have the lookup table file and definition with permissions set to app level. When I run the search inside the app, it fetches results, but when I run it in the main Search & Reporting app I get the error "this lookup table requires a .csv or kvstore lookup definition". How do we fix this error, any idea? Thanks
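A typical fix for that symptom is to export the lookup file and its definition globally in the owning app's metadata/local.meta, so other apps (like Search & Reporting) can see both objects (the object names below are placeholders for your actual lookup file and definition):

```
# metadata/local.meta in the app that owns the lookup (names are examples)
[lookups/mylookup.csv]
export = system

[transforms/my_lookup_definition]
export = system
```

The same change can be made in the UI via the permissions dialog by setting sharing to "All apps" for both the lookup table file and the lookup definition.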