All Posts


This is perfect. Thank you! I only had to add the missing "by" in | eventstats values(pod_name_all) as pod_name_all by importance

index=abc sourcetype=kubectl
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| where sourcetype == "kubectl"
| bin span=1h@h _time
| stats values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all by importance _time
| append [ inputlookup pod_list | rename pod_name_lookup as pod_name_all ]
| eventstats values(pod_name_all) as pod_name_all by importance
| eval missing = if(isnull(pod_name_all), pod_name_all, mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all))))
| where isnotnull(missing)
| timechart span=1m@m dc(missing) by importance
Hi Team, is it possible to update/enrich a notable after executing a playbook in Splunk SOAR, with the execution output attached to the Splunk notable? Example: assume I have a correlation search named "one" that triggers a notable and runs a playbook action. Once the search triggers and the notable is created, the "run a playbook" action should execute in SOAR and attach its output to the notable that was created. Think of attaching the IP reputation/geolocation of an IP to the notable, so that the SOC can work without logging into VirusTotal or other sites. Thank you
Hi @Jyo_Reel, 8089 is a management port and it's already encrypted. Anyway, the traffic port (by default 9997) can be encrypted; for more details see https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/ConfigureSplunkforwardingtousesignedcertificates Ciao. Giuseppe
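As a rough sketch of what encrypting the 9997 traffic looks like (the stanza names, certificate paths, and hostnames below are placeholder assumptions, not taken from the post or the docs page):

```
# On the indexer: inputs.conf (hypothetical cert paths)
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
sslPassword = <cert password>

# On the forwarder: outputs.conf
[tcpout:encrypted_group]
server = my_indexer.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/forwarder.pem
sslPassword = <cert password>
useSSL = true
```

Both instances need a restart after the change; the linked docs cover creating and signing the certificates themselves.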
I know someone who has used this; it's a flavour of Red Hat / CentOS, so you should be fine. Here's the Splunk OS support matrix for the supported kernel versions: https://docs.splunk.com/Documentation/Splunk/9.2.1/Installation/SystemRequirements
Hello, can traffic on port 8089 be encrypted? What are the pros and cons?
If I have 6 search peers configured in the distsearch.conf file but 3 of them go down, can Splunk recognize that a host is down and continue skipping down the list until it gets a live host?
Hello, is Splunk 9.0 compatible with Oracle Linux?
That WARN is just for extra security. It's still having issues with the server.pem file. I'm out of options to check, mate; consider logging a support call. Or, if this is an option for you, back up the /etc/apps folder, re-install Splunk, and restore the backed-up /etc/apps folder. I know this is a drastic step, but it might be quicker.
I recently installed the MISP add-on app from Splunk to integrate our MISP environment feed into Splunk using the URL and the Auth API. I was able to configure it with the details required by the MISP add-on app. However, after the configuration, I'm getting the following error: (Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability). Furthermore, looking into the role capabilities under the Splunk UI settings, I don't see the "dispatch_rest_to_indexers" capability either. Could someone please assist?
Thank you! 
I have checked the log; there is nothing there. In fact, only one log has new entries. These are the last entries from splunkd-utility.log:

05-17-2024 16:44:40.570 +0200 INFO ServerConfig - Found no hostname options in server.conf. Will attempt to use default for now.
05-17-2024 16:44:40.570 +0200 INFO ServerConfig - Host name option is "".
05-17-2024 16:44:40.570 +0200 INFO ServerConfig - TLS Sidecar disabled
05-17-2024 16:44:40.570 +0200 WARN SSLOptions - server.conf/[sslConfig]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
05-17-2024 16:44:40.570 +0200 INFO ServerConfig - No 'C:\Program Files\Splunk\etc\auth\server.pem' certificate found. Splunkd communication will not work without this. If this is a fresh installation, this should be OK.
05-17-2024 16:44:40.586 +0200 INFO ServerConfig - disableSSLShutdown=0
05-17-2024 16:44:40.586 +0200 INFO ServerConfig - Setting search process to have long life span: enable_search_process_long_lifespan=1
05-17-2024 16:44:40.586 +0200 INFO ServerConfig - enableTeleportSupervisor=0, scsEvironment=production
05-17-2024 16:44:40.586 +0200 INFO ServerConfig - certificateStatusValidationMethod is not set, defaulting to none.
05-17-2024 16:44:40.586 +0200 INFO ServerConfig - Splunk is starting with EC-SSC disabled

cacert.pem is valid until 2027, and I have checked server.conf, which has no entry for hostname. But this seems to be normal; I have checked against another installation.
This has been fixed by Splunk. It works as expected in at least 9.1.3+: _meta = foo::bar
Update for an old post, as Splunk has fixed this. Currently (at least 9.1.3+) you can use _meta also in HEC's inputs.conf.
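For anyone searching for the syntax, a minimal sketch of a HEC token stanza carrying _meta (the stanza name and token value here are made-up placeholders):

```
# inputs.conf on the instance receiving HEC traffic
[http://my_hec_token]
disabled = 0
token = <your-token-guid>
# Adds an indexed field foo=bar to every event received on this token
_meta = foo::bar
```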
Hello Splunk Community, To combine two search results where you are interested in the last x/y events from each subquery, you can pipe the output of the first search into the second. For instance, command-line tools such as grep, awk, or sed can filter and merge exported results. If you're dealing with more complex data, consider a programming language like Python with libraries such as pandas for manipulating and merging the results.
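Within Splunk itself, the same result is usually achieved with subsearches rather than external tools; a minimal sketch, where the index and sourcetype names are placeholder assumptions:

```
index=main sourcetype=first_source
| head 10
| append
    [ search index=main sourcetype=second_source
      | head 5 ]
| sort - _time
```

The outer head keeps the last 10 events of the first search, the subsearch contributes the last 5 of the second, and the final sort interleaves them by time.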
Hi, are you fulfilling these requirements? https://docs.splunk.com/Documentation/Splunk/latest/Data/DataIngest r. Ismo
Hi, it shouldn't be too much. Could you show your inputs.conf inside a </> block? Also, which UF version and OS do you have? Have you also checked that your UF user has access to this new (?) or truncated file? What output do the commands

splunk list inputstatus
splunk list monitor

give? Can you find this individual file in them, and what status does it have? r. Ismo
Could you give some more information about your issue and what you have already tried?
A while ago, I had to enable a number of alerts (saved searches) for an app. I created a simple bash script (assuming you're Linux-based) which used the API, and this ran through them. Take note of what @PickleRick said: you could end up with a performance issue if you enable too many. This worked for me. You need to create a Splunk token and get a list of your target alerts (saved searches) in your app, then add them to the bash script. A bit of homework, yes, but it got the job done in the end for me. Here is an example bash script:

#!/bin/bash
# Define your variables
TOKEN="MY SPLUNK TOKEN"
SERVER="https://MY_SPLUNK_SERVER_SH:8089"
APP="MY_APP"

# Define alerts
ALERTS=("my_alert1" "my_alert2")

# Loop through each alert and enable it
for ALERT in "${ALERTS[@]}"; do
  echo "Enabling alert: $ALERT"
  curl -X POST -k -H "Authorization: Bearer $TOKEN" "$SERVER/servicesNS/nobody/$APP/saved/searches/$ALERT" -d disabled=0
  if [ $? -eq 0 ]; then
    echo "Alert $ALERT enabled successfully."
    sleep 10
  else
    echo "Failed to enable alert $ALERT."
  fi
done

You can use the search below to find your alert search names:

| rest splunk_server=local /services/saved/searches
| fields splunk_server, author, title, disabled, eai:acl.app, eai:acl.owner, eai:acl.sharing, id, search
| rename title AS saved_search_name eai:acl.app AS app eai:acl.owner AS owner eai:acl.sharing AS sharing search AS spl_code
| eval is_enabled = case(disabled >= 1, "disabled", 1=1, "enabled")
```| search app=YOUR APP NAME ```
| table splunk_server, author, saved_search_name, disabled, is_enabled, app, owner, sharing, spl_code
Hi, one additional hint on finding where this action field is defined. You could use btool for that; you will need CLI access. Another option is some additional app from Splunkbase.

splunk btool props list --debug | egrep action

The above command (run it as the splunk service user) should show where it is defined (which conf file contains the definition) and how. Especially if you have some TAs etc. which also use the same field name, that could be the reason. There are some other parameters which you may need if you run this in a user + app context. You should check those in the docs or use the help option on the CLI. r. Ismo
If I have understood right, this is doable also on an IHF + DS combination, but it could be tricky as those functions are different. Also, if you have more than 50 clients, then you should/must have a separate DS server for those. Since 9.2.x, the DS server expects some local indexes where it stores information about DS actions. If you don't have those, or you are sending all events to your real indexers, this won't work. If I recall right, there are some instructions for how this can be done, but I'd prefer that you install one new dedicated server for the DS and use those local indexes for the DS function. That way it will be much easier to get it to work. The other option is to look in the community, the docs, and the Splunk usergroup Slack for how to do this on a combined IHF + DS. It needs some additional tuning in outputs.conf at least; maybe some other conf files too?