All Topics

Hello, I am facing a strange issue with a Splunk forwarder where, on some servers of the same role, CPU usage is 0-3% while on others it is around 15%. It doesn't sound bad at first glance, but it did cause us issues with deployment, and such behavior is dangerous for live services if it grows. It started around 3 weeks ago with 9.3.0 installed on Windows Server 2019 VMs with 8 CPU cores and 24 GB RAM. I updated the forwarder to 9.3.1 and the behavior is the same. For example, we have 8 servers with the same setup and apps running on them; traffic to them is load balanced and very similar, and the number and size of log files is also very similar. 5 servers are affected, 3 are not. All of them have 10 inputs configured, of which 4 are perfmon inputs (CPU, RAM, disk space and Web Services) and 6 inputs monitor around 40 log files. Any suggestion on what to check to understand what is happening?
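A possible starting point (not from the original post): the forwarder's own metrics.log records CPU time per pipeline processor, so comparing an affected host against an unaffected one may show which input or processor is burning the time. A minimal sketch, assuming the _internal index is being forwarded and substituting your own host names:

index=_internal source=*metrics.log group=pipeline host=<affected_forwarder>
| stats sum(cpu_seconds) AS cpu_seconds BY processor
| sort - cpu_seconds

Running the same search for an unaffected host and comparing the top processors can help tell whether the extra CPU comes from file tailing, perfmon collection, or output.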
Hi, I'm new to Splunk DB Connect. We have Splunk on-prem and are trying to pull data from Snowflake audit logs and push it to Cribl.io (for log optimization and reducing log size). As Cribl.io doesn't have a connector for Snowflake (and it is not on the near-term roadmap), I am wondering if I can use Splunk DB Connect to read data from Snowflake and send it to Cribl.io, which would then send it on to the destination, i.e. Splunk (for log monitoring and alerting). Question: would this be a "double hop" to Splunk, and if yes, do any Splunk charges apply while Splunk DB Connect reads from Snowflake and sends to Cribl.io? Thank you! Avi
I'm trying to transform an error log. Below is a sample log (nginx_error):

2024/11/15 13:10:11 [error] 4080#4080: *260309 connect() failed (111: Connection refused) while connecting to upstream, client: 210.54.88.72, server: mpos.mintpayments.com, request: "GET /payment-mint/cnpPayments/v1/publicKeys?callback=jQuery360014295356911736334_1731369073329&X-Signature=plkb810sFSSSIbASLb818BMXxgtUM76QNvhI%252FBA%253D&X-Timestamp=1731368881376&X-ApiKey=CSSSAPXXXXXXPxmO7kjMi&X-CompanyToken=d1111e8lV1mpvljiCD2zRgEEU121p&_=1731369073330 HTTP/1.1", upstream: "https://10.20.3.59:28076//cnpPayments/v1/publicKeys?callback=jQuery360014295356911736334_1731369073329&X-Signature=plkb810sFY3jmET4IbASLb818BMXxgtUM76QNvhI%252FBA%253D&X-Timestamp=1731368881376&X-ApiKey=CNPAPIIk7elIMDTunrIGMuXPxmO7kjMi&X-CompanyToken=dX6E3yDe8lV1mpvljiCD2zRgEEU121p&_=173123073330", host: "test.mintpayments.com", referrer: "https://vicky9.mintpayments.com/testing??asd

We are trying to ensure that:
1) GET query parameters must not be logged
2) The referrer must not contain the query string

I have updated my config as below:

[root@dev-web01 splunkforwarder]# cat ./etc/system/local/props.conf
[source::///var/log/devops/nginx_error.log]
TRANSFORMS-sanitize_referer = remove_get_query_params, remove_referer_query

[root@dev-web01 splunkforwarder]# cat ./etc/system/local/transforms.conf
[remove_get_query_params]
REGEX = (GET|POST|HEAD) ([^? ]+)\?.*
FORMAT = $1 $2
DEST_KEY = _raw
REPEAT_MATCH = true

[remove_referer_query]
REGEX = referrer: "(.*?)\?.*"
FORMAT = referrer: "$1"
DEST_KEY = _raw
REPEAT_MATCH = true

I have verified that the regexes are correct, and when I run the commands below to list the configuration, the stanzas are present:

/opt/splunkforwarder/bin/splunk btool transforms list --debug
/opt/splunkforwarder/bin/splunk btool props list --debug

Still I can see no transformation in the logs. What could be the issue here? We are using a custom splunkforwarder in our environment.
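A note worth adding here (not from the original post): props/transforms rules with DEST_KEY = _raw are index-time operations and run where event parsing happens, i.e. on a heavy forwarder or on the indexers, not on a universal forwarder. On a UF, btool will happily list the stanzas, but they are never executed. Assuming parsing happens on the indexer tier, an alternative sketch of the same sanitization using SEDCMD (regexes adapted from the question, untested):

# props.conf on the parsing tier (heavy forwarder or indexer), not on the UF
[source::///var/log/devops/nginx_error.log]
# drop the query string from the request line
SEDCMD-strip_request_query = s/((GET|POST|HEAD) [^? ]+)\?[^ "]*/\1/g
# drop the query string from the referrer
SEDCMD-strip_referrer_query = s/(referrer: "[^"?]*)\?[^"]*/\1/g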
Hello, there is an app for Aruba EdgeConnect: https://splunkbase.splunk.com/app/6302. Is there any documentation on how to get the logs ingested into Splunk from Aruba EdgeConnect?
Hello, I need some help with an Imperva to Splunk Cloud integration. I am using the Splunk Add-on for AWS on my cloud search head, and from there I configured the Imperva account using the key ID and secret ID for the Imperva S3 bucket. For the input I am using Incremental S3. The logs are coming into Splunk Cloud, but there are some misses: I can see logs that are available in the AWS S3 bucket but somehow are not being ingested into Splunk Cloud. I am not finding any help online, which is why I am posting the question here. Could somebody please advise? Thank you.
Hi all, we have a specific AD group for each application: we create an index for that app and restrict access to that index to that AD group (covering all users of that specific app). Generally we are given an FQDN/hostname and we map it to the particular index. In this way we have ended up with numerous AD groups and indexes, but our client wants fewer AD groups because so many are difficult to maintain. So here is my question: is there any way to reduce the number of AD groups by restricting access by sourcetype rather than by index? In other words, can one index hold multiple applications, with access restricted per sourcetype? If yes, please help me with the approach.
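Not from the original post, but one approach seen in practice is to keep a shared index and attach a search filter to each role in authorize.conf; the filter is appended to every search that role runs. A minimal sketch, where the role names, index, and sourcetypes are hypothetical:

# authorize.conf
[role_app_a_users]
srchIndexesAllowed = shared_app_index
srchFilter = sourcetype=app_a_logs

[role_app_b_users]
srchIndexesAllowed = shared_app_index
srchFilter = sourcetype=app_b_logs

One caveat to verify in the authorize.conf docs: when a user holds multiple roles, the search filters can combine with OR, which may expose more data than intended, so the role-to-AD-group mapping still needs care.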
Hello friends! Long time gawker, first time poster. I wanted to share my recent journey on backing up and restoring Splunk user search history, for users who decided to migrate their search history to the KV store using the feature mentioned in the release notes. As with all backups/restores, please make sure you test. Hope this helps someone else. Thanks to all who helped test and validate (and listened to me vent) along the way! Please feel free to share your experiences if you use this feature, or if I may have missed something. I'll throw the code up shortly as well.

https://docs.splunk.com/Documentation/Splunk/9.1.6/ReleaseNotes/MeetSplunk
Preserve search history across search heads
Search history is lost when users switch between various nodes in a search head cluster. This feature utilizes KV store to keep search history replicated across nodes. See search_history_storage_mode in limits.conf in the Admin Manual for information on using this functionality.

### Backup KV store - pick your flavor of backing up (REST API, Splunk CLI, or a Splunk app like "KV Store Tools Redux")
# To back up just the search history collection
/opt/splunk/bin/splunk backup kvstore -archiveName `hostname`-SearchHistory_`date +%s`.tar.gz -appName system -collectionName SearchHistory

# To back up the entire KV store (most likely a good idea)
/opt/splunk/bin/splunk backup kvstore -archiveName `hostname`-SearchHistory_`date +%s`.tar.gz

### Restore archive
# Change directory to the location of the archive backup
cd /opt/splunk/var/lib/splunk/kvstorebackup
# Locate the archive to restore
ls -lst
# List the archive contents (optional, but helpful to see what's inside and how the archive will extract, to ensure you don't overwrite expected files)
tar ztvf SearchHistory_1731206815.tar.gz
-rw------- splunk/splunk 197500 2024-11-10 02:46 system/SearchHistory/SearchHistory0.json
# Extract the archive or selected files
tar zxvf SearchHistory_1731206815.tar.gz system/SearchHistory/SearchHistory0.json

### Parse archive to prep the restore
# Change directory to where the archive was extracted
cd /opt/splunk/var/lib/splunk/kvstorebackup/system/SearchHistory
# Create/copy the splunk_parse_search_history_kvstore_backup_per_user.py script to parse archives in the directory to /tmp (or someplace else) and run it on the archive(s)
./splunk_parse_search_history_kvstore_backup_per_user.py /opt/splunk/var/lib/splunk/kvstorebackup/system/SearchHistory/SearchHistory0.json
# List the files created
ls -ls SearchHistory0*
96 -rw-rw-r-- 1 splunk splunk  95858 Nov 14 23:12 SearchHistory0_admin.json
108 -rw-rw-r-- 1 splunk splunk 108106 Nov 14 23:12 SearchHistory0_nobody.json

### Restore the archives needed
# NOTE: To prevent search history leaking between users, you MUST restore to the corresponding user context
# Either loop/iterate through the restored files or do them one at a time, calling the corresponding REST API
curl -k -u admin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory/batch_save -H "Content-Type: application/json" -d @SearchHistory0_<user>.json

### Validate that the SearchHistory KV store was restored properly for the user by calling the REST API and/or logging into Splunk as that user, navigating to "Search & Reporting" and selecting "Search History"
curl -k -u admin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory

#### NOTE: There are default limits in the KV store that you need to account for if your files are large!
If you run into problems, review your splunkd.log and/or the KV Store dashboards within the MC (Search --> KV Store)

# /opt/splunk/bin/splunk btool limits list --debug kvstore
/opt/splunk/etc/system/default/limits.conf    [kvstore]
/opt/splunk/etc/system/default/limits.conf    max_accelerations_per_collection = 10
/opt/splunk/etc/system/default/limits.conf    max_documents_per_batch_save = 50000
/opt/splunk/etc/system/default/limits.conf    max_fields_per_acceleration = 10
/opt/splunk/etc/system/default/limits.conf    max_mem_usage_mb = 200
/opt/splunk/etc/system/default/limits.conf    max_queries_per_batch = 1000
/opt/splunk/etc/system/default/limits.conf    max_rows_in_memory_per_dump = 200
/opt/splunk/etc/system/default/limits.conf    max_rows_per_query = 50000
/opt/splunk/etc/system/default/limits.conf    max_size_per_batch_result_mb = 100
/opt/splunk/etc/system/default/limits.conf    max_size_per_batch_save_mb = 50
/opt/splunk/etc/system/default/limits.conf    max_size_per_result_mb = 50
/opt/splunk/etc/system/default/limits.conf    max_threads_per_outputlookup = 1

### Troubleshooting
# To delete the entire SearchHistory KV Store collection (because maybe you inadvertently restored everything to an incorrect user, were testing, or due to other shenanigans)
/opt/splunk/bin/splunk clean kvstore -app system -collection SearchHistory

# To delete a user-specific context in the SearchHistory KV Store (because see above)
curl -k -u admin:splunk@dmin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory -X DELETE

### Additional Notes
It was noted that restoring for a user that has not logged in yet may report messages similar to "Action forbidden". To remedy this, you might be able to create a local user and then restore again.
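The splunk_parse_search_history_kvstore_backup_per_user.py script referenced above isn't posted yet; as a stopgap, something like the following jq loop can split a backup file by user, assuming each record in the exported JSON is an element of a top-level array and carries a user field (verify both assumptions against your own export before relying on this):

# split SearchHistory0.json into one file per user (assumes a "user" field on each record)
for u in $(jq -r '.[].user' SearchHistory0.json | sort -u); do
  jq --arg u "$u" '[ .[] | select(.user == $u) ]' SearchHistory0.json > "SearchHistory0_${u}.json"
done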
Hi, I am deploying Splunk Enterprise and will eventually be forwarding Check Point firewall logs using Check Point's Log Exporter. Check Point provides the option to select "Syslog" or "Splunk" as the log format (there are some other formats as well); I will choose "Splunk". I need to know how to configure Splunk Enterprise to receive encrypted traffic from Check Point if I enable TLS on the Check Point side. Can someone enlighten me on this please? Thanks!
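Since Log Exporter is not a Splunk forwarder, the receiving side is typically a TLS-enabled network input rather than a splunktcp input. A minimal sketch of inputs.conf on the receiving Splunk instance; the port, certificate path, password, index, and sourcetype are placeholders to adapt (the sourcetype in particular should match whatever Check Point add-on is in use):

# inputs.conf on the receiving Splunk Enterprise instance
[tcp-ssl://6514]
index = checkpoint
sourcetype = cp_log

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/splunk-server.pem
sslPassword = <certificate password>
requireClientCert = false

The certificate presented here must be trusted by the Check Point side (or verification relaxed there), and the same port has to be configured as the Log Exporter target.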
My effective daily volume was 3072 MB, coming from 3 licenses of 1024 MB each. These were about to expire, so we added another 3 licenses of 1024 MB. But when the first 3 licenses expired, the effective daily volume went from 6144 MB down to 1024 MB instead of 3072 MB. Does anyone know why it is not counting the 3 remaining licenses correctly? Even after removing the 3 expired ones it is limited to 1024 MB only, and even restarting Splunk gives the same result.
Hi everyone, I'm trying to personalize the "Configuration" tab of my app generated by Add-on Builder. By default, when we add an account, we enter the Account Name / Username / Password. First, I would simply like to change the labels for Username and Password to Client ID and Client Secret (and secondly add a Tenant ID field). I achieved this by editing the file $SPLUNK_HOME/etc/apps/<my_app>/appserver/static/js/build/globalConfig.json and then incrementing the version number in the app properties (as shown in this post: https://community.splunk.com/t5/Getting-Data-In/Splunk-Add-on-Builder-Global-Account-settings/m-p/570565). However, when I make new modifications elsewhere in my app, the globalConfig.json file is reset to its default values. Do you know how to get around this? Splunk version: 9.2.1. Add-on Builder version: 4.3.0. Thanks
I am confused about why I only get IDs from my Salesforce ingest. For example, I am not getting Username, Profile Name, Dashboard Name, Report Name, etc.; I am getting the User ID, Profile ID, Dashboard ID, and so forth, which makes searches really difficult. How am I supposed to correlate an ID to readable, relevant information, e.g. where User_ID equates to a Username (Davey Jones)? Help please.
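Not from the original post, but a common pattern is to also collect the Salesforce object data (User, Dashboard, Report, etc.) with the add-on and turn it into lookups that translate IDs to names at search time. A rough sketch; the index, sourcetypes, and field names below are assumptions to check against your own data:

Build the lookup on a schedule:
index=salesforce sourcetype="sfdc:user"
| stats latest(Name) AS Username BY Id
| outputlookup sfdc_user_lookup

Then enrich the event log data:
index=salesforce sourcetype="sfdc:logfile"
| lookup sfdc_user_lookup Id AS USER_ID OUTPUT Username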
I am looking to build an alert that sends an email if someone locks themselves out of their Windows account after so many attempts. I built a search, but when using real-time alerting I was bombarded with emails every few minutes, which doesn't seem right. I would like the alert to be set up such that if user X types the password wrong 3 times and AD locks the account, an email is sent to Y's email address. Real-time alerting seems to be what I would need, but it bombards me way too much.
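A scheduled alert on the Windows lockout event is usually enough here and avoids the real-time noise: run it every few minutes over the same window, throttle per user, and attach the email action. A sketch, assuming the security events land in an index/source like the ones below (adjust to your environment); EventCode 4740 is the account-lockout event logged on the domain controller:

index=wineventlog source="*WinEventLog:Security" EventCode=4740
| stats count AS lockouts latest(_time) AS last_lockout BY user
| convert ctime(last_lockout)

Scheduled every 5 minutes over the last 5 minutes with throttling on the user field, this tends to fire once per lockout rather than repeatedly.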
Has anyone figured out how to run a PowerShell input only at its scheduled time? In addition to the scheduled time, it runs every time the forwarder is restarted.
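For context, a scheduled PowerShell input is typically declared as below (stanza name, script path, schedule, and index/sourcetype are placeholders); this only illustrates the schedule syntax, it does not by itself change the run-on-restart behavior being asked about:

# inputs.conf
[powershell://DailyInventory]
script = . "$SplunkHome\etc\apps\my_app\bin\inventory.ps1"
schedule = 0 6 * * *
sourcetype = Windows:Inventory
index = windows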
Team, I am a bit new to Splunk and need help pulling the ERR message from the sample raw data below.

{"hosting_environment": "nonp", "application_environment": "nonp", "message": "[20621] 2024/11/14 12:39:46.899958 [ERR] 10.25.1.2:30080 - pid:96866" - unable to connect to endpoint , "service": "hello world"}

Thanks!
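One way to pull the level and detail out of the message field at search time is rex; a sketch, where the index and sourcetype are placeholders (and assuming the JSON message field is already extracted, e.g. via KV_MODE=json or spath):

index=my_index sourcetype=my_json_sourcetype "[ERR]"
| rex field=message "\[(?<level>[A-Z]+)\]\s+(?<err_detail>.+)"
| search level=ERR
| table _time host level err_detail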
Hi Team, I have a Splunk query that I am testing for a ServiceNow data extract.

index=snow "INC783" | search dv_state="In Progress" OR dv_state="New" OR dv_state="On Hold" | stats max(_time) as Time latest(dv_state) as State by number, dv_priority | fieldformat Time=strftime(Time,"%Y-%m-%d %H:%M:%S") | table number, Time, dv_priority, State

The challenge with this query is that the output lists all the states for the particular incident, even though I tried to filter for only the latest state and max time:

number    Time                   dv_priority     State
INC783    2024-11-13 16:56:14    1 - Critical    In Progress
INC783    2024-11-13 17:00:03    3 - Moderate    On Hold

The data must only show the latest row, which should be the one with "On Hold". I have tried multiple methods but it keeps showing all rows. How can I achieve this?

Thanks, Jerin V
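The duplicate rows come from the by clause: stats ... by number, dv_priority produces one row per distinct (number, dv_priority) pair, so each priority keeps its own latest state. A sketch that groups by number only and carries the priority along with latest() (untested against the actual data):

index=snow "INC783" dv_state IN ("In Progress", "New", "On Hold")
| stats max(_time) AS Time latest(dv_state) AS State latest(dv_priority) AS dv_priority BY number
| fieldformat Time=strftime(Time, "%Y-%m-%d %H:%M:%S")
| table number Time dv_priority State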
Hi, I have deployed AppServerAgent-1.8-24.9.0.36347 with the following settings, but the agent is showing errors and not getting loaded.

ENV APPDYNAMICS_CONTROLLER_HOST_NAME="abc"
ENV APPDYNAMICS_CONTROLLER_PORT="443"
ENV APPDYNAMICS_AGENT_ACCOUNT_NAME="abc"
ENV APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY="your-access-key"
ENV APPDYNAMICS_AGENT_APPLICATION_NAME="-----------"
ENV APPDYNAMICS_AGENT_TIER_NAME="-----------"
ENV APPDYNAMICS_AGENT_NODE_NAME="your-node-name"

The agent is not getting loaded and gives an error. Is it because the application name is not getting resolved, or do I need to make some other configuration?

2024-11-14 13:02:18 OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
2024-11-14 13:02:18 Java 9+ detected, booting with Java9Util enabled.
2024-11-14 13:02:18 Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:18 Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:18 Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_NODE_NAME] for node name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using selfService [false]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using selfService [false]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using ephemeral node setting [false]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using application name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using tier name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using node name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:19 Install Directory resolved to[/opt/appdynamics]
2024-11-14 13:02:19 getBootstrapResource not available on ClassLoader
2024-11-14 13:02:20 Class with name [com.ibm.lang.management.internal.ExtendedOperatingSystemMXBeanImpl] is not available in classpath, so will ignore export access.
2024-11-14 13:02:20 Class with name [jdk.internal.util.ReferencedKeySet] is not available in classpath, so will ignore export access.
2024-11-14 13:02:20 Failed to locate module [jdk.jcmd]
2024-11-14 13:02:20 Failed to locate module [jdk.attach]
2024-11-14 13:02:23 [AD Agent init] Thu Nov 14 07:32:23 GMT 2024[DEBUG]: JavaAgent - Setting AgentClassLoader as Context ClassLoader
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: JavaAgent - Low Entropy Mode: Attempting to swap to non-blocking PRNG algorithm
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: JavaAgent - UUIDPool size is 10
2024-11-14 13:02:25 Agent conf directory set to [/opt/appdynamics/ver24.9.0.36347/conf]
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: JavaAgent - Agent conf directory set to [/opt/appdynamics/ver24.9.0.36347/conf]
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver is running
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_NODE_NAME] for node name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [false]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [false]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using ephemeral node setting [false]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using application name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using tier name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using node name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver finished running
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Agent runtime directory set to [/opt/appdynamics/ver24.9.0.36347]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Agent node directory set to [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:26 Agent runtime conf directory set to /opt/appdynamics/ver24.9.0.36347/conf
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Agent runtime conf directory set to /opt/appdynamics/ver24.9.0.36347/conf
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: JavaAgent - JDK Compatibility: 1.8+
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: JavaAgent - Using Java Agent Version [Server Agent #24.9.0.36347 v24.9.0 GA compatible with 4.4.1.0 r8d55fc625d557c4c19f7cf97088ecde0a2e43e82 release/24.9.0]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: JavaAgent - Running IBM Java Agent [No]

Thanks & Regards
Rupinder Kaur
Hello experts, I'm a Splunk newbie. I am using the Jira Service Desk Simple Add-on to send Splunk alerts to Jira tickets, and I have confirmed that Splunk alerts are successfully raised as Jira tickets. However, for the same alert, one ticket sometimes receives the custom field value correctly while another gets no value at all. In Splunk I can confirm that the value exists, but it is not carried over to the ticket. No matter how much I searched, I couldn't find the reason. If the Jira and Splunk field mappings were incorrect, no ticket should get the value, yet it fails only for specific tickets. What could the problem be? Example below: the Client field always has a value, but as shown in the images, the customer value is missing on some tickets.

[Error Ticket]

[Normal Ticket]
Hi guys, after adding the [clustering] stanza and [replication_port://9887] for the indexer cluster, I am getting the error below:

ERROR - Waiting for the cluster manager to come up... (retrying every second)

The service is running, but it gets stuck at this point:

Waiting for web server at http://<ip>:8000 to be available.

Later I got this warning:

WARNING: web interface does not seem to be available!

How do I fix this issue? Ports 8000 and 9887 are already open, and I've set the same pass4SymmKey on all 3 servers in the indexer cluster.
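For comparison, a minimal sketch of the server.conf settings involved on each side of the cluster (factors, key, and IPs are placeholders). The "waiting for the cluster manager" message generally means a peer cannot reach, or is not pointed at, a node that is actually configured as the cluster manager, so the manager_uri and the manager's own [clustering] stanza are the first things to compare against something like this:

# server.conf on the cluster manager
[clustering]
mode = manager
replication_factor = 2
search_factor = 2
pass4SymmKey = <shared key>

# server.conf on each indexer (peer)
[clustering]
mode = peer
manager_uri = https://<cluster-manager-ip>:8089
pass4SymmKey = <shared key>

[replication_port://9887]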
Our add-on requires dateparser, and dateparser in turn requires the regex library. I added the libraries to my add-on (built with Add-on Builder 4.3); however, it fails on Splunk 9.2 with:

File "/opt/splunk/etc/apps/.../regex/_regex_core.py", line 21, in <module>
import regex._regex as _regex
ModuleNotFoundError: No module named 'regex._regex'

If I try on Splunk 9.3 it works fine. I know the Python version changed from 3.7 to 3.9 in Splunk 9.3, but regex version 2024.4.16 seems to support Python 3.7. I would appreciate any insight on how to solve this issue.
Hi everyone, I’m working with Splunk IT Service Intelligence (ITSI) and want to automate the creation of maintenance windows using a scheduled search in SPL. Ideally, I’d like to use the rest command within SPL to define a maintenance window, assign specific entities and services to it, and have it run on a schedule. Is it possible to set up maintenance windows with entities and services directly from SPL? If anyone has sample SPL code or guidance on setting up automated maintenance windows, it would be very helpful! Thanks in advance!
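The SPL rest command is read-oriented, so writes to ITSI are usually done against the ITSI REST API from a script (for example a scripted or custom alert action triggered by the scheduled search). A heavily hedged sketch; the endpoint and field names below are drawn from the ITSI maintenance services interface and should be verified against the ITSI REST API docs for your version:

# create a maintenance window via the ITSI REST API (times are epoch seconds; keys are placeholders)
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/SA-ITOA/maintenance_services_interface/maintenance_calendar \
  -X POST -H "Content-Type: application/json" \
  -d '{
        "title": "Automated window",
        "start_time": 1731628800,
        "end_time": 1731632400,
        "objects": [
          {"_key": "<service_key>", "object_type": "service"},
          {"_key": "<entity_key>", "object_type": "entity"}
        ]
      }'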