All Posts

Please share your knowledge. Splunk 9.4 reference: https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Serverconf

I'm trying to set SHC replication to mTLS, but it's not working. Alerts created in Splunk Web are being replicated. I'm using a self-signed certificate.

"port 9887 with SSL" is output in splunkd.log on search-head-1, search-head-2, and search-head-3:

08-06-2025 08:05:34.894 +0000 INFO TcpInputProc [148404 TcpListener] - Creating replication data Acceptor for IPv4 port 9887 with SSL

However, "useSSL=false" is output on all search heads:

08-08-2025 02:41:30.425 +0000 INFO SHCRepJob [21691 SHPPushExecutorWorker-0] - Running job=SHPRepJob peer="search-head-2", guid="A5CDBF4C-7F71-4705-9E20-10529800C25E" aid=scheduler__nobody_U3BsdW5rX1NBX0NJTQ__RMD5fe51f0ad1d9fe444_at_1754620680_13_A5CDBF4C-7F71-4705-9E20-10529800C25E, tgtPeer="search-head-1", tgtGuid="79BB42FF-7436-4966-B8C8-951EEF67C1AD", tgtRP=9887, useSSL=false

The correct response is returned with the openssl command. The same self-signed certificate is also used on ports 8000 and 8089.

$ sudo openssl s_client \
    -connect <host IP>:9887 \
    -CAfile /opt/splunk/etc/auth/mycerts/<myRootCA>.pem \
    -cert /opt/splunk/etc/auth/mycerts/<mycert>.pem \
    -key /opt/splunk/etc/auth/mycerts/<mykey>.key
Verify return code: 0 (ok)

# /opt/splunk/etc/system/local/server.conf
[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /opt/splunk/etc/auth/mycerts/<myRootCA.pem>
serverCert = /opt/splunk/etc/auth/mycerts/<combined certificate.pem>
requireClientCert = true
sslVersions = tls1.2
sslCommonNameToCheck = <search-head-1>,<search-head-2>,<search-head-3>,...
sslPassword = <RootCA password>

[replication_port://9887]
[replication_port-ssl://9887]
disabled = false
serverCert = /opt/splunk/etc/auth/mycerts/<combined certificate.pem>
requireClientCert = true
sslVersions = tls1.2
sslCommonNameToCheck = <search-head-1>,<search-head-2>,<search-head-3>

I use Google Translate to translate Japanese into English.
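One thing that stands out in the config above is that both [replication_port://9887] and [replication_port-ssl://9887] are defined for the same port; if the plain stanza takes effect, that would be consistent with useSSL=false in the SHCRepJob log line. A minimal hedged sketch of an SSL-only version, under the assumption that the plain stanza is unintended (stanza and attribute names are taken from the config above and the linked server.conf reference; paths and host names are placeholders):

# /opt/splunk/etc/system/local/server.conf - replication over mTLS only
# (sketch assuming the plain [replication_port://9887] stanza is removed
# so it cannot shadow the SSL one)
[replication_port-ssl://9887]
disabled = false
serverCert = /opt/splunk/etc/auth/mycerts/<combined certificate.pem>
requireClientCert = true
sslVersions = tls1.2
sslCommonNameToCheck = <search-head-1>,<search-head-2>,<search-head-3>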
We are looking at Power Platform audit logs and want to confirm that these logs will automatically show up in Splunk if they are available in Purview.
We upgraded from 9.4.3 to 10.0, and now all the Splunk forwarders are crashing because of the splunk-winevtlog service. Is there a fix? Is anyone else experiencing these issues? I have had to disable all Splunk instances because the service has a memory leak.
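A narrower stopgap than disabling whole instances might be to turn off just the Windows event log inputs on the affected forwarders (a hedged sketch; the channel below is only an example, adjust to the stanzas actually enabled in your inputs.conf):

# inputs.conf on an affected forwarder - sketch only
# splunk-winevtlog is the helper process behind WinEventLog inputs,
# so disabling the stanzas should stop it from being spawned
[WinEventLog://Security]
disabled = 1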
There was a similar discussion on the Slack side some time ago. Maybe this can lead you in the right direction? https://splunkcommunity.slack.com/archives/CD9CL5WJ3/p1727111432487429
This is the screenshot the user sent - they are just trying to share Read/Write with other xxx-power users, and they don't have the permissions.
Hi @livehybrid, Thanks for your responses. I can confirm xxx-power has write access to the app. The user attempting to share these saved searches is an xxx-power user, so why would only the xxx-admin user be able to share the search if it was created by xxx-power?
Hey @DanielPriceUK @dmoberg, I think the feature is not in production yet, as confirmed by Elizabeth Li on the following community page - https://community.splunk.com/t5/Dashboards-Visualizations/Dashboard-Studio-How-to-hide-export-and-full-screen-that-come-up/m-p/688222. You can also try the workaround provided in the same post. I haven't tried it myself, but you can test it out.

Thanks, Tejas.
Hello @hettervik, From the scenario, it seems that collect is the only way to achieve your use case. Try filtering out the events you don't need, optimizing the SPL search, and using the collect command so that you do not miss the required events. However, if you want to migrate the buckets instead, I've found an older community post that might help you - https://community.splunk.com/t5/Installation/Is-it-possible-to-migrate-indexed-buckets-to-a-different-index/td-p/91085. I would be quite cautious with that approach, though. I haven't tried it myself, and copying the buckets might bring unwanted data to the new index. Test it with one of the smaller buckets first and check whether you get the desired result. IMO, collect is the best way to move forward. You can use the following SPL query to keep the original parsing configuration:

index=old_index
| <<filter out the events required>>
| fields host source sourcetype _time _raw
| collect index=new_index output_format=hec

Thanks, Tejas.

--- If the above solution helps, an upvote is appreciated!
Hi @livehybrid, Thanks a lot for getting back; I got it fixed. Resolution: updated the port details in the URL.
Hi Community, I'm in the middle of installing different apps in SOAR; I'm done with BMC Helix, but I'm unable to find the tarball for Ansible. Has anyone installed Ansible in SOAR? Can you please tell me where I can download and install it from? An SOP would be helpful.

Splunk App for SOAR
Ansible Monitoring and Diagnostics
Ansible Tower
Playbooks-On-Rails App (powered by Ansible)
We have an index with a ton of data. A new use for the data has emerged, so now we want a longer retention time on some of the data in the index. We don't want to simply increase the retention time on the whole index, because the storage cost is too high. Instead, we want to create a new index with longer retention, pick out the events we need, and copy them to the new index. This is on an indexer cluster. In theory, we could use collect, like this:

index=oldindex field=the_events_we_need
| collect index=newindex

However, because the index is so big, we're having problems running this search. Even though we run it bit by bit, we still end up missing events in the new index - possibly due to performance or memory limits, or bucket issues. Is there a better and more reliable way of doing this?
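One hedged way to make the bit-by-bit approach verifiable is to collect one fixed time window at a time and then compare counts for the same window (a sketch; index names come from the question, the one-day window is arbitrary, and it assumes collect keeps the original _time, which it does when the events carry one):

index=oldindex field=the_events_we_need earliest=-30d@d latest=-29d@d
| collect index=newindex

Then a count on each side for the same window shows whether anything was dropped before moving on:

index=oldindex field=the_events_we_need earliest=-30d@d latest=-29d@d | stats count
index=newindex earliest=-30d@d latest=-29d@d | stats count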
So apparently you have two different event formats received from the same source, right? One - and this one is properly parsed - contains both an absolute timestamp and a timezone offset. The other contains only a time without a timezone definition, so depending on your SC4S/Splunk configuration the timestamp might simply be treated as GMT, with the +07:00 offset then applied to it. I'm not an expert on SC4S, but AFAIR it expects a single type of event for a single source, so to "split" your processing path you need to do some additional conditional routing in the underlying syslog-ng configuration.
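A rough sketch of what such a conditional split can look like in plain syslog-ng terms (hedged: the match pattern, source, and destination names are hypothetical, and SC4S has its own conventions for wiring filters in):

# route messages that carry an explicit tz= field separately from
# those that do not (all names below are made up for illustration)
filter f_has_tz { match("tz=" value("MESSAGE")); };

log {
    source(s_network);
    if (filter(f_has_tz)) {
        destination(d_parsed_with_tz);
    } else {
        destination(d_needs_tz_fixup);   # e.g. assume GMT+7 here
    };
};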
I have installed and configured DB Connect on my deployer and added identities and connections. Then I copied etc/apps/splunk_app_db_connect to /etc/shcluster/apps/ and pushed the bundle to the shcluster, as per the doc: https://help.splunk.com/en/splunk-cloud-platform/connect-relational-databases/deploy-and-use-splunk-db-connect/4.0/install-splunk-db-connect/install-and-configure-splunk-db-connect-on-a-splunk-enterprise-on-premise-distributed-platform-deployment. The app is deployed, but an identity.dat file is generated every 30s on the SH members, and it differs from the one on my deployer. The DB Connect GUI on the SH members gives me an error: "Identity password is corrupted." What did I miss?
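For reference, a hedged sketch of the steps described above plus one check, assuming a default /opt/splunk layout (host and credentials are placeholders):

# on the deployer: stage the configured app, then push the bundle
cp -r /opt/splunk/etc/apps/splunk_app_db_connect /opt/splunk/etc/shcluster/apps/
/opt/splunk/bin/splunk apply shcluster-bundle -target https://<sh-member>:8089 -auth admin:<password>
# assumption worth verifying: encrypted identities only decrypt on members
# whose splunk.secret matches the instance that encrypted them
md5sum /opt/splunk/etc/auth/splunk.secret   # compare on deployer and each member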
Yes, this is really needed. It can be done in the Classic dashboards, but we also need this for Studio.
Hello, Here is the raw local log of FortiAnalyzer; its timezone is also GMT+7. I checked the logs from FortiGate, which are forwarded to FortiAnalyzer and then to Splunk. When comparing these with the local logs of FortiAnalyzer, I noticed a key difference: the FortiGate logs contain timestamp, eventtime, and timezone, while the local FortiAnalyzer logs only show time.
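If the local FortiAnalyzer events genuinely carry no timezone, one possible fix is to pin the timezone per sourcetype in props.conf (hedged: this only applies where the timestamp is parsed on the Splunk side rather than already assigned by SC4S, and the sourcetype name below is hypothetical):

# props.conf on the parsing tier - sketch only, substitute your real sourcetype
[fortianalyzer:local]
# POSIX convention: Etc/GMT-7 means UTC+7, matching the appliance clock
TZ = Etc/GMT-7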
Hi @silverKi, Version 6.0.2 of the TA is no longer available on Splunkbase, as it's now a really old version. Even if you do obtain it, there is no guarantee that it will support 9.2.0, because 6.0.2 of the TA was released 6 years ago, way before Splunk 9.2.x! I did find https://github.com/it-kombinat/Splunk_TA_nix, which is a 3rd-party copy of the TA at version 6.0.2 - however, this is not a trusted source, so you might be better off contacting support via https://www.splunk.com/en_us/about-splunk/contact-us.html#customer-support or https://www.splunk.com/support

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
As those are not generally available on Splunkbase and they are made by Splunk, you could try to get them by asking Splunk support.
How can I get the Splunk Add-on for Unix and Linux in versions 9.2.0 and 6.0.2?
@LOP22456 The error highlights that only the admin role currently has write access to this specific saved search, so the saved search permissions need to be set correctly. Go to your saved search -> Edit Permissions -> change Write access to include xxx-power. Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
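The same change can also be made in the app's metadata file (a hedged sketch; the stanza name must match your saved search, and the role names come from this thread):

# metadata/local.meta inside the app that owns the saved search
[savedsearches/<your saved search name>]
access = read : [ * ], write : [ admin, xxx-power ]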
@SN1 stats latest(SensorHealthState) by DeviceName is far more efficient than dedup, especially when you're only interested in the most recent state. It reduces the dataset early and avoids unnecessary processing. Note that any field used after stats has to be carried through it, which is why DeviceDynamicTags is kept with latest() as well. Try the below:

index=endpoint_defender source="AdvancedHunting-DeviceInfo" (DeviceType=Workstation OR DeviceType=Server)
| stats latest(SensorHealthState) as SensorHealthState latest(DeviceDynamicTags) as DeviceDynamicTags latest(_time) as _time by DeviceName
| search SensorHealthState IN ("active", "Inactive", "Misconfigured", "Impaired communications", "No sensor data")
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| rex field=DeviceName "^(?<Hostname>[^.]+)"
| lookup lkp-GlobalIpRange.csv code OUTPUT "Company Code", Region
| eval Region=mvindex(Region, 0)
| search DeviceName="bie-n1690.emea.duerr.int"
| table Hostname code "Company Code" DeviceName _time Region SensorHealthState

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!