All Posts


Good afternoon, I am very new to AppDynamics and have a lot of questions. I am in the middle of setting up on-premises, self-hosted virtual appliances directly on ESXi hosts. I have the hosts, am attempting to configure them, and have the following questions. I've also reviewed the online documentation for securing the platform, but do not see any references to any of my questions: https://docs.appdynamics.com/appd/onprem/24.x/25.2/en/secure-the-platform

Where may I find information for the following?
1. Enabling FIPS and its related settings?
2. Enabling FIPS and its related settings on private synthetic agents?
3. How to set up smart card authentication?

Any information you may provide is greatly appreciated. Thank you, Stuart
I figured out the issue. It was a permissions issue. I needed to put splunkfwd on the appropriate access lists. I gave splunkfwd read access to /var/log/audit/audit.log and execute access to /var/log/audit. Now splunkfwd can execute the script either manually from the command line or as a scheduled scripted input run by the Splunk UF. In both cases, the script runs without error whether there is a pre-existing checkpoint in place or not.

I understand that the Splunk UF has the CAP_DAC_READ_SEARCH capability, which allows it to read files it normally wouldn't have access to. What I don't understand is why that capability worked fine when I asked it to generate the initial checkpoint, but then suddenly stopped working the moment I asked it to use a pre-existing checkpoint. Is it possible that the CAP_DAC_READ_SEARCH capability doesn't extend to reading the inode properties of each file? If that were the case, it would explain why the initial ausearch went fine (when the inode doesn't matter, because ausearch is just ingesting all of the audit.log files regardless of inode), but then, when ausearch needs to look for the specific audit.log file that matches the inode listed in the checkpoint file, it can't do so.

Thank you to @PickleRick and @isoutamo for your suggestions and assistance. I couldn't have done it without you both.
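For reference, a minimal sketch of the kind of permission change described above, assuming it was done with POSIX ACLs (the exact mechanism, ACLs versus group membership, is an assumption on my part; the paths are the ones from the post):

# run as root; grants the splunkfwd user traversal on the audit directory
# and read on the current audit log
setfacl -m u:splunkfwd:x /var/log/audit
setfacl -m u:splunkfwd:r /var/log/audit/audit.log

# optional: a default ACL so newly created audit.log files stay readable
setfacl -d -m u:splunkfwd:r /var/log/audit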
I needed to preface the view name with /app/SplunkEnterpriseSecuritySuite/

Sample:
Investigate Identity Artifacts - "/app/SplunkEnterpriseSecuritySuite/ident_by_name"
Investigate Asset Artifacts - "/app/SplunkEnterpriseSecuritySuite/asset_artifacts"
Investigate File/Process Artifacts - "/app/SplunkEnterpriseSecuritySuite/file_artifacts"
What is the best app to detect unused data? Any suggestions?
Has anyone had any luck getting OpenAI Compliance API logs into Splunk Cloud? This API ships logs that provide visibility into prompts and replies with ChatGPT. I'm looking to ingest this data to monitor for possible sensitive or confidential data being uploaded. OpenAI has built-in integrations with several applications: https://help.openai.com/en/articles/9261474-compliance-api-for-enterprise-customers. Surprisingly, Splunk is not one of them. My question is: has anyone had any luck getting these logs into Splunk? I have the API key from OpenAI, but I'm struggling to create a solution to ingest these logs into Splunk, and I'm honestly surprised there isn't a native application built by Splunk for this.
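Not a full answer, but in the absence of a native app the glue usually looks like the rough sketch below: poll the Compliance API with the key, then forward the records to a Splunk HTTP Event Collector token. The Compliance API URL is a placeholder (take the real path from OpenAI's documentation); the HEC call itself is the standard /services/collector/event endpoint.

# placeholder endpoint - substitute the real Compliance API path from OpenAI's docs
curl -s -H "Authorization: Bearer $OPENAI_API_KEY" \
  "https://api.openai.com/<compliance-endpoint-placeholder>" > events.json

# forward to Splunk HTTP Event Collector (standard HEC endpoint and auth header)
curl -k "https://<splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk $HEC_TOKEN" \
  -d "{\"sourcetype\": \"openai:compliance\", \"event\": $(cat events.json)}"

In practice this would sit in a scheduled script or a modular input that keeps a cursor so the same records aren't pulled twice.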
Disabled AD object resolution by setting evt_resolve_ad_obj = 0 in the Splunk_TA_windows app; the logs have now ceased.
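For anyone else hitting this, a minimal sketch of the change, assuming it goes in the add-on's local inputs.conf (the stanza name below is an example; apply it to whichever WinEventLog input is generating the messages):

# $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local/inputs.conf
[WinEventLog://Security]
# stop resolving AD objects (SIDs/GUIDs) during event collection
evt_resolve_ad_obj = 0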
In a Splunk dashboard, I'm using the custom visualization "3D Graph Network Topology Viz". The goal is that when clicking on a node, a token is set so another panel can display related details.

The issue is:
- When configuring On Click → Manage tokens on this dashboard, Splunk shows the message: "This custom visualization might not support drilldown behavior."
- When clicking on a node, the $click.value$ token does not update and literally remains as $click.value$, which confirms that it's not sending dynamic values.
- The only token that actually receives data is $click.name$, which returns the node's name, but not other values I'd like to capture.

Has anyone successfully implemented full drilldown support in this visualization, or does anyone know how to extend it so that more tokens (like $click.value$) can be populated when clicking on a node?
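In case it helps to see the shape of the partial workaround, here is a minimal sketch built on the one token the viz does emit, assuming a classic Simple XML dashboard; the viz type attribute and the search are placeholders to be replaced with whatever the existing panel uses:

<viz type="your_existing_viz_type">
  <search>
    <query>| inputlookup my_edges.csv</query>
  </search>
  <drilldown>
    <!-- $click.name$ is the only token observed to populate on node clicks -->
    <set token="selected_node">$click.name$</set>
  </drilldown>
</viz>

Another panel can then depend on $selected_node$. Passing $click.value$ or additional fields through would likely require changes in the visualization's own source (where it invokes the drilldown handler), not in the dashboard XML.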
Please share your knowledge.

Splunk 9.4 reference: https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Serverconf

I'm trying to set SHC replication to mTLS, but it's not working. Alerts created in Splunk Web are being replicated. I'm using a self-signed certificate.

In splunkd.log on search-head-1, search-head-2, and search-head-3, "port 9887 with SSL" is output:

08-06-2025 08:05:34.894 +0000 INFO TcpInputProc [148404 TcpListener] - Creating replication data Acceptor for IPv4 port 9887 with SSL

However, "useSSL=false" is output on all search heads:

08-08-2025 02:41:30.425 +0000 INFO SHCRepJob [21691 SHPPushExecutorWorker-0] - Running job=SHPRepJob peer="search-head-2", guid="A5CDBF4C-7F71-4705-9E20-10529800C25E" aid=scheduler__nobody_U3BsdW5rX1NBX0NJTQ__RMD5fe51f0ad1d9fe444_at_1754620680_13_A5CDBF4C-7F71-4705-9E20-10529800C25E, tgtPeer="search-head-1", tgtGuid="79BB42FF-7436-4966-B8C8-951EEF67C1AD", tgtRP=9887, useSSL=false

The correct response is returned with the openssl command. The created self-signed certificate is also used on ports 8000 and 8089.

$ sudo openssl s_client \
  -connect <host IP>:9887 \
  -CAfile /opt/splunk/etc/auth/mycerts/<myRootCA>.pem \
  -cert /opt/splunk/etc/auth/mycerts/<mycert>.pem \
  -key /opt/splunk/etc/auth/mycerts/<mykey>.key
Verify return code: 0 (ok)

# /opt/splunk/etc/system/local/server.conf
[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /opt/splunk/etc/auth/mycerts/<myRootCA.pem>
serverCert = /opt/splunk/etc/auth/mycerts/<combined certificate.pem>
requireClientCert = true
sslVersions = tls1.2
sslCommonNameToCheck = <search-head-1>,<search-head-2>,<search-head-3>,...
sslPassword = <RootCR password>

[replication_port://9887]

[replication_port-ssl://9887]
disabled = false
serverCert = /opt/splunk/etc/auth/mycerts/<combined certificate.pem>
requireClientCert = true
sslVersions = tls1.2
sslCommonNameToCheck = <search-head-1>,<search-head-2>,<search-head-3>

I use Google Translate to translate Japanese into English.
We are looking at Power Platform audit logs and want to ensure that these logs will automatically show up in Splunk if they are available in Purview.
We upgraded from 9.4.3 to 10.0, and now all the Splunk forwarders are crashing because of the splunk-winevtlog service. How can I fix this? Is there a fix? Is anyone else experiencing these issues? I have had to disable all Splunk instances because the service has a memory leak.
There was the same kind of discussion on the Slack side some time ago. Maybe this can lead you in the right direction? https://splunkcommunity.slack.com/archives/CD9CL5WJ3/p1727111432487429
This is the screenshot the user sent - they are just trying to share Read/Write with other xxx-power users, and they don't have the permissions.
Hi @livehybrid,

Thanks for your responses. I can confirm xxx-power has write access to the app. The user that is attempting to share these saved searches is an xxx-power user, so how would it be that only the xxx-admin user can share the search if it was created by xxx-power?
Hey @DanielPriceUK @dmoberg, I think the feature is not in production yet, as confirmed by Elizabeth Li on the following community page - https://community.splunk.com/t5/Dashboards-Visualizations/Dashboard-Studio-How-to-hide-export-and-full-screen-that-come-up/m-p/688222. You can also try the workaround provided in the same post. I haven't tried it myself, but you can test it out.

Thanks,
Tejas.
Hello @hettervik,

From the scenario, it seems that collect is the only way to achieve your use case. You'll have to filter out the events you don't need, optimize the SPL search, and use the collect command so that you do not miss the required events.

However, if you want to migrate the buckets, I've found one of the older community posts that might help you - https://community.splunk.com/t5/Installation/Is-it-possible-to-migrate-indexed-buckets-to-a-different-index/td-p/91085. I would be quite cautious with that approach; I haven't tried it myself, and copying the buckets might bring unwanted data to the new index. You could try it with one of the smaller buckets first and check whether you get the desired result. IMO, collect is the best way to move forward.

You can use the following SPL query to keep the original parsing configuration:

index = old_index
| <<filter out the events required>>
| fields host source sourcetype _time _raw
| collect index=new_index output_format=hec

Thanks,
Tejas.

---
If the above solution helps, an upvote is appreciated!
Hi @livehybrid, Thanks a lot for getting back; I got it fixed. Resolution: updated the port details in the URL.
Hi Community, I'm in the middle of installing different apps in SOAR; I'm done with BMC Helix, however I'm unable to find the tarball for Ansible. Has anyone installed Ansible in SOAR? Can you please help with where I can download and install it? An SOP would be helpful.

Splunk App for SOAR
Ansible Monitoring and Diagnostics
Ansible Tower
Playbooks-On-Rails App (powered by Ansible)
We have an index with a ton of data. A new use for the data has emerged, so now we want a longer retention time on some of the data in the index. We don't want to simply increase the retention time on the index, because the storage cost is too high. We want to create a new index with a longer retention, pick out the events we need, and copy them to the new index. This is on an indexer cluster.

In theory, we could use collect, like this:

index=oldindex field=the_events_we_need
| collect index=newindex

However, because the index is too big, we're having problems running this search. Even though we run it bit by bit, we still end up missing events in the new index. It could be due to performance or memory limits, or bucket issues. Is there a better and more reliable way of doing this?
So apparently you have two different event formats received from the same source, right? One - and this one is properly parsed - contains both an absolute timestamp and a timezone offset. The other one contains only the time without a timezone definition, so depending on your SC4S/Splunk configuration it might simply be treated as GMT, with the +7:00 offset then applied to it.

I'm not an expert on SC4S, but AFAIR it expects a single event format for a single source, so to "split" your processing path you need to do some additional conditional routing in the underlying syslog-ng configuration.
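To illustrate the idea in plain syslog-ng terms (the source, destination, and pattern below are placeholders, and SC4S wraps this in its own app-parser framework, so the exact file and syntax there will differ):

# route events that carry an explicit timezone offset separately
filter f_has_tz_offset {
    message("[+-][0-9]{2}:?[0-9]{2}" type(pcre));
};

log {
    source(s_network);
    filter(f_has_tz_offset);
    destination(d_offset_aware);
};

log {
    source(s_network);
    filter { not filter(f_has_tz_offset); };
    destination(d_no_offset);   # assume/apply the correct local timezone for these
};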
I have installed and configured DB Connect under my deployer, added identities and connections, then copied etc/apps/splunk_app_db_connect to /etc/shcluster/apps/ and pushed the bundle to the shcluster, as per the doc: https://help.splunk.com/en/splunk-cloud-platform/connect-relational-databases/deploy-and-use-splunk-db-connect/4.0/install-splunk-db-connect/install-and-configure-splunk-db-connect-on-a-splunk-enterprise-on-premise-distributed-platform-deployment

The app is deployed, but the identity.dat file is regenerated every 30s on the SH members and is different from the one on my deployer. The DB Connect GUI on the SH members gives me an error: "Identity password is corrupted." What did I miss?