All Topics



Hello All, Thanks for the help this forum provides. I have a table in Dashboard Studio. I can set the background color for the header. I would like to change the color of the text in the table header without using dark mode, which sets the header text to white (and the table rows too). I would like the text color in my header to be white (or some other color) when the dashboard is set to light mode. Is there a way to do this? Thanks, Eholz1
Hello Splunk Community, I'm working in the Behavioral Profiling app to create an Anomaly Scoring Rule. In the Define Indicator Source step, I have successfully selected my Behavioral Indicator (e.g., "Amount Transaction"), but the Scoring Field dropdown is disabled / showing a red mark, and I'm unable to select any value.

Details:
- Behavioral Indicator: Amount Transaction
- Data is visible when I run the same SPL in Search & Reporting.
- Time Range: Last Day (also tried other ranges)
- Using the default fields from my dataset (contains account, amount, _time).
- The Scoring Field dropdown does not show any options.

What I have tried:
- Verified the field exists in my data.
- Changed the Time Range to ensure data is available.
- Recreated the Behavioral Indicator.

Questions:
- What specific requirements or field types does the Scoring Field expect?
- Do I need to modify the Behavioral Indicator definition or SPL so that this dropdown is populated?

Any guidance or examples would be greatly appreciated. Thanks in advance!

The data that I have provided for profiling is as follows:

timestamp,account,amount
2025-08-11 11:25:56,ACC1001,2500
2025-08-11 11:25:56,ACC1001,3000
2025-08-11 11:25:56,ACC1001,5000
2025-08-11 11:25:56,ACC1002,1500
2025-08-11 11:25:56,ACC1002,2000
2025-08-11 11:25:56,ACC1003,8000
2025-08-11 11:25:56,ACC1003,4000
2025-08-11 11:25:56,ACC1004,12000
2025-08-11 11:25:56,ACC1005,600
2025-08-11 11:25:56,ACC1005,750
2025-08-11 11:25:56,ACC1006,5000
2025-08-11 11:25:56,ACC1006,7000
Hi community, I have a question on counting the number of events per values() value in the stats command. For example, having events with src_ip, user (and a couple more) fields, I would like to count each user's occurrences in the raw log. Example as below.

| stats values(user) as values_user by src_ip

Example events:
_time        user     src_ip
2025-08-11   ronald   192.168.2.5
2025-08-11   jasmine  192.168.2.5
2025-08-11   tim      192.168.2.6
2025-08-11   ronald   192.168.2.5

I would like to have the result as:
values_user       count_values_user    src_ip
ronald jasmine    ronald:2 jasmine:1   192.168.2.5
tim               tim:1                192.168.2.6
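One possible shape of an answer (a sketch only, not verified against this dataset; `index=your_index` is a placeholder for the poster's base search): count per user per src_ip first, build the user:count pairs with eval, then aggregate both multivalue fields per src_ip.

```spl
index=your_index
| stats count by src_ip, user
| eval user_count = user . ":" . count
| stats values(user) as values_user, values(user_count) as count_values_user by src_ip
```

Because both `values()` calls are grouped by the same src_ip, values_user and count_values_user line up as parallel multivalue fields, matching the desired output above.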
I am trying to learn SIEM tech and am at the stage where I'm trying to use/set up Splunk CIM. My pipeline uses fake logs and I am trying to get them to show up in the Authentication data model. However, it seems like the authentication tag is not being applied. (Files shortened.)

My eventtypes.conf:

[account_locked]
search = sourcetype="logstream" action="failure" signature="Account locked"
tags = authentication, failure, account_locked

My tags.conf:

[eventtype=account_locked]
authentication = enabled
failure = enabled
account_locked = enabled

And my props.conf:

[logstream]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TIME_PREFIX = "\"_time\": \"""
MAX_TIMESTAMP_LOOKAHEAD = 30
INDEXED_EXTRACTIONS = json
FIELDALIAS-src_user_for_user = user AS src_user
FIELDALIAS-src_for_src = src AS src
FIELDALIAS-dest_for_dest = dest AS dest
FIELDALIAS-app_for_app = app AS app
FIELDALIAS-dest_for_dest = dest AS dest

Now what is really stumping me here is that no event types are being recognized. However, if I search for those logs using the same search string I used for the event type, I get the results and logs I am looking for:

sourcetype="logstream" action="failure" signature="Account locked"

A couple of things I confirmed:
- The HEC token is correct.
- The field aliases are compliant with the Authentication data model.
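For comparison, a minimal working eventtype/tag pair usually looks like the sketch below (stanza names mirror the post; this is the conventional layout, not a confirmed diagnosis of the problem above). In current Splunk versions the tag assignments live only in tags.conf, keyed by the eventtype name, while eventtypes.conf carries just the search definition.

```ini
# eventtypes.conf
[account_locked]
search = sourcetype="logstream" action="failure" signature="Account locked"

# tags.conf
[eventtype=account_locked]
authentication = enabled
failure = enabled
```

Both files must be in an app context whose permissions make them visible to the searching user, and eventtypes are matched at search time, so the search string must match the events exactly as they appear after indexing.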
Hi, We're looking for guidance on the best way to ingest FortiMail Cloud logs into Splunk Cloud. Our current environment includes:
- Cloud: Splunk Cloud, FortiMail Cloud (hosted)
- On-premise: SC4S server, Heavy Forwarder, and FortiAnalyzer

FortiMail Cloud is hosted by Fortinet, so we can't just point it at our SC4S like we would for an on-prem appliance. We do have the option to send logs to our on-prem FortiAnalyzer, but we're unsure whether it's better to:
- Route FortiMail Cloud logs → FortiAnalyzer on-prem → SC4S/HF → Splunk Cloud,
- Send FortiMail Cloud logs directly to SC4S via an external connection, or
- Use another recommended method (e.g., Fortinet APIs, log download scheduling, etc.)

Has anyone implemented a similar setup for FortiMail Cloud? Any best practices or pitfalls to avoid, especially regarding secure transport, parsing, and CIM compliance? Thanks in advance!
How do I enable TLS on the Splunk platform using a certificate from a CA?
Monitor set to pull in a watched log that has no props/transforms configs applied. This would ingest the entire file contents, correct? 
Hello everyone, I'm encountering an issue when trying to enable secure HTTPS access to Splunk Web using an SSL certificate issued by a trusted external CA.

What I did:
- Placed the SSL certificate file (splunkWeb.pem) in the following path: $SPLUNK_HOME/etc/apps/webTLS/certs/splunkWeb.pem
- Edited the web.conf file with the following settings:

[settings]
enableSplunkWebSSL = true
serverCert = $SPLUNK_HOME/etc/apps/webTLS/certs/splunkWeb.pem
privKeyPath = $SPLUNK_HOME/etc/apps/webTLS/certs/splunkWeb.pem

- Restarted the Splunk service.

Issue: After restarting, Splunk hangs during startup and the web interface does not become available over HTTPS.

Questions:
- Are there additional steps required when using an external SSL certificate?
- Is the web.conf configuration correct, especially the privKeyPath pointing to the same .pem file as serverCert?
- Should the private key be in a separate file from the certificate?

Any advice or similar experiences would be greatly appreciated. Thank you in advance for your help!
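For reference, a commonly working layout (a sketch only; the file names and app path are examples, not from the post) keeps the private key in its own PEM file and points privKeyPath at that file, while serverCert holds the server certificate plus any intermediate CA certificates:

```ini
[settings]
enableSplunkWebSSL = true
# server certificate followed by intermediate CA certs, PEM format
serverCert = $SPLUNK_HOME/etc/apps/webTLS/certs/splunkWeb-cert.pem
# private key in a separate PEM file
privKeyPath = $SPLUNK_HOME/etc/apps/webTLS/certs/splunkWeb-key.pem
```

A startup hang with cert and key in one file often comes down to Splunk Web being unable to read the key (wrong format, wrong order of PEM blocks, or a passphrase it cannot supply), so separating the two files is a useful first isolation step.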
Good afternoon, I am very new to AppDynamics and have a lot of questions. I am in the middle of setting up on-premises, self-hosted virtual appliances directly on ESXi hosts. I have the hosts and am attempting to configure them, and have the following questions. I've also reviewed the online documentation for securing the platform, but do not see any references to any of my questions: https://docs.appdynamics.com/appd/onprem/24.x/25.2/en/secure-the-platform

Where may I find information for the following?
1. Enabling FIPS and its related settings?
2. Enabling FIPS and its related settings on private synthetic agents?
3. How to set up smart card authentication?

Any information you may provide is greatly appreciated. Thank you, Stuart
What is the best app to detect unused data? Any suggestions?
Has anyone had any luck getting OpenAI Compliance API logs into Splunk Cloud? This API ships logs that provide visibility into prompts/replies with ChatGPT. We're looking to ingest this data to monitor for possible sensitive/confidential data being uploaded. OpenAI has built-in integrations with several applications: https://help.openai.com/en/articles/9261474-compliance-api-for-enterprise-customers. Surprisingly, Splunk is not one of them. My question is: has anyone had any luck getting these logs into Splunk? I have the API key from OpenAI, but I'm struggling to create a solution to ingest these logs, and I'm honestly surprised there isn't a native application built by Splunk for this.
In a Splunk dashboard, I'm using the custom visualization "3D Graph Network Topology Viz". The goal is that when clicking on a node, a token is set so another panel can display related details.

The issues are:
- When configuring On Click → Manage tokens on this dashboard, Splunk shows the message: "This custom visualization might not support drilldown behavior."
- When clicking on a node, the $click.value$ token does not update and literally remains as $click.value$, which confirms that it's not sending dynamic values.
- The only token that actually receives data is $click.name$, which returns the node's name, but not the other values I'd like to capture.

Has anyone successfully implemented full drilldown support in this visualization, or knows how to extend it so that more tokens (like $click.value$) can be populated when clicking on a node?
Please share your knowledge. Splunk 9.4 reference: https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Serverconf

I'm trying to set SHC replication to mTLS, but it's not working. Alerts created in Splunk Web are being replicated. I'm using a self-signed certificate.

On search-head-1, search-head-2, and search-head-3, splunkd.log outputs "port 9887 with SSL":

08-06-2025 08:05:34.894 +0000 INFO TcpInputProc [148404 TcpListener] - Creating replication data Acceptor for IPv4 port 9887 with SSL

However, "useSSL=false" is output on all search heads:

08-08-2025 02:41:30.425 +0000 INFO SHCRepJob [21691 SHPPushExecutorWorker-0] - Running job=SHPRepJob peer="search-head-2", guid="A5CDBF4C-7F71-4705-9E20-10529800C25E" aid=scheduler__nobody_U3BsdW5rX1NBX0NJTQ__RMD5fe51f0ad1d9fe444_at_1754620680_13_A5CDBF4C-7F71-4705-9E20-10529800C25E, tgtPeer="search-head-1", tgtGuid="79BB42FF-7436-4966-B8C8-951EEF67C1AD", tgtRP=9887, useSSL=false

The correct response is returned with the openssl command. The created self-signed certificate is also used on ports 8000 and 8089.

$ sudo openssl s_client \
  -connect <host IP>:9887 \
  -CAfile /opt/splunk/etc/auth/mycerts/<myRootCA>.pem \
  -cert /opt/splunk/etc/auth/mycerts/<mycert>.pem \
  -key /opt/splunk/etc/auth/mycerts/<mykey>.key
Verify return code: 0 (ok)

# /opt/splunk/etc/system/local/server.conf
[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /opt/splunk/etc/auth/mycerts/<myRootCA.pem>
serverCert = /opt/splunk/etc/auth/mycerts/<combined certificate.pem>
requireClientCert = true
sslVersions = tls1.2
sslCommonNameToCheck = <search-head-1>,<search-head-2>,<search-head-3>,・・・
sslPassword = <RootCR password>

[replication_port://9887]

[replication_port-ssl://9887]
disabled = false
serverCert = /opt/splunk/etc/auth/mycerts/<combined certificate.pem>
requireClientCert = true
sslVersions = tls1.2
sslCommonNameToCheck = <search-head-1>,<search-head-2>,<search-head-3>

I use Google Translate to translate Japanese into English.
We are looking at Power Platform audit logs and want to confirm that these logs will automatically show up in Splunk if they are available in Purview.
We upgraded from 9.4.3 to 10.0 and now all the Splunk forwarders are crashing because of the splunk-winevtlog service. How can I fix this? Is there a fix? Is anyone else experiencing these issues? I have had to disable all Splunk instances because the service has a memory leak.
Hi Community, I'm in the middle of installing different apps in SOAR; I'm done with BMC Helix, however I'm unable to find a tarball for Ansible. Has anyone installed Ansible in SOAR? Can you please help with where I can download and install it? An SOP would be helpful.

- Splunk App for SOAR
- Ansible Monitoring and Diagnostics
- Ansible Tower
- Playbooks-On-Rails App (powered by Ansible)
We have an index with a ton of data. A new use for the data has emerged, so now we want a longer retention time on some of the data in the index. We don't want to simply increase the retention time on the index, because the storage cost is too high. We want to create a new index with a longer retention, pick out the events we need, and copy them to the new index. This is on an indexer cluster.

In theory, we could use collect, like this:

index=oldindex field=the_events_we_need
| collect index=newindex

However, because the index is too big, we're having problems running this search. Even though we run it bit-by-bit, we still end up missing events in the new index. It could be due to performance or memory limits, or bucket issues. Is there a better and more reliable way of doing this?
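One pattern that can make a copy like this more reliable (a sketch only; the index names come from the post, the one-day window is a placeholder): run collect over explicit, non-overlapping time windows so each run is bounded, repeatable, and individually verifiable.

```spl
index=oldindex field=the_events_we_need earliest=-30d@d latest=-29d@d
| collect index=newindex
```

After each window, comparing `| stats count` over the same earliest/latest range in oldindex and newindex confirms that the window copied completely before moving on to the next one; any window that comes up short can simply be re-run without touching the rest.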
I have installed and configured DB Connect on my deployer and added identities and connections. Then I copied etc/apps/splunk_app_db_connect to etc/shcluster/apps/ and pushed the bundle to the shcluster, as per the doc: https://help.splunk.com/en/splunk-cloud-platform/connect-relational-databases/deploy-and-use-splunk-db-connect/4.0/install-splunk-db-connect/install-and-configure-splunk-db-connect-on-a-splunk-enterprise-on-premise-distributed-platform-deployment

The app is deployed, but an identity.dat file is generated every 30s on the SH members, and it is different from the one on my deployer. The DB Connect GUI on the SH members gives me an error: "Identity password is corrupted." What did I miss?
How do I get the Splunk Add-on for Unix and Linux, versions 9.2.0 and 6.0.2?
I have a question regarding creating browser tests in Synthetic Monitoring. The website I'm testing generates dynamic IDs for DOM elements, which makes it unreliable to use id attributes for actions like clicking buttons or links. I attempted to use full XPath expressions instead, but the site frequently introduces banners (e.g., announcements) that alter the DOM structure and shift element positions, causing the XPath to break. I'm wondering if there's a more resilient approach to locating elements. For example, is it possible to run a JavaScript snippet to search for an element by its visible text or attribute value, and then use that reference in subsequent steps or click on the element via the JavaScript? If so, how can I implement this? Alternatively, are there best practices or recommended locator strategies for handling dynamic content in Synthetic browser tests?
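One resilient approach (a sketch; the tag and button text below are hypothetical, not from the site in question) is to build an XPath at runtime that matches an element by its visible text, which survives both dynamic IDs and position shifts caused by injected banners:

```javascript
// Hypothetical helper for a Synthetic "Run JavaScript" step: build an XPath
// that locates an element by its trimmed visible text instead of its id or
// its position in the DOM.
function byVisibleText(tag, text) {
  // normalize-space(.) collapses whitespace in the element's text content,
  // so incidental formatting in the page source does not break the match.
  return `//${tag}[normalize-space(.)=${JSON.stringify(text)}]`;
}

// In the browser step the element can then be resolved and clicked:
// const el = document.evaluate(
//   byVisibleText('button', 'Submit'), document, null,
//   XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
// if (el) el.click();
```

`document.evaluate` and `XPathResult` are standard DOM APIs, so this does not depend on any extra library being loaded in the page; whether a given Synthetic step type allows arbitrary JavaScript like this is worth confirming against the product documentation.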