@SelvaganeshE The "IP Allow List" feature is specific to Splunk Cloud and is not available in Splunk Enterprise (on-premise) deployments. For integrating Splunk Enterprise with AppDynamics SaaS, you might need to look into alternative methods for securing and managing access, such as configuring firewall rules or using other network security measures.
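As an illustration of the alternative approach (restricting access by source network rather than a built-in allow-list feature), here is a minimal Python sketch of a CIDR-based allow-list check. The network ranges below are documentation placeholders, not real AppDynamics SaaS egress ranges; substitute the ranges published for your controller region.

```python
import ipaddress

# Placeholder networks (RFC 5737 documentation ranges) -- replace with the
# actual egress ranges for your AppDynamics SaaS controller.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_allowed(source_ip: str) -> bool:
    """Return True if source_ip falls inside any allowed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("203.0.113.42"))  # True
print(is_allowed("192.0.2.7"))     # False
```

The same CIDR matching is what a firewall rule or reverse-proxy allow-list would perform in front of the search head API port.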
Hi everyone, I am testing the Smart Agent appdcli utility and encountered an issue. When I try to UPGRADE a machine agent by running the following command:

appd upgrade machine --inventory hosts.ini --connection ssh --config config.ini --version latest

the agent starts communicating with the Controller, but ServerMonitoring fails (see figure_1). However, when I INSTALL the same agent and version with the following command:

appd install machine --inventory hosts.ini --connection ssh --config config.ini --version latest

everything works fine (see figure_2). Do you have any idea why? The problem only appears when I upgrade a machine agent running on Linux (Ubuntu 23.10 "mantic"). On Windows, I have not encountered this issue. Regards, Lukas
Hello Team, We are trying to integrate Splunk Enterprise (version 9.3.2) with AppDynamics SaaS (version 24.10). As per the documentation, we need to add the AppD SaaS IP address to the search head API allow list in Splunk. To add the IP address, I navigated to Server Settings but am unable to see the "IP Allow List" option in the Splunk console. Note: I am logged into Splunk with an admin ID. Please help me fix this issue. Thanks, Selvaganesh E
Dashboard Studio - error while updating the auto-refresh value. [Error: Visualization is not present in layout structure]: Visualization "viz_XQInZkvE" is not present in Layout Structure. If I try to change the refresh rate from 2m to any other value, I get the above error. It looks like a default value, or something cloned from another dashboard. Could someone help with this? The relevant part of my dashboard definition is:

"title": "E2E Customer Migration Flow - MigrationEngine + NCRM Clone",
"description": "BPM Dashboard.SparkSupportGroup:Sky DE - Digital Sales - Product Selection & Registration",
"defaults": {
  "dataSources": {
    "ds.search": {
      "options": {
        "queryParameters": {
          "latest": "$global_time.latest$",
          "earliest": "$global_time.earliest$"
        },
        "refresh": "2m"
      }
    }
  }
}
Please provide some sample events (anonymised appropriately) and a non-SPL description of what you are trying to achieve. It would also help to know what it is about your current search that does not provide the information you require.
When I asked this question, I had already added the following setting under [sslConfig] in both my Indexer's and UF's server.conf:

sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem

However, I still encountered the same issue as described in my original question. Additionally, my Indexer's inputs.conf is configured as follows:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
sslPassword = ServerCertPassword
requireClientCert = false

I followed Splunk's official documentation and tried various configurations, but all attempts failed. Then I found a 2017 post on the Splunk Community forum and decided to try the configuration it suggested. That configuration is exactly what I am using now, and it works. I don't fully understand it, which is why I have asked these three questions.
@interrobang How about something like this?

index=_internal group=per_index_thruput series=*
| bin _time span=10m
| stats count by _time host
| stats list(*) AS * by _time
| table _time host count

This produces a table of event counts per host in 10-minute buckets (screenshot omitted). Please let me know how you get on, and consider upvoting/giving karma to this answer if it has helped. Regards, Will
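The bucketing that `bin _time span=10m` performs can be mimicked outside Splunk; here is a minimal Python sketch, assuming a hypothetical list of (epoch, host) events standing in for per_index_thruput rows.

```python
from collections import defaultdict

# Hypothetical sample (epoch_seconds, host) events.
events = [
    (1700000005, "hostA"),
    (1700000100, "hostA"),
    (1700000200, "hostB"),
    (1700000700, "hostA"),  # falls in the next 10-minute bucket
]

SPAN = 600  # 10 minutes, matching `bin _time span=10m`

# Equivalent of `bin _time span=10m | stats count by _time host`
counts = defaultdict(int)
for ts, host in events:
    bucket = ts - (ts % SPAN)  # floor the timestamp to the bucket start
    counts[(bucket, host)] += 1

for (bucket, host), count in sorted(counts.items()):
    print(bucket, host, count)
```

Each bucket key corresponds to a `_time` value in the SPL output; the counts match what `stats count by _time host` would return.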
Hi @tt-nexteng Do you have requireClientCert set within your inputs.conf file on your receiving Splunk instance? sslCertPath in outputs.conf is actually deprecated and clientCert should be specified instead, although this is only needed if you intend to use mutual auth. sslRootCAPath in outputs.conf is also deprecated and should instead be set in server.conf under the [sslConfig] stanza. Perhaps the CA isn't being picked up by the output processor, and it is therefore using the combined cert you have specified in sslCertPath. Try updating server.conf/[sslConfig]/sslRootCAPath to point to your CA file and see if this resolves the issue. Please let me know how you get on, and consider upvoting/giving karma to this answer if it has helped. Regards, Will
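Putting the advice above together, a sketch of the forwarder-side layout might look like the following. Paths are taken from this thread where possible; myClientCertificate.pem is a hypothetical name, and the clientCert line is only needed if the indexer sets requireClientCert = true. Check the outputs.conf spec for your Splunk version before applying.

```
# server.conf on the forwarder -- the CA moves here, out of outputs.conf
[sslConfig]
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem

# outputs.conf -- clientCert replaces the deprecated sslCertPath
[tcpout:default-autolb-group]
server = xxxxxxx:9997
clientCert = /opt/splunkforwarder/etc/auth/mycerts/myClientCertificate.pem
sslPassword = ClientCertPassword
sslVerifyServerCert = true
useACK = true
```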
The Jamf Pro Add-On for Splunk does not work with Splunk Cloud. We have spent days trying to get this working with both Jamf and Splunk, only to find that this setup is currently incompatible. This has been confirmed by both Jamf and Splunk. It appears that the 'Jamf Protect Add-On' is compatible with Splunk Cloud. Hopefully these two add-ons are similar in construction and the Jamf Pro Add-On can be updated ASAP. https://splunkbase.splunk.com/app/4729 https://learn.jamf.com/en-US/bundle/technical-paper-splunk-current/page/Integrating_Splunk_with_Jamf_Pro.html Thanks!
You will not be able to put the KV Store into maintenance mode with a dynamic captain. To get around this, you can temporarily switch to a static captain using the following command:

/opt/splunk/bin/splunk edit shcluster-config -mode member -captain_uri https://your-Captain-SH-address:8089 -election false

After this, you should be able to confirm that dynamic_captain is 0 (splunk show shcluster-status) and then enable maintenance mode. Please let me know how you get on, and consider upvoting/giving karma to this answer if it has helped. Regards, Will
I have resolved this issue. The cause was in the UF outputs.conf configuration. Thank you all for your help. However, I don't understand why this configuration is required. I have posted a new question. https://community.splunk.com/t5/Security/Questions-about-UF-outputs-conf-Configuration/m-p/710701#M18322
I am configuring TLS communication between a UF (Universal Forwarder) and an Indexer. My outputs.conf configuration is as follows:

[tcpout]
defaultGroup = default-autolb-group

[tcpout-server://xxxxxxx:9997]

[tcpout:default-autolb-group]
server = xxxxxxx:9997
disabled = false
sslPassword = ServerCertPassword
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem
sslVerifyServerCert = false
useACK = true
sslCertPath = /opt/splunkforwarder/etc/auth/mycerts/myCombinedServerCertificate.pem

I have three questions:
1. I don't need a client certificate right now, but if I don't set sslCertPath, an error occurs. Is this option mandatory?
2. Currently, I have set sslCertPath to the server certificate, and TLS communication works. Why do I need to set the server certificate on the client? Is this common practice?
3. If I want to use a client certificate, which configuration setting should I use?
Hi @SN1 You can modify the search below, which uses metrics.log, to get this information. Update the series= value with the index name you want to look at; you may also want to exclude your indexer(s), as they also report per-index thruput metrics.

index=_internal series=YourIndex group=per_index_thruput host!=YourIndexer*
| eval gb=kb/1024/1024
| timechart sum(gb) AS gb by host

This will give a chart showing the GB of data indexed from each forwarder. Please let me know how you get on, and consider upvoting/giving karma to this answer if it has helped. Regards, Will
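The eval/stats arithmetic in that search is just a KB-to-GB conversion summed per host. A minimal Python sketch of the same aggregation, using hypothetical forwarder names and KB values:

```python
from collections import defaultdict

# Hypothetical metrics.log-style rows: (host, kb).
rows = [
    ("fwd01", 1048576.0),  # 1 GB expressed in KB
    ("fwd01", 524288.0),   # 0.5 GB
    ("fwd02", 2097152.0),  # 2 GB
]

# Equivalent of `eval gb=kb/1024/1024 | stats sum(gb) by host`
gb_by_host = defaultdict(float)
for host, kb in rows:
    gb_by_host[host] += kb / 1024 / 1024

for host, gb in sorted(gb_by_host.items()):
    print(f"{host}: {gb:.2f} GB")
```

timechart additionally splits these sums into time buckets, but the per-host conversion and sum are the same.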
Hello, I have an index named msad and I want to know which forwarder is sending data to this index, and also from where on the forwarder this data is being sent.
Hi @KwonTaeHoon Have you installed the Python for Scientific Computing (PSC) app from Splunkbase? This is a prerequisite for MLTK (see https://docs.splunk.com/Documentation/MLApp/5.5.0/User/Installandconfigure). The pandas library is within the PSC app at: (Splunk_SA_Scientific_Python_linux_x86_64)/bin/linux_x86_64/4_2_2/lib/python3.9/site-packages/pandas This assumes you are running the latest PSC app on linux_x86_64. Please let me know how you get on, and consider upvoting/giving karma to this answer if it has helped. Regards, Will
Why not start with your actual events? OK, assuming these now represent your events, try something like this instead | rex "The following products did not have mappings from PC: (?<product>\S+)"
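For reference, the same extraction that rex performs can be checked with Python's re module. The sample event and product name below are hypothetical stand-ins, since the thread's real events are not shown.

```python
import re

# Hypothetical sample event matching the rex pattern above.
event = ("2024-01-15 ERROR The following products did not have "
         "mappings from PC: WidgetPro-X1")

# Same pattern as the rex command; (?P<product>...) is Python's
# named-group syntax for SPL's (?<product>...).
match = re.search(
    r"The following products did not have mappings from PC: (?P<product>\S+)",
    event,
)
if match:
    print(match.group("product"))  # WidgetPro-X1
```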