Dashboard Studio - Error while updating the auto-refresh value. [Error: Visualization is not present in layout structure]: Visualization "viz_XQInZkvE" is not present in Layout Structure. If I try to change the refresh rate from 2m to any other interval, I get the above error. It looks like it comes from some default value, or was cloned from another dashboard. Could someone help with this? The relevant part of my dashboard definition is:

"title": "E2E Customer Migration Flow - MigrationEngine + NCRM Clone",
"description": "BPM Dashboard.SparkSupportGroup:Sky DE - Digital Sales - Product Selection & Registration",
"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {
                    "latest": "$global_time.latest$",
                    "earliest": "$global_time.earliest$"
                },
                "refresh": "2m"
            }
        }
    }
}
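For context, in Dashboard Studio source JSON every visualization ID referenced in layout.structure must also exist under the top-level visualizations object; a clone can leave a reference behind in one place but not the other. A minimal sketch of the matching pair, with hypothetical IDs:

"visualizations": {
    "viz_example": {
        "type": "splunk.singlevalue",
        "dataSources": { "primary": "ds.search" }
    }
},
"layout": {
    "type": "grid",
    "structure": [
        { "item": "viz_example", "type": "block", "position": { "x": 0, "y": 0, "w": 600, "h": 400 } }
    ]
}

If "viz_XQInZkvE" exists in only one of the two places, edits such as changing the refresh value can fail with this error.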
Please provide some sample events (anonymised appropriately) and a non-SPL description of what you are trying to achieve. It would also help to know what it is about your current search that does not provide the information you require.
When I asked this question, I had already added the following setting under [sslConfig] in both my Indexer's and my UF's server.conf:

sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem

However, I still encountered the same issue as described in my original question. Additionally, my Indexer's inputs.conf is configured as follows:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
sslPassword = ServerCertPassword
requireClientCert = false

I have followed Splunk's official documentation and tried various configurations, but all attempts failed. Then I found a 2017 post on the Splunk Community forum and decided to try the suggested configuration. That configuration is exactly what I am using now, and it worked successfully. I don't fully understand this configuration, so I have asked these three questions.
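In case it helps anyone debugging a similar setup, one way to check which certificate the indexer actually presents on the splunktcp-ssl port is an openssl s_client test from the forwarder host (the hostname and paths below are placeholders):

openssl s_client -connect your-indexer:9997 -CAfile /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem

A "Verify return code: 0 (ok)" at the end of the output means the CA file validates the certificate chain the indexer presents.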
@interrobang How about something like this?

index=_internal group=per_index_thruput series=*
| bin _time span=10m
| stats count by _time host
| stats list(*) AS * by _time
| table _time host count

This produces a table with one row per 10-minute bucket, with multivalue host and count columns listing each host and its event count. Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
Hi @tt-nexteng Do you have requireClientCert set within your inputs.conf file on your receiving Splunk instance? sslCertPath in outputs.conf is actually deprecated and clientCert should be specified instead, although obviously this is only needed if you intend to use mutual auth. sslRootCAPath in outputs.conf is also deprecated and should instead be set in server.conf under the [sslConfig] stanza. Perhaps the CA isn't being picked up by the output processor and it is therefore using the combined cert you have specified in sslCertPath. Try updating sslRootCAPath under [sslConfig] in server.conf to point at your CA file and see if this resolves the issue. Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
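For reference, the non-deprecated equivalent on the forwarder side might look like the sketch below; the paths, password and group name are placeholders rather than confirmed values:

# outputs.conf on the UF
[tcpout:default-autolb-group]
server = your-indexer:9997
useACK = true
sslVerifyServerCert = true
# clientCert is only needed if the indexer sets requireClientCert = true
clientCert = /opt/splunkforwarder/etc/auth/mycerts/myClientCertificate.pem
sslPassword = ClientCertPassword

# server.conf on the UF
[sslConfig]
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem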
The Jamf Pro Add-On for Splunk does not work with Splunk Cloud. We have spent days trying to get this working with both Jamf and Splunk, only to find that this setup is currently incompatible. This has been confirmed by both Jamf and Splunk. It appears that the 'Jamf Protect Add-On' is compatible with Splunk Cloud. Hopefully these two add-ons are similar in construction and the Jamf Pro Add-On can be updated ASAP. https://splunkbase.splunk.com/app/4729 https://learn.jamf.com/en-US/bundle/technical-paper-splunk-current/page/Integrating_Splunk_with_Jamf_Pro.html Thanks!
You will not be able to put the KV Store into maintenance mode with a dynamic captain. To get around this, you can temporarily change to a static captain using the following command:

/opt/splunk/bin/splunk edit shcluster-config -mode member -captain_uri https://your-Captain-SH-address:8089 -election false

After this you should be able to check that dynamic_captain is 0 (splunk show shcluster-status) and then enable maintenance mode. Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
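Once the maintenance work is done, remember to revert to a dynamic captain. Per the documented procedure this is roughly the following, run on each cluster member (the management URI is a placeholder; check the docs for your version):

/opt/splunk/bin/splunk edit shcluster-config -election true -mgmt_uri https://this-members-address:8089

You can then confirm with splunk show shcluster-status that dynamic_captain is back to 1.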
I have resolved this issue. The cause was in the UF outputs.conf configuration. Thank you all for your help. However, I don't understand why this configuration is required. I have posted a new question. https://community.splunk.com/t5/Security/Questions-about-UF-outputs-conf-Configuration/m-p/710701#M18322
I am configuring TLS communication between a UF (Universal Forwarder) and an Indexer. My outputs.conf configuration is as follows:

[tcpout]
defaultGroup = default-autolb-group

[tcpout-server://xxxxxxx:9997]

[tcpout:default-autolb-group]
server = xxxxxxx:9997
disabled = false
sslPassword = ServerCertPassword
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem
sslVerifyServerCert = false
useACK = true
sslCertPath = /opt/splunkforwarder/etc/auth/mycerts/myCombinedServerCertificate.pem

I have three questions:
1. I don't need a client certificate right now, but if I don't set sslCertPath, an error occurs. Is this option mandatory?
2. Currently, I have set sslCertPath to the server certificate, and TLS communication works. Why do I need to set the server certificate on the client? Is this a common practice?
3. If I want to use a client certificate, which configuration setting should I use?
Hi @SN1 You can modify the search below to use the metrics.log to get this information. Update the series= value with the index name you want to look at, and you may also want to exclude your indexer(s), as these also collect the metrics on index thruput:

index=_internal series=YourIndex group=per_index_thruput host!=YourIndexer*
| eval gb=kb/1024/1024
| timechart sum(gb) AS gb by host

This will give a chart showing the GB of data for each forwarder. Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
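If you also want to see where the data originates on each forwarder, not just the volume, a tstats sketch along these lines can help; msad is the index name from the question, everything else is standard SPL:

| tstats count where index=msad by host, source
| sort - count

This lists each forwarding host together with the source paths it is reading from, ordered by event count.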
Hello, I have an index named msad and I want to know which forwarder is sending data to this index. I would also like to know where the data it is sending comes from, i.e. from which location on the forwarder this data is being collected.
Hi @KwonTaeHoon Have you installed the Python for Scientific Computing (PSC) app from Splunkbase? This is a prerequisite for MLTK (see https://docs.splunk.com/Documentation/MLApp/5.5.0/User/Installandconfigure). The pandas library is within the PSC app at (Splunk_SA_Scientific_Python_linux_x86_64)/bin/linux_x86_64/4_2_2/lib/python3.9/site-packages/pandas. This is assuming you are running the latest PSC app on linux_x86_64. Please let me know how you get on and consider upvoting/karma this answer if it has helped. Regards Will
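Once PSC is installed, a quick smoke test that MLTK can actually reach its Python libraries is a trivial fit over synthetic data; this is only a sketch with made-up values:

| makeresults count=100
| eval x=random()%100
| fit StandardScaler x

If PSC is wired up correctly this should return a scaled SS_x column; if pandas is still missing, the fit should fail with a similar import error.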
Why not start with your actual events? OK, assuming these now represent your events, try something like this instead:

| rex "The following products did not have mappings from PC: (?<product>\S+)"
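To try the rex in isolation, you can stage a hypothetical sample event with makeresults (the product value here is made up):

| makeresults
| eval _raw="The following products did not have mappings from PC: ABC123"
| rex "The following products did not have mappings from PC: (?<product>\S+)"
| table product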
For your first issue, relating to the data not being received into Splunk, please check that the inputs.conf on the UF is set up with the same index name as you have defined in the indexes.conf on your indexers. You should also check that your user has permission to see this index on your search head. Regarding the mongo issue, please can you confirm which version of Splunk you are running and whether it was a fresh install or an upgrade from a previous version? It's worth starting with splunkd.log/mongod.log in $SPLUNK_HOME/var/log/splunk (or look in the _internal index for these logs) to see if there are any error/fatal/critical messages that might point to why Mongo isn't starting. If you find anything in the logs then let us know here and we can try and help you work through it. Regards Will
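As a first pass over _internal, something like this can surface KV store startup errors; component names differ between versions, so treat it as a sketch:

index=_internal source=*splunkd.log* log_level IN (ERROR, FATAL) component=*KVStore*
| stats count by component
| sort - count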
There aren't any official CloudFormation templates for sending data from CloudTrail to S3/SQS as far as I know, however there is Project Trumpet (https://github.com/splunk/splunk-aws-project-trumpet), which helps create CloudFormation for HEC-based CloudTrail (and other) feeds from AWS, if that helps. If you are using Splunk Cloud then you can also use the Data Manager app, which can help set up AWS feeds into Splunk, but again I believe this uses Firehose/HEC rather than SQS-based S3. I hope one of these helps you get started. Will