All Posts

@tt-nexteng Check this documentation: Configure Splunk indexing and forwarding to use TLS certificates - Splunk Documentation
1. This is $SPLUNK_HOME/etc/system/local/inputs.conf of my Indexer.

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
sslPassword = $7$2sKE3fmGeaZOyBdYg6AfpoU1Gv7kXP3pEihEQoWlSKFeItPCn0lNyb0=  (myServerPrivateKeyPassword)
requireClientCert = false

2. This is $SPLUNK_HOME/etc/system/local/server.conf of my Indexer.

[general]
serverName = 4b2c00e08e88
pass4SymmKey = $7$kbQmQuYtD+ees5uv8q+WaE36j8Sk07HcWoVgOMmP8Bb69nbwERriow==

[sslConfig]
sslPassword = $7$9eO6Wt/mPl2QIOEu/+xh44foXzSDvMRs/0LyNn/EuZ+ab/Q93LB8bg==  (Default. I did not modify)
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCertAuthCertificate.pem

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
peers = *
quota = MAX
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free

3. This is /opt/splunkforwarder/etc/system/local/outputs.conf of my UF.

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 176.32.83.56:9997
disabled = 0
sslPassword = $7$M4MRBHX8rh11KC509o7cRe/QOxo3EZBA5pXjGn5cZuHtb0FO3dFj5ks=  (myServerPrivateKeyPassword)
sslVerifyServerCert = false

4. This is /opt/splunkforwarder/etc/system/local/server.conf of my UF.

[general]
serverName = suf
pass4SymmKey = $7$hE1rQcMJG9ZPB0DvxG+KMGbMmNly4JylVUhFC59Nz+LBa+o1ahblmg==

[sslConfig]
sslPassword = $7$30hXe/EpmNqvXRzVPC0KF+1YNptHuhrxEnChvX5Se8ySRni+uAQFHWk=  (Default. I did not modify)
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free

The above configurations did not succeed. When I executed the following command:

/opt/splunkforwarder/bin/splunk list forward-server

I got the following result:

Active forwards:
    None
Configured but inactive forwards:
    176.32.83.56:9997
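A common cause of "Configured but inactive forwards" with splunktcp-ssl is a combined server certificate whose PEM blocks are in the wrong order: Splunk generally expects the server certificate first, then its private key, then the CA certificate(s). As a quick local sanity check (a sketch, not an official Splunk tool; the example PEM content is made up):

```python
# Sketch: check the order of PEM blocks in a combined serverCert file.
# Assumption: Splunk expects [server cert, private key, CA cert(s)] in order.
import re

def pem_block_order(pem_text):
    """Return the ordered list of PEM block labels found in the text."""
    return re.findall(r"-----BEGIN ([A-Z0-9 ]+)-----", pem_text)

def looks_like_splunk_combined_cert(pem_text):
    """True if the file is cert, then key, then only CA certs after that."""
    labels = pem_block_order(pem_text)
    return (
        len(labels) >= 3
        and labels[0] == "CERTIFICATE"
        and "PRIVATE KEY" in labels[1]
        and all(label == "CERTIFICATE" for label in labels[2:])
    )

# Illustrative stand-in for /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
EXAMPLE = (
    "-----BEGIN CERTIFICATE-----\nserver-cert\n-----END CERTIFICATE-----\n"
    "-----BEGIN RSA PRIVATE KEY-----\nkey\n-----END RSA PRIVATE KEY-----\n"
    "-----BEGIN CERTIFICATE-----\nca-cert\n-----END CERTIFICATE-----\n"
)
print(looks_like_splunk_combined_cert(EXAMPLE))  # → True
```

Reading the real file from the path in inputs.conf and passing its contents to this function would flag an out-of-order bundle; splunkd.log on both sides is still the authoritative place to see the actual TLS handshake error.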
The Jenkins health dashboard is using index="jenkins_statistics" in the base search instead of the macro `jenkins_index`, unlike the previous version. Because of this change, the dashboard is not showing any data in Splunk Cloud.
Worked. I accomplished this by implementing custom code that counts the characters. Thanks for your input.
Hi @livehybrid, thanks for the quick response. Unfortunately this is not working; I am attaching a screenshot of the same log which might help to understand it better.
Thank you very much.
Hi Splunkers, I have a very simple question. When I configure Splunk's indexes.conf, I know that one parameter I can configure is repFactor. In a scenario where SmartStore is used, we know that repFactor must be set to "auto" for each configured index. Here is the question: in the official Splunk documentation, repFactor is listed under "Per index Options". Does that mean I cannot put it under the [default] stanza? Because if, for SmartStore requirements, I need to set it to auto for EVERY index, it would be quick and smart to set it globally. Luca
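For illustration, the global setting being asked about would look like this in indexes.conf (this is a sketch of the hypothesis in the question, not a confirmed supported configuration):

```
[default]
# hypothesis: set the SmartStore replication factor once for all indexes
repFactor = auto
```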
I can't make out the full log example, but please try the following updated SPL, which replaces [*] with {}:

| spath input=_raw path="attributes.attribute{}.name" output=name
| spath input=_raw path="attributes.attribute{}.value" output=value
| table name value
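Outside Splunk, the same name/value extraction can be sanity-checked with a few lines of standard-library Python (a sketch, not Splunk itself; the XML is a trimmed-down version of the sample from the original post, wrapped in a root element so it parses):

```python
# Sketch: extract <attribute><name>/<value> pairs, mirroring the spath logic.
import xml.etree.ElementTree as ET

SAMPLE = """
<script>
  <attributes>
    <attribute>
      <name>exploitability_ease</name>
      <value>No known exploits are available</value>
    </attribute>
    <attribute>
      <name>cvss3_temporal_vector</name>
      <value>CVSS:3.0/E:U/RL:O/RC:C</value>
    </attribute>
  </attributes>
</script>
"""

def attribute_pairs(xml_text):
    """Return (name, value) tuples for every <attribute> element."""
    root = ET.fromstring(xml_text)
    return [
        (a.findtext("name"), a.findtext("value"))
        for a in root.iter("attribute")
    ]

for name, value in attribute_pairs(SAMPLE):
    print(name, "=", value)
```

If this extraction works on the raw XML but the spath version doesn't, the problem is likely in how the event was broken or indexed rather than in the path expression.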
I have a sample XML which looks like this:

<script_family>Amazon Linux Local Security Checks</script_family>
<filename>al2023_ALAS2023-2025-816.nasl</filename>
<script_version>1.1</script_version>
<script_name>Amazon Linux 2023 : runfinch-finch (ALAS2023-2025-816)</script_name>
<script_copyright>This script is Copyright (C) 2025 and is owned by Tenable, Inc. or an Affiliate thereof.</script_copyright>
<script_id>214620</script_id>
<cves>
  <cve>CVE-2024-45338</cve>
  <cve>CVE-2024-51744</cve>
</cves>
<bids> </bids>
<xrefs> </xrefs>
<preferences> </preferences>
<dependencies>
  <dependency>ssh_get_info.nasl</dependency>
</dependencies>
<required_keys>
  <required_key>Host/local_checks_enabled</required_key>
  <required_key>Host/AmazonLinux/release</required_key>
  <required_key>Host/AmazonLinux/rpm-list</required_key>
</required_keys>
<excluded_keys> </excluded_keys>
<required_ports> </required_ports>
<required_udp_ports> </required_udp_ports>
<attributes>
  <attribute>
    <name>exploitability_ease</name>
    <value>No known exploits are available</value>
  </attribute>
  <attribute>
    <name>cvss3_temporal_vector</name>
    <value>CVSS:3.0/E:U/RL:O/RC:C</value>
  </attribute>
  <attribute>
    <name>vuln_publication_date</name>
    <value>2024/11/04</value>
  </attribute>
  <attribute>
    <name>cpe</name>
    <value>p-cpe:/a:amazon:linux:runfinch-finch cpe:/o:amazon:linux:2023</value>
  </attribute>
  <attribute>
    <name>cvss3_vector</name>
    <value>CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:N/A:N</value>
  </attribute

I want to extract a few fields from search using spath or similar methods. The field/value pairs should look something like:

key                       value
exploitability_ease       No known exploits are available
cvss3_temporal_vector     CVSS:3.0/E:U/RL:O/RC:C
solution                  Run 'dnf update runfinch-finch --releasever 2023.6.20250123' to update

I tried something similar to this but no luck:

| spath input=_raw path="attributes.attribute[*].name" output=name
| spath input=_raw path="attributes.attribute[*].value" output=value
| table name value
For some time now, the server class agents on my deployment server have been in Pending status, and yet the logs are still coming in. Does anyone know why? Yet when I look at the forwarders page, all agents' statuses are OK! I just don't get it!
Hi @Dk123, as @richgalloway said, what's your architecture? You have to create the indexes.conf on the Indexers so you can use the indexes, but you won't see them in the Search Heads. To use them on the Indexers and see them on the Search Heads, you have to create the indexes on both types of machines. If instead you have a standalone server, did you restart Splunk after creation? Ciao. Giuseppe
Hi @Andre_, I don't think that's possible, also because Universal Forwarder configurations are usually managed using a Deployment Server. But you could define very broad inputs and take all the weblogs or Apache logs. E.g., if your Apache logs are in the folder /opt/apache/<app>/data/<other_folders>/apache.log, you could use this in your input:

[monitor:///opt/apache/.../*.log]

Ciao. Giuseppe
Yep .. we are doing just that.
- First you need to capture a batch job failing. This can be done in a number of ways, such as writing the batch status to a log file to capture failures.
- Then monitor that log file and create a KPI.
- Create a custom alert action that runs a batch job restart.
- In the NEAP (Notable Event Aggregation Policy), create the logic so that when the KPI picks up a failing batch job, it triggers the custom alert action.
- Then you need another correlation search to capture the batch job succeeding, and correlate that with the KPI returning to normal to complete the cycle.
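The trigger logic for the custom alert action described above can be sketched as a tiny decision function (a sketch only; the field names "kpi_status"/"job_name" and the restart command are hypothetical placeholders, not real ITSI payload fields or a real Control-M CLI):

```python
# Sketch of the decision logic a custom alert action script might apply.
# "kpi_status", "job_name", and "ctmorder" are illustrative assumptions.

def restart_command(job_name):
    """Build the (hypothetical) command that would restart the batch job."""
    return ["ctmorder", "--restart", job_name]  # placeholder for a real Control-M call

def handle_kpi_event(event):
    """Decide what to do with one KPI result payload."""
    if event.get("kpi_status") == "failed":
        cmd = restart_command(event["job_name"])
        # A real alert action would execute cmd here, e.g. via subprocess.run(cmd)
        return ("restarted", cmd)
    return ("no-op", None)

print(handle_kpi_event({"kpi_status": "failed", "job_name": "nightly_etl"}))
```

The point of keeping the decision separate from the execution is that the "job succeeded again" correlation search can close the loop without the script itself tracking state.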
Hi, our project is planning to use Splunk ITSI to do batch monitoring of Control-M jobs and have auto-healing as well. Would that be feasible with Splunk ITSI? Does Splunk ITSI have capabilities to take action, like running a custom script to force restart or force OK a Control-M job, once conditions are met? Looking forward to your insights.
I can find evidence to back this up, but it's hard to find any real case. In our case, we're limited by resources, so we can't add more deployers. We're using two search head clusters to keep things separate. This way, if one cluster needs to handle a lot of saved searches, it won't slow down the other one.
Hello, is it possible to configure a Universal Forwarder to automatically discover the location of weblogs for IIS or Apache? I can programmatically get the locations, and I have a script for Windows and one for Linux that returns a list of locations. Kind regards, Andre
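Since the UF itself can't discover inputs dynamically, one common workaround is to have the discovery script render an inputs.conf that is then deployed (e.g. via a Deployment Server). A sketch of that rendering step, with made-up paths, sourcetype, and index:

```python
# Sketch: turn a list of discovered weblog locations into monitor stanzas.
# The paths, sourcetype, and index below are illustrative assumptions.

def monitor_stanzas(paths, sourcetype="access_combined", index="web"):
    """Render inputs.conf [monitor://] stanzas for each discovered log path."""
    blocks = []
    for p in paths:
        blocks.append(
            "[monitor://{}]\nsourcetype = {}\nindex = {}\ndisabled = 0".format(
                p, sourcetype, index
            )
        )
    return "\n\n".join(blocks)

discovered = ["/opt/apache/app1/logs/access.log", "/var/log/iis/site1"]
print(monitor_stanzas(discovered))
```

The generated file would be dropped into an app's local directory and pushed to the forwarders, so "discovery" happens on a schedule outside the UF rather than inside it.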
curl -kv https://<host>.<stack>.splunkcloud.com:8088/services/collector/health

You can find <host>.<stack> by running a search such as `index=_internal` and looking at the `host` field.
Finally found the working URI for the demo version of Splunk Cloud Platform:

curl -kv https://<host>.<stack>.splunkcloud.com:8088/services/collector/health

You can find <host>.<stack> by running a search such as `index=_internal` and looking at the `host` field.
Likely HEC is not available for the 14-day demo version of Splunk Cloud Platform.
A standalone Deployment Server that does not perform any other server roles should be able to handle up to 25,000 forwarders. In the past, customers with large deployments have set up Deployment Servers behind a load balancer and kept apps synced between them using tools such as Puppet or Ansible. Since 9.2.0, Splunk Deployment Servers can be architected as a "cluster" behind a load balancer, keeping apps and client status synced between them via a shared network directory. This allows any number of forwarders to be managed. For example, for an environment capturing data from 100,000 forwarders, a cluster of at least 4 Deployment Servers would be a good starting point.
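The sizing in that example is just a ceiling division of forwarder count by the 25,000-per-server rule of thumb quoted above:

```python
# Rule-of-thumb sizing: forwarders divided by per-server capacity, rounded up.
import math

def deployment_servers_needed(forwarders, per_server=25_000):
    """Minimum Deployment Servers for a given forwarder count."""
    return math.ceil(forwarders / per_server)

print(deployment_servers_needed(100_000))  # → 4
```

Real-world sizing would also account for phone-home intervals and app sizes, so treat the capacity figure as a starting point rather than a hard limit.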