All Posts

Hey, I have a problem with event breaking. My app outputs logs that start with a date and time in the format 15/05/2024 16:35:45. Some events contain an object and can span multiple lines, but every event starts with a date and time. For some reason Splunk sometimes combines two events, and sometimes truncates an event that contains an object. I have tried multiple settings in props.conf, such as LINE_BREAKER, SHOULD_LINEMERGE, and more. I'm new to Splunk and would be grateful if you could help me.
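For a pattern like this, a minimal props.conf sketch could look as follows (assumptions: the sourcetype name my_app_logs is hypothetical, and the regex and time format must match your actual events — the dd/mm/yyyy order here is inferred from the sample timestamp):

[my_app_logs]
SHOULD_LINEMERGE = false
# Break only where a newline is followed by a dd/mm/yyyy hh:mm:ss timestamp
LINE_BREAKER = ([\r\n]+)(?=\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
# Raise this if multi-line events containing objects are still being cut off
TRUNCATE = 100000

With SHOULD_LINEMERGE disabled, the lookahead in LINE_BREAKER is what keeps multi-line objects attached to the event they belong to, since a new event only starts where a timestamp follows the newline. This stanza must go on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.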
Hi everyone! Is there a way to troubleshoot and fix this issue? We have other instances, and they work fine. Internet is 24.7 Mbps download / 65.2 Mbps upload, so bandwidth is OK. SSH and ping to the host work fine; only the web page does not work for me. Colleagues do not have this problem. http://ip:8080/en-US/account/login?return_to=%2Fen-US%2F — This page isn't working: <ip> didn't send any data. ERR_EMPTY_RESPONSE
@tt-nexteng  Examine the splunkd.log file on both the Universal Forwarder and the Indexer for any TLS-related error messages. This can provide clues about what might be going wrong. Review your Docker configuration to ensure that all necessary ports are exposed and that the container has the correct network settings. https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/Validateyourconfiguration
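To surface those messages quickly, a grep sketch like the following can help (the sample log lines below are illustrative stand-ins, not real Splunk output; on a live system you would point grep at $SPLUNK_HOME/var/log/splunk/splunkd.log):

```shell
# Create an illustrative sample log (stand-in for $SPLUNK_HOME/var/log/splunk/splunkd.log)
printf '01-20-2025 10:00:01 ERROR TcpOutputProc - SSL handshake failed\n01-20-2025 10:00:02 INFO  TailingProcessor - all good\n' > /tmp/splunkd_sample.log

# Case-insensitive search for TLS/SSL-related lines
grep -iE 'ssl|tls' /tmp/splunkd_sample.log
```

Running the same search on both the UF and the indexer, around the time the forwarder tries to connect, usually shows which side is rejecting the handshake.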
It does seem an oversight for the dashboards to have been updated with a specific index name rather than a macro. This is an app built by Splunk Works, so whilst it isn't an officially supported Splunk app, you might be able to log a support case with Splunk about it, and hopefully they might revert this back to using the macros. Due to the way the dashboards are distributed and compiled, it looks like it would be hard to manually edit the dashboards yourself, especially as you are running in Splunk Cloud. I think your best option at this point is to raise a support case and take it from there; hopefully they will be able to push an updated version! Good luck, and sorry I couldn't help further.
@SplunkExplorer You can keep it in the default stanza:

[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:remote_store/$_index_name
repFactor = auto
I have tried many times following this document, but I keep failing. If I remove the TLS settings and revert these four configuration files to their original state before modification, everything works fine. Additionally, I am running the indexer in Docker. Could this be related to the issue?
I don't see why you cannot set repFactor in [default].  homePath also is listed as a per-index option, but I put it in the default stanza all the time (successfully). Try it and let us know how it works.
@tt-nexteng  Check this video for reference https://www.youtube.com/watch?v=vI7466EwG7I
@tt-nexteng Check this documentation: Configure Splunk indexing and forwarding to use TLS certificates - Splunk Documentation
1. This is $SPLUNK_HOME/etc/system/local/inputs.conf of my Indexer.

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
sslPassword = $7$2sKE3fmGeaZOyBdYg6AfpoU1Gv7kXP3pEihEQoWlSKFeItPCn0lNyb0= (myServerPrivateKeyPassword)
requireClientCert = false

2. This is $SPLUNK_HOME/etc/system/local/server.conf of my Indexer.

[general]
serverName = 4b2c00e08e88
pass4SymmKey = $7$kbQmQuYtD+ees5uv8q+WaE36j8Sk07HcWoVgOMmP8Bb69nbwERriow==

[sslConfig]
sslPassword = $7$9eO6Wt/mPl2QIOEu/+xh44foXzSDvMRs/0LyNn/EuZ+ab/Q93LB8bg== (Default. I did not modify)
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCertAuthCertificate.pem

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
peers = *
quota = MAX
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free

3. This is /opt/splunkforwarder/etc/system/local/outputs.conf of my UF.

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 176.32.83.56:9997
disabled = 0
sslPassword = $7$M4MRBHX8rh11KC509o7cRe/QOxo3EZBA5pXjGn5cZuHtb0FO3dFj5ks= (myServerPrivateKeyPassword)
sslVerifyServerCert = false

4. This is /opt/splunkforwarder/etc/system/local/server.conf of my UF.

[general]
serverName = suf
pass4SymmKey = $7$hE1rQcMJG9ZPB0DvxG+KMGbMmNly4JylVUhFC59Nz+LBa+o1ahblmg==

[sslConfig]
sslPassword = $7$30hXe/EpmNqvXRzVPC0KF+1YNptHuhrxEnChvX5Se8ySRni+uAQFHWk= (Default. I did not modify)
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free

The above configurations did not succeed. When I executed the following command:

/opt/splunkforwarder/bin/splunk list forward-server

I got the following result:

Active forwards:
None
Configured but inactive forwards:
176.32.83.56:9997
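One quick way to check whether the indexer is actually presenting a certificate on port 9997 is an openssl probe from the UF host (IP and port taken from the configuration above; the exact output will vary with your certificates):

openssl s_client -connect 176.32.83.56:9997 </dev/null

If the [SSL] stanza is being applied, this prints the certificate chain the indexer presents; a hang or an empty response suggests the port is listening in plain TCP or the inputs.conf TLS settings are not taking effect.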
The Jenkins health dashboard is using index="jenkins_statistics" in its base search instead of the macro `jenkins_index`, unlike the previous version. Because of this change, the dashboard is no longer showing any data in Splunk Cloud.
Worked. I accomplished this by implementing custom code that counts the characters. Thanks for your input.
Hi @livehybrid, thanks for the quick response. Unfortunately this is not working; I am attaching a screenshot of the same log, which might help to understand it better.
Thank you very much.
Hi Splunkers, I have a very simple question. When I configure Splunk indexes.conf, I know that one parameter I can configure is repFactor. In a scenario where SmartStore is used, we know that repFactor must be set equal to "auto" for each configured index. Here is the question: in the official Splunk documentation, repFactor is listed under "Per index options". Does that mean I cannot put it under the [default] stanza? Because if, for SmartStore requirements, I need to configure it equal to auto for EVERY index, it would be quick and smart to set it as a global setting. Luca
I can't make out the full log example, but please try the following updated SPL, which replaces [*] with {}:

| spath input=_raw path="attributes.attribute{}.name" output=name
| spath input=_raw path="attributes.attribute{}.value" output=value
| table name value
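As a self-contained sketch you can paste into a search to see the {} behavior (the XML in the eval is a trimmed stand-in for your event, not your full data):

| makeresults
| eval _raw="<attributes><attribute><name>exploitability_ease</name><value>No known exploits are available</value></attribute><attribute><name>cvss3_temporal_vector</name><value>CVSS:3.0/E:U/RL:O/RC:C</value></attribute></attributes>"
| spath input=_raw path="attributes.attribute{}.name" output=name
| spath input=_raw path="attributes.attribute{}.value" output=value
| table name value

In spath, {} addresses repeated elements as a multivalue field, so name and value each come back as multivalue columns whose entries line up by position.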
I have a sample XML which looks like this:

<script_family>Amazon Linux Local Security Checks</script_family>
<filename>al2023_ALAS2023-2025-816.nasl</filename>
<script_version>1.1</script_version>
<script_name>Amazon Linux 2023 : runfinch-finch (ALAS2023-2025-816)</script_name>
<script_copyright>This script is Copyright (C) 2025 and is owned by Tenable, Inc. or an Affiliate thereof.</script_copyright>
<script_id>214620</script_id>
<cves>
<cve>CVE-2024-45338</cve>
<cve>CVE-2024-51744</cve>
</cves>
<bids>
</bids>
<xrefs>
</xrefs>
<preferences>
</preferences>
<dependencies>
<dependency>ssh_get_info.nasl</dependency>
</dependencies>
<required_keys>
<required_key>Host/local_checks_enabled</required_key>
<required_key>Host/AmazonLinux/release</required_key>
<required_key>Host/AmazonLinux/rpm-list</required_key>
</required_keys>
<excluded_keys>
</excluded_keys>
<required_ports>
</required_ports>
<required_udp_ports>
</required_udp_ports>
<attributes>
<attribute>
<name>exploitability_ease</name>
<value>No known exploits are available</value>
</attribute>
<attribute>
<name>cvss3_temporal_vector</name>
<value>CVSS:3.0/E:U/RL:O/RC:C</value>
</attribute>
<attribute>
<name>vuln_publication_date</name>
<value>2024/11/04</value>
</attribute>
<attribute>
<name>cpe</name>
<value>p-cpe:/a:amazon:linux:runfinch-finch cpe:/o:amazon:linux:2023</value>
</attribute>
<attribute>
<name>cvss3_vector</name>
<value>CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:N/A:N</value>
</attribute

I want to extract a few fields from a search using spath or similar methods. The field/value pairs should look something like:

key                      value
exploitability_ease      No known exploits are available
cvss3_temporal_vector    CVSS:3.0/E:U/RL:O/RC:C
solution                 Run 'dnf update runfinch-finch --releasever 2023.6.20250123' to update

I tried something similar to this, but no luck:

| spath input=_raw path="attributes.attribute[*].name" output=name
| spath input=_raw path="attributes.attribute[*].value" output=value
| table name value
For some time now, the server class agents on my deployment server have been in Pending status, and yet the logs are still coming in. Does anyone know why? Yet when I look at the forwarders page, all agents' statuses are OK! I just don't get it!
Hi @Dk123, as @richgalloway said, what's your architecture? You have to create the indexes.conf on the Indexers; that lets you use the indexes, but you won't see them on the Search Heads. To use them on the Indexers and see them on the Search Heads, you have to create the indexes on both types of machines. If instead you have a standalone server, did you restart Splunk after creating them? Ciao. Giuseppe
Hi @Andre_, I don't think that's possible, also because Universal Forwarder configurations are usually managed using a Deployment Server. But you could define very broad inputs and take all the web logs or Apache logs. E.g., if your Apache logs are in the folder /opt/apache/<app>/data/<other_folders>/apache.log, you could use in your input:

[monitor:///opt/apache/.../*.log]

Ciao. Giuseppe