All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @Balaji.M, Let's see if the Community can jump in and offer any ideas.
Clayton Homes faced the increased challenge of strengthening their security posture as they went through rapid digital transformation. The challenge was further exacerbated by the hybrid cloud reality as Clayton Homes moved more deployments to the cloud. They wanted a better way to build a secure and more resilient digital world while migrating to the cloud. Join us in this webinar to hear from Clayton Homes how to build scalable security while moving to the cloud successfully and efficiently with Splunk. By deploying Splunk Enterprise Security, a data-centric, modern security information and event management (SIEM) solution, in the cloud, Clayton Homes was able to detect and respond to threats quickly. Hear how Splunk enabled Clayton Homes to gain end-to-end visibility across their IT environment with Splunk Cloud Platform, without the need to purchase, manage, or deploy infrastructure. In the webinar you will learn from Clayton Homes about best practices for:
- Migrating on-prem deployments to the cloud with success
- Harnessing data-driven insights with scalable security to protect your business and mitigate risks
- Building a solid foundation for your hybrid cloud with the right tools, expertise, and services from Splunk
Register Now!
Hi @PranaySompalli! Thank you for your follow-up question. Can you please post your question as a new thread to help it gain more visibility and up-to-date answers? Thanks!   -Kara, Splunk Community Manager
If that is the way, it will be a total waste of time. 
Thank you. Will give it a try and let the forum know. Greatly appreciate the response and path forward. Regards, Greg
IMO, user Nobody should not be used.  All scheduled searches should be owned by a real user, even if it's a service account.  That means the user running the search would have a role that specifies what accesses and resources the search has. When a search runs manually, it takes on the role of the person running it (unless set to "run as owner"). Make sure the search in question has read access to all of the knowledge objects it needs.  IOW, each KO should be set to "Everyone" in the Read column (if using Nobody, that is; otherwise, set the permissions for the roles that need access).
The scheduler distributes scheduled searches across the whole cluster (with the exception of the captain, if it's configured as ad-hoc only), so you can't really force a particular search to run on a single specific SH. That would defeat the point of creating a SHC in the first place. If you want a search head to run some specific searches, just set up a non-clustered SH for it.
A single search runs on a single processor. That's by design. There is parallelization in two cases: 1) You run multiple searches at the same time. 2) You distribute a search across many indexers. But on a single Splunk component (Search Head, Indexer), a single search thread occupies a single processor.
By default, Splunk uses all CPUs on the system.  An individual search, however, is limited to a single CPU. How did you set up your Splunk to have singular processing (and what exactly do you mean by that)?
Thank you very much @richgalloway. As you suggested, the following worked: index = os_sysmon NOT Image="*Sysmon*" EventCode=1 | rex field=Image "(?P<Executable>[^\\\]+)$" | table Image Executable
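To see what that rex pattern does outside of Splunk, here is a small sketch in Python's re module (not SPL, but the same regex idea; the sample path is made up for illustration):

```python
import re

# [^\\]+$ captures everything after the last backslash,
# i.e. the executable name at the end of a Windows-style path.
pattern = re.compile(r"(?P<Executable>[^\\]+)$")

image = r"C:\Windows\System32\cmd.exe"
match = pattern.search(image)
print(match.group("Executable"))  # -> cmd.exe
```

Note the escaping difference: SPL needs the extra backslashes ("[^\\\]+") because the pattern passes through Splunk's own quoting, while a Python raw string needs only [^\\]+.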
Can an alert be run from a specific Search Head in a clustered environment?  We would like to configure a report from a specific search head in the clustered environment, and we don't want the report to be replicated across all of the SHs.  Can we force the report to run from a specific SH based on the app?  Thanks, Dhana
Characters to be retained should be enclosed in a capture group and that group referenced in the replacement text. | rex mode=sed field=ip "s/:(.{1,3})::/:\1:0:/g"
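The same capture-group-and-backreference idea can be tested outside Splunk; a minimal Python sketch (sed-style \1 works the same way in re.sub), using the sample addresses from the question:

```python
import re

# Capture the 1-3 characters between the colons, then reinsert them
# via the \1 backreference while replacing "::" with ":0:".
ips = ["a0:1::21", "b0:1c::21", "c0:a13::23"]
fixed = [re.sub(r":(.{1,3})::", r":\1:0:", ip) for ip in ips]
print(fixed)  # -> ['a0:1:0:21', 'b0:1c:0:21', 'c0:a13:0:23']
```

The key point is that .{1,3} inside parentheses is remembered as group 1, whereas writing .{1,3} again in the replacement text is just a literal string.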
Since the Image field does not contain the string "Executable=" the regular expression does not match and rex extracts nothing.  Try removing "Executable=" from the command.
Hi Team, We are not able to see the ABAP system details in the AppDynamics Controller and are getting the error below:
HTTP server yourcompany.saas.appdynamics.com (URI '/controller/rest/applications') responded with status code 500 (Connection Broken)
Regards, Giridhar
How to replace a string using rex while retaining part of the matched string? Thank you for your help. For example, I tried to replace "::" (double colon) with ":0:" (colon zero colon) when it is preceded by ":" followed by 1 to 3 characters:
| rex mode=sed field=ip "s/:.{1,3}::/:.{1,3}:0:/g"
This does not work because it literally replaces the match with ":.{1,3}:0:" instead of retaining the matched characters.
Before:
a0:1::21
b0:1c::21
c0:a13::23
After:
a0:1:0:21
b0:1c:0:21
c0:a13:0:23
Hi @muqeeiz, sorry, but I don't see any error in the messages you shared! Anyway, check the permissions on the files to be read. Ciao. Giuseppe
Hi, I found a problem in Splunk DB Connect when I tried to add a new input. I can add new connections, and the current inputs are working, but when I try to add a new input, or try to use the "SQL Explorer" after choosing a connection, I get a "Cannot get schemas" error message. In the _internal index I found this error message:
Unable to get schemas metadata java.sql.SQLException: Non supported character set (add orai18n.jar in your classpath): EE8ISO8859P2
After this I updated DB Connect and the Oracle JDBC drivers, but it did not help. I consulted our DB admins; as it turned out, these DBs really do differ in character encoding (EE8ISO8859P2 vs. UTF8). Of course the orai18n.jar file is there with the driver. I found this in the documentation: https://docs.splunk.com/Documentation/DBX/3.14.1/DeployDBX/Troubleshooting#Unicode_decode_errors
"Splunk DB Connect requires you to set up your database connection to return results encoded in UTF-8. Consult your database vendor's documentation for instructions about how to do this."
Is it possible that DB Connect can only handle Oracle DBs with UTF-8 encoding? Thanks, László
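The underlying symptom is easy to reproduce outside DB Connect. A minimal Python sketch of the charset mismatch (EE8ISO8859P2 is Oracle's name for ISO-8859-2; the sample text is just an illustration):

```python
# Bytes encoded as ISO-8859-2 are not generally valid UTF-8,
# which is the encoding DB Connect expects results to come back in.
text = "árvíztűrő tükörfúrógép"  # Hungarian pangram with ISO-8859-2 characters
iso_bytes = text.encode("iso-8859-2")

try:
    iso_bytes.decode("utf-8")
    print("decoded as UTF-8")
except UnicodeDecodeError as err:
    print("decode failed:", err.reason)

# Decoding with the matching charset round-trips cleanly:
print(iso_bytes.decode("iso-8859-2") == text)  # -> True
```

This matches the documentation quoted above: the fix lives on the database-connection side (returning UTF-8), not in the driver jars.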
Finally found an answer after MUCH searching:  Only starting in v9.0.2303 per this LinkedIn post
Hi, my logs do not appear in the index, and in splunkd.log I get the following:
09-21-2023 16:36:40.693 +0200 INFO AutoLoadBalancedConnectionStrategy [7698 TcpOutEloop] - Connected to idx=xx.xx.xx.xx:16313, pset=0, reuse=0. using ACK.
09-21-2023 16:36:48.003 +0200 INFO TailReader [7705 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
09-21-2023 16:37:10.613 +0200 INFO AutoLoadBalancedConnectionStrategy [7698 TcpOutEloop] - Connected to idx=xx.xx.xx.xx:16313, pset=0, reuse=0. using ACK.
09-21-2023 16:37:18.002 +0200 INFO TailReader [7705 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
My inputs.conf has only the following:
[default]
host = myhostname
index = vcenter-index-name

[monitor:///var/log/remotelogs/vcenter-rep/analytics.log]
sourcetype = "vcenter"
queueSize = 50MB
crcSalt = <SOURCE>
disabled = false
I would mention that I have the same configuration on a different server, where the logs end up in Splunk without a problem, and this message does not appear on those other servers:
09-21-2023 16:37:18.002 +0200 INFO TailReader [7705 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
Try: Your search | chart sparkline(count,1min) count by field  (more than 1min will generate a shorter sparkline)   BR. Norbert