All Topics


I have a dashboard with 4 multiselect boxes and an input file with all possible results for each app. When there are no results for an app, it is reported as 100%. The problem is that the results include all apps and ignore the multiselects, because of the input file. Below are the data and the code.

data.environment.application  data.environment.environment  data.environment.stack  data.componentId
app1  prod  AZ  Acomp
app1  prod  AZ  Bcomp
app2  uat   AW  Zcomp
app2  uat   AW  Ycomp
app2  uat   AW  Xcomp
app3  prod  GC  Mcomp

index=MINE data.environment.application="app2" data.environment.environment="uat"
| eval estack="AW"
| fillnull value="uat" estack data.environment.stack
| where 'data.environment.stack'=estack
| streamstats window=1 current=False global=False values(data.result) AS nextResult BY data.componentId
| eval failureStart=if((nextResult="FAILURE" AND 'data.result'="SUCCESS"), "True", "False"), failureEnd=if((nextResult="SUCCESS" AND 'data.result'="FAILURE"), "True", "False")
| transaction data.componentId, data.environment.application, data.environment.stack startswith="failureStart=True" endswith="failureEnd=True" maxpause=15m
| stats sum(duration) as downtime by data.componentId
| inputlookup append=true all_env_component.csv
| fillnull value=0
| addinfo
| eval uptime=(info_max_time - info_min_time)-downtime, avail=(uptime/(info_max_time - info_min_time))*100, downMins=round(downtime/60, 0)
| rename data.componentId AS Component, avail AS Availability
| fillnull value=100 Availability
| dedup Component
| table Component, Availability

Thank you in advance for the help.
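One thing worth checking: | inputlookup append=true all_env_component.csv appends every row of the lookup regardless of what the multiselects picked, so apps outside the selection reappear and are then fillnull'ed to 100. inputlookup accepts a where clause, so one fix is to constrain the appended rows with the same tokens. A minimal sketch, assuming your multiselects are configured (via value prefix/suffix and delimiter) so that $app_tok$ and $env_tok$ expand to field="value" OR field="value" lists; the token names here are made up, substitute your own:

    ...
    | stats sum(duration) as downtime by data.componentId
    | inputlookup append=true all_env_component.csv where ($app_tok$) AND ($env_tok$)
    | fillnull value=0
    ...

That way the lookup only contributes the components belonging to the selected apps and environments.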
I want to add C:\windows\system32\winevt\logs\Microsoft-Windows-DriverFrameworks-UserMode/Operational  as a stanza in my inputs.conf. How do I write the stanza? Thank you
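For Windows event log channels, inputs.conf references the channel name rather than the .evtx file path. A minimal sketch, assuming the standard Windows Event Log input on the forwarder:

    [WinEventLog://Microsoft-Windows-DriverFrameworks-UserMode/Operational]
    disabled = 0
    renderXml = false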
Is it possible in Splunk to have one props.conf file on one server's Universal Forwarder (UF) for a specific app, and another props.conf file on a different server for the same app, but with one file masking a certain field and the other not?
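For reference, field masking in props.conf is usually done with SEDCMD, and since props.conf is read locally on each host, two forwarders can carry different copies of the same app. One caveat: SEDCMD is applied at parsing time, so on a universal forwarder it only takes effect for data the UF actually parses (e.g. sourcetypes using INDEXED_EXTRACTIONS); otherwise it belongs on the first heavy forwarder or indexer in the path. A minimal sketch, where the sourcetype name and pattern are made-up examples:

    [my_sourcetype]
    SEDCMD-mask_account = s/account=\d+/account=########/g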
I'm trying to achieve the following and hoped someone could help. I have a multivalue field that contains color values, and I would like to know how many entries contain duplicate colors, and what those colors are.

e.g. my data:

colors
blue blue red yellow
red blue red blue red red
green green

Would return something like:

duplicate_color  duplicate_count
blue   2
red    1
green  1

Because 'blue' is present as a duplicate in two entries, 'red' in one entry, and 'green' in one entry. 'yellow' is omitted because it is not a duplicate.

Thank you very much for any help
Steve
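A minimal sketch of one way to do it, assuming the multivalue field is literally named colors: tag each event, expand the values, and keep only colors that occur more than once within the same event:

    ...
    | streamstats count as event_id
    | mvexpand colors
    | stats count as occurrences by event_id colors
    | where occurrences > 1
    | stats count as duplicate_count by colors
    | rename colors as duplicate_color
    | sort - duplicate_count

The first stats counts how many times each color appears inside each event; the second counts, per color, how many events contained it as a duplicate.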
Hello, I have a Splunk query returning my search results:

index="demo1" source="demo2"
| rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
| rex field=_raw "test_field_name=(?P<test_field_name>.+)]:"
| search test_field_name=test_field_name_1
| table _raw id_num
| reverse
| filldown id_num

In the resulting table, _raw may contain *fail_msg1* or *fail_msg2*. I have created a lookup file sample.csv with the following content:

Product,Feature,FailureMsg
ABC,DEF,fail_msg1
ABC,DEF,fail_msg2

I want to check whether a FailureMsg value (fail_msg1 OR fail_msg2) is found in the _raw of my search results and return only the matching lines; if neither is found, return nothing. Could you please share how to write a lookup or inputlookup to fetch these results?
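A minimal sketch of one approach: have a subsearch turn the lookup's FailureMsg column into raw search terms (a subsearch field named search is inserted verbatim into the outer search, with multiple rows OR'd together), adding wildcards so the terms match anywhere in _raw:

    index="demo1" source="demo2"
        [ | inputlookup sample.csv
          | eval search="*" . FailureMsg . "*"
          | fields search ]
    | rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
    ...

Events matching none of the lookup's messages are simply not returned, which gives the "return nothing" behavior.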
I have mixed ADFS log data: mixed in the sense that I have non-XML as well as XML-formatted data in the same event. My requirement is to extract fields from the XML portion.

Ex:
<abc>WoW</abc>
<xyz>SURE</xyz>

Both lines are in the same event. I want two fields called "abc" and "xyz" with the corresponding values WoW and SURE.

Kindly help!
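If the XML fragments always look like <tag>value</tag>, a couple of targeted rex extractions will do it even when the rest of the event is not XML; a minimal sketch, assuming the tag names are literally abc and xyz:

    ... | rex field=_raw "<abc>(?<abc>[^<]+)</abc>"
        | rex field=_raw "<xyz>(?<xyz>[^<]+)</xyz>"

For many tag names, a single generic pass with rex max_match=0 is also possible, but explicit per-tag extractions are simpler to maintain.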
I have two lookups, one with 460K rows and another with 10K rows. I used join to get the 10K results from the 460K rows, but join is not working and returns no results. I tried both table and stats in the queries, with no results either way.

Below are the queries I used:

| inputlookup unix.csv
| eval sys_name = lower(FQDN)
| join sys_name
    [| inputlookup inventory.csv
     | eval sys_name = lower("*".sys_name."*")
     | table Status sys_name host-ip "DNS Name" ]

and

| inputlookup unix.csv
| eval sys_name = lower(FQDN)
| stats values(*) as * by sys_name
| join sys_name
    [| inputlookup inventory.csv
     | eval sys_name = lower("*".sys_name."*")
     | table Status sys_name host-ip "DNS Name" ]

Any help would be greatly appreciated.
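One thing to check: join matches key values exactly, so wrapping sys_name in wildcards ("*".sys_name."*") guarantees that no inner value ever equals an outer value, which would explain the zero results. A minimal sketch with the wildcards removed, assuming both files really share the hostname in sys_name/FQDN:

    | inputlookup unix.csv
    | eval sys_name=lower(FQDN)
    | join type=inner sys_name
        [| inputlookup inventory.csv
         | eval sys_name=lower(sys_name)
         | table Status sys_name host-ip "DNS Name" ]

If one file holds short hostnames and the other FQDNs, normalize both sides to the same form first, e.g. eval sys_name=mvindex(split(lower(FQDN), "."), 0), rather than relying on wildcards. Also keep the 10K lookup on the inner side: join's subsearch is capped (around 50,000 rows by default), so the 460K file would be silently truncated there.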
Hi Splunky people! We are excited to share the newest updates in Splunk Cloud Platform 9.1.2312!

Analysts can benefit from:
- Dashboard Studio email exports: the new ability to schedule email exports from the Actions dropdown.
- Related Content linking Splunk Cloud Platform with Splunk Observability Cloud: the new ability to preview Splunk Observability Cloud infrastructure and application data related to an event investigated in the Splunk Cloud Platform Search and Reporting UI.

Admins can benefit from:
- Field Filtering to limit access to confidential information by redacting or obfuscating fields in events within searches, with optional role-based exemptions.
- The new ability to enable Okta as an IdP to automatically de-provision deactivated users in Splunk.
- Access to Ingest Actions on Splunk Cloud Platform for Google Cloud, including 'filter', 'mask', and 'set index' capabilities.
- An improved Federated Search experience allowing customers to turn a federated provider on/off, and an improved Data Scan Unit (DSU) consumption experience when running fixed parameter settings with absolute time filters.
- Enhanced Search Telemetry for subsearches, enabling Splunk support and engineers to more quickly diagnose and resolve issues related to subsearches.

Check out the full release notes for more details. Python 2 is being deprecated and will no longer be available in coming releases. The jQuery v3.5 library is now set as the platform default; prior jQuery libraries are no longer supported.
Just scanning the $SPLUNK_HOME/etc/system/default/*.conf files for boolean values show a huge disparity.  "0" and "1" exceed "true/false" or "True/False" in commonality.  If linted against the .spec ... See more...
Just scanning the $SPLUNK_HOME/etc/system/default/*.conf files for boolean values shows a huge disparity: "0" and "1" exceed "true/false" or "True/False" in commonality. If linted against the .spec files, most of these would fail. Is there a person who needs to see this to get the default values changed and made self-consistent? The vendor defaults should be the gold standard to measure against. Any and all comments on how I might pursue resolution are welcome.
Splunk Security Essentials (SSE) is an app that can amplify the power of your existing Splunk Cloud Platform, Splunk Enterprise, and Splunk Enterprise Security deployment and help strengthen an organization's security program, no matter their current level of maturity. To help you mature your program even faster, we released version 3.8.0, which includes:
- An updated security data maturity journey that includes four levels instead of six stages
- Faster content searches and loading in security content
- A new MITRE ATT&CK benchmarking dashboard to check how your detections stack up against the top 20 techniques

To learn more about each of these updates, check out the full blog and release notes. Download Splunk Security Essentials on Splunkbase.
Hi guys, thanks in advance. I have a task where I need to pass a parameter to Splunk from an external website, and I already have a dashboard; based on a correlationId we need to populate the result in Splunk. How do I pass parameters from an external website to Splunk?
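A common pattern: dashboard form inputs can be pre-set through URL query parameters, so the external site only needs to build a link. A minimal sketch, assuming a dashboard input whose token is named correlationId (the host, app, and dashboard names are placeholders):

    https://<splunk-host>:8000/en-US/app/<your_app>/<your_dashboard>?form.correlationId=abc-123

With the input's search wired to $correlationId$, opening the link populates the token and runs the search.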
Hello all, can someone help me extract the field 'CmdSet' from Cisco ISE accounting logs? String: '[ CmdAV=show CmdArgAV=license CmdArgAV=usage CmdArgAV=<cr> ]'
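A minimal sketch of one way, assuming you want the full reconstructed command (e.g. "show license usage"): pull the CmdAV verb, collect every CmdArgAV, drop the <cr> marker, and join:

    ... | rex "CmdAV=(?<CmdSet>\S+)"
        | rex max_match=0 "CmdArgAV=(?<CmdArg>[^\s\]]+)"
        | eval CmdSet=CmdSet." ".mvjoin(mvfilter(CmdArg!="<cr>"), " ")

If you only need the verb (show), the first rex alone is enough.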
I have a file with a CRT extension from a third party. I am trying to convert the file into PEM but am unable to get it done. We performed various steps but could not get it converted. Please suggest.
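Assuming OpenSSL is available: a .crt file is usually either already PEM (it starts with -----BEGIN CERTIFICATE-----) or DER-encoded binary. A minimal sketch covering both cases (file names are placeholders):

    # if the file is DER (binary):
    openssl x509 -inform DER -in cert.crt -outform PEM -out cert.pem
    # if it is already PEM text, re-emitting it normalizes the format:
    openssl x509 -inform PEM -in cert.crt -out cert.pem

If both commands fail, the file may actually be a PKCS#7 bundle, which needs a different tool: openssl pkcs7 -print_certs -in cert.crt -out cert.pem.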
I would like to ask about the server.conf and web.conf configuration files: how should they be placed in a clustered environment with 3 indexers and a standalone cluster master? Thanks for the answers.
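A minimal sketch of the conventional layout, assuming default paths (on 9.x the directory also goes by manager-apps): settings meant for all three indexer peers go under the cluster master's master-apps and are pushed as a configuration bundle, while each instance's own settings stay in its local system directory:

    # on the cluster master, distributed to all peers on apply:
    $SPLUNK_HOME/etc/master-apps/_cluster/local/server.conf
    # each instance's own settings (cluster master, search head, peers):
    $SPLUNK_HOME/etc/system/local/server.conf
    $SPLUNK_HOME/etc/system/local/web.conf
    # push the bundle from the master:
    splunk apply cluster-bundle

Note that web.conf configures Splunk Web and is per-instance; it is not normally distributed through the cluster bundle.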
Hello Splunk Experts, let's say I have a table that contains 2 columns as shown below:

Name  S_no
aaa  1
ccc  3
bbb  2
ddd  4
eee  5
fff  6
ggg  1
iii  3
hhh  2
jjj  4
kkk  5
lll  6
mmm  1
ooo  3
nnn  2
ppp  4
qqq  5
rrr  6

Now, I need to sort every 6 rows of the S_no column and populate the table, something like this:

Name  S_no
aaa  1
bbb  2
ccc  3
ddd  4
eee  5
fff  6
ggg  1
hhh  2
iii  3
jjj  4
kkk  5
lll  6
mmm  1
nnn  2
ooo  3
ppp  4
qqq  5
rrr  6

Could you please help me with the query? Much appreciated!
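A minimal sketch, assuming the rows arrive in the order shown and the field is literally named S_no: number the rows, bucket them into groups of 6, then sort by group and S_no:

    ... | streamstats count as row
        | eval grp=ceiling(row/6)
        | sort 0 grp S_no
        | fields - row grp

sort 0 removes the default result limit, and dropping the helper fields restores the original two-column table.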
How do I clean up Splunk disk space through a script? I don't have a script. How do I create a script to help clean up Splunk space and delete old files? I am not getting reporting; it's not uploading any logs.
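A minimal sketch, assuming a default /opt/splunk install; it removes search dispatch artifacts and rotated internal logs, which are the usual safe targets. Index retention itself should be tuned with frozenTimePeriodInSecs/maxTotalDataSizeMB in indexes.conf rather than by deleting bucket files by hand:

    #!/bin/bash
    SPLUNK_HOME=/opt/splunk
    # remove search dispatch directories older than 7 days
    find "$SPLUNK_HOME/var/run/splunk/dispatch" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +
    # remove rotated internal logs older than 30 days
    find "$SPLUNK_HOME/var/log/splunk" -type f -name "*.log.*" -mtime +30 -delete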
Hi Team, our application is hitting a JVM deadlock and hanging indefinitely after upgrading splunk-library-javalogging from 1.6.1 to 1.11.1. Below are the logback and slf4j versions used:

compile 'ch.qos.logback:logback-classic:1.2.3'
compile 'ch.qos.logback:logback-core:1.2.3'
compile 'org.slf4j:slf4j-api:1.7.36'
compile 'net.logstash.logback:logstash-logback-encoder:6.6'
compile "uk.org.lidalia:sysout-over-slf4j:1.0.2"

And below is the deadlocked thread stack:

default task-340
Name[default task-340] Thread ID[13667] Deadlocked on Lock[ch.qos.logback.core.spi.LogbackLock@31309b21] held by thread [splunk-tcp-appender] Thread ID[209]
Thread stack [
ch.qos.logback.core.BasicStatusManager.fireStatusAddEvent(BasicStatusManager.java:85)
ch.qos.logback.core.BasicStatusManager.add(BasicStatusManager.java:59)
ch.qos.logback.core.spi.ContextAwareBase.addStatus(ContextAwareBase.java:80)
ch.qos.logback.core.spi.ContextAwareBase.addInfo(ContextAwareBase.java:85)
com.splunk.logging.TcpAppender.append(TcpAppender.java:291)
com.splunk.logging.TcpAppender.append(TcpAppender.java:40)
ch.qos.logback.core.AppenderBase.doAppend(AppenderBase.java:82)
ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:51)
ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270)
ch.qos.logback.classic.Logger.callAppenders(Logger.java:257)
ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:421)
ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:383)
ch.qos.logback.classic.Logger.info(Logger.java:579)
uk.org.lidalia.sysoutslf4j.context.LogLevel$3.log(LogLevel.java:62)
uk.org.lidalia.sysoutslf4j.context.LoggerAppenderImpl.log(LoggerAppenderImpl.java:81)
uk.org.lidalia.sysoutslf4j.context.LoggerAppenderImpl.logOrPrint(LoggerAppenderImpl.java:71)
uk.org.lidalia.sysoutslf4j.context.LoggerAppenderImpl.appendAndLog(LoggerAppenderImpl.java:58)
uk.org.lidalia.sysoutslf4j.system.SLF4JPrintStreamDelegate.appendAndLog(SLF4JPrintStreamDelegate.java:76)
uk.org.lidalia.sysoutslf4j.system.SLF4JPrintStreamDelegate.delegatePrintln(SLF4JPrintStreamDelegate.java:56)
uk.org.lidalia.sysoutslf4j.system.SLF4JPrintStreamImpl.println(SLF4JPrintStreamImpl.java:111)
com.lehman.elmo.admin.mvc.controller.ELMRestController.getFolder(ELMRestController.java:120)
sun.reflect.GeneratedMethodAccessor116.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808)
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)
javax.servlet.http.HttpServlet.service(HttpServlet.java:503)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
javax.servlet.http.HttpServlet.service(HttpServlet.java:590)
io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
com.lehman.admin.servlet.EtgAdminServletFilter.doFilter(EtgAdminServletFilter.java:141)
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:67)
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:67)
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:94)
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:67)
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
com.lehman.elmo.admin.SimpleCORSFilter.doFilter(SimpleCORSFilter.java:22)
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:67)
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
com.lehman.elmo.admin.ui.filter.ElmoRootFilter.doFilter(ElmoRootFilter.java:94)
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:67)
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
io.undertow.servlet.handlers.Filte

default task-355
Name[default task-355] Thread ID[14086] Deadlocked on Lock[com.splunk.logging.TcpAppender@7a29da17] held by thread [default task-340] Thread ID[13667]
Thread stack [
ch.qos.logback.core.AppenderBase.doAppend(AppenderBase.java:63)
ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:51)
ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270)
ch.qos.logback.classic.Logger.callAppenders(Logger.java:257)
ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:421)
ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:383)
ch.qos.logback.classic.Logger.log(Logger.java:765)
org.apache.logging.slf4j.SLF4JLogger.logMessage(SLF4JLogger.java:234)
org.apache.logging.log4j.spi.AbstractLogger.log(AbstractLogger.java:2117)
org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2205)
org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2159)
org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2142)
org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:2017)
org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1983)
org.apache.logging.log4j.spi.AbstractLogger.info(AbstractLogger.java:1320)
com.lehman.elmo.admin.ui.filter.ElmoRootFilter.doFilter(ElmoRootFilter.java:90)
io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:67)
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:117)
io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
io.undertow.server.handlers.DisableCacheHandler.handleRequest(DisableCacheHandler.java:33)
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:53)
io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:60)
io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
org.wildfly.extension.undertow.deployment.GlobalRequestControllerHandler.handleRequest(GlobalRequestControllerHandler.java:68)
io.undertow.servlet.handlers.SendErrorPageHandler.handleRequest(SendErrorPageHandler.java:52)
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
io.undertow.server.handlers.MetricsHandler.handleRequest(MetricsHandler.java:64)
io.undertow.servlet.core.MetricsChainHandler.handleRequest(MetricsChainHandler.java:59)
io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:275)
io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:79)
io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:134)
io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:131)
io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.jav

splunk-tcp-appender
Name[splunk-tcp-appender] Thread ID[209] Deadlocked on Lock[uk.org.lidalia.sysoutslf4j.system.SLF4JPrintStreamImpl@5a3259b] held by thread [default task-340] Thread ID[13667]
Thread stack [
uk.org.lidalia.sysoutslf4j.system.SLF4JPrintStreamImpl.print(SLF4JPrintStreamImpl.java:246)
ch.qos.logback.core.status.OnPrintStreamStatusListenerBase.print(OnPrintStreamStatusListenerBase.java:52)
ch.qos.logback.core.status.OnPrintStreamStatusListenerBase.addStatusEvent(OnPrintStreamStatusListenerBase.java:58)
ch.qos.logback.core.BasicStatusManager.fireStatusAddEvent(BasicStatusManager.java:87)
ch.qos.logback.core.BasicStatusManager.add(BasicStatusManager.java:59)
ch.qos.logback.core.spi.ContextAwareBase.addStatus(ContextAwareBase.java:80)
ch.qos.logback.core.spi.ContextAwareBase.addInfo(ContextAwareBase.java:85)
com.splunk.logging.TcpAppender.run(TcpAppender.java:130)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:750)
]
Hi Team, can someone help me with what I should do to migrate items from an on-premise search head to Splunk Cloud? For example: lookups, ES use cases, alerts created in Search & Reporting, reports, and dashboards. Any step-by-step resolution would be helpful. Can I do it alone, or would these migrations require on-demand support from Splunk?
Request: dynamically start and stop the Java agent.

We are running version 20.10.0.31173 on Java 8. Please note, we know we are behind on versions; that is a separate issue.
-------------------
What we have observed when starting Wildfly 26.0.1 is the following: the alert condition for 'keycloak.war failed to deploy' was triggered.

Alert: Keycloak.war failed to deploy
Search String: sourcetype=btierservice "parameter 0 of method repositorySchemaController"
Trigger Time: 07:02:34 -0400 on March 26, 2024

This is an intermittent problem at this time. The investigation considers several aspects of the environment, and we would like to control one of them pertaining to the start/stop of AppD. So, is there a way to start AppD after the application has been initialized, thus ruling AppD out?
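On the generic Java side, an agent can be loaded into an already-running JVM through the attach API, which would let instrumentation start only after the application has initialized; whether your AppDynamics agent version supports late attach is something AppD support would need to confirm. A minimal sketch (the pid and jar path are placeholders; on Java 8 this needs a JDK with tools.jar on the classpath):

    import com.sun.tools.attach.VirtualMachine;

    public class LateAttach {
        public static void main(String[] args) throws Exception {
            // args[0]: pid of the running Wildfly JVM, args[1]: path to the javaagent jar
            VirtualMachine vm = VirtualMachine.attach(args[0]);
            try {
                vm.loadAgent(args[1]); // loads the agent into the target JVM
            } finally {
                vm.detach();
            }
        }
    }

The simpler rule-out test is to start Wildfly once with the -javaagent flag removed and see whether the keycloak.war deployment failure still occurs.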
I tried installing Splunk Enterprise 9.2.0.1 on my Linux host to use as the forwarder tier, but when I configure deployment clients from the Universal Forwarder, no devices appear on the Forwarder Management page. To test, I uninstalled Splunk Enterprise 9.2.0.1 on Linux and installed Splunk Enterprise 9.1.3, then configured deployment clients from the Universal Forwarder, and the device list did appear on the Forwarder Management page. So I'm wondering why I can't find the device entries on the Forwarder Management page on Splunk Enterprise 9.2.0.1.
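For reference, the client-side configuration both versions should honor; a minimal sketch, with the deployment server host as a placeholder:

    # $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the UF
    [deployment-client]

    [target-broker:deploymentServer]
    targetUri = <deployment-server-host>:8089

If this is in place and the client still never appears, splunkd.log on the UF (the DeploymentClient and HttpPubSubConnection components) usually shows whether the phone-home to port 8089 is failing, which helps separate a 9.2.0.1-specific issue from a connectivity one.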