All Topics


I have the following messages in the Splunk log, for example:

S1 event has been received for customer 15778
S2 event has been received for customer 15778
S3 event has been received for customer 15778

I want to check that all three "event has been received" messages (S1, S2 and S3) are present for a particular customer. I used an AND condition but was not able to achieve this; please help me with it. In my scenario there are about 100,000 (1 lakh) customers, and I want to check that all three messages are present in the Splunk log for each individual customer; if all three messages are not present for a customer, I need to set up an alert.
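A minimal SPL sketch of one possible approach, assuming the events look like the samples above; the index name your_index and the extracted field names event_name and customer_id are hypothetical placeholders:

index=your_index "event has been received for customer"
| rex "(?<event_name>[Ss]\d) event has been received for customer (?<customer_id>\d+)"
| stats dc(event_name) AS received_events values(event_name) AS events by customer_id
| where received_events < 3

Saved as an alert that triggers when results are returned, this lists the customers that are missing at least one of the three messages within the search time range.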
When I install the TA-Demisto app on a single ad hoc SH node it works fine, but when I install it on the SH cluster nodes using the SH deployer it does not work and gives me this error: "Configuration page failed to load, the server reported internal errors which may indicate you do not have access to this page. Error: Request failed with status code 500 ERR0002". Any idea how to fix that error?
I have installed events service in same EC, controller and EUM node for demo purpose but after event service installation the service shows as critical in and then stopped.  Please advise what could the reason due for this. I have installed the platform componets on windows server 2019 os.  [2023-12-03T12:11:14,393+03:00]  [ERROR]  [dw-58 - PUT /entitysearch/sync]  [c.a.a.s.r.e.j.m.UnknownExceptionMapper]  Unknown exception occurred while processing HTTP request. logCorrelationId=[946fed22-05db-4370-aa1f-3ce469cb4689] [{}] java.lang.RuntimeException: java.net.ConnectException at com.appdynamics.analytics.processor.entitysearch.store.EntitySearchElasticStore.sync(EntitySearchElasticStore.java:114) at com.appdynamics.analytics.processor.entitysearch.resource.EntitySearchResourceImpl.sync(EntitySearchResourceImpl.java:112) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.base/java.lang.reflect.Method.invoke(Unknown Source) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:159) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) at org.glassfish.jersey.internal.Errors.process(Errors.java:292) at org.glassfish.jersey.internal.Errors.process(Errors.java:274) at org.glassfish.jersey.internal.Errors.process(Errors.java:244) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680) at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394) at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799) at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1656) at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at 
org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47) at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) at com.appdynamics.common.framework.util.ClickJackSecurityFilter.doFilter(ClickJackSecurityFilter.java:91) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:505) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:313) at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:722) at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54) at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:181) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:516) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) at java.base/java.lang.Thread.run(Unknown Source) Caused by: java.net.ConnectException: null at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:930) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:300) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:288) at 
co.elastic.clients.transport.rest_client.RestClientTransport.performRequest(RestClientTransport.java:147) at co.elastic.clients.elasticsearch.ElasticsearchClient.bulk(ElasticsearchClient.java:319) at co.elastic.clients.elasticsearch.ElasticsearchClient.bulk(ElasticsearchClient.java:336) at com.appdynamics.analytics.processor.entitysearch.store.EntitySearchElasticStore.sync(EntitySearchElasticStore.java:107) ... 70 common frames omitted Caused by: java.net.ConnectException: null at org.apache.http.nio.pool.RouteSpecificPool.timeout(RouteSpecificPool.java:168) at org.apache.http.nio.pool.AbstractNIOConnPool.requestTimeout(AbstractNIOConnPool.java:561) at org.apache.http.nio.pool.AbstractNIOConnPool$InternalSessionRequestCallback.timeout(AbstractNIOConnPool.java:822) at org.apache.http.impl.nio.reactor.SessionRequestImpl.timeout(SessionRequestImpl.java:183) at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processTimeouts(DefaultConnectingIOReactor.java:210) at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:155) at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:191) at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ... 1 common frames omitted [2023-12-03T12:11:30,392+03:00]  [ERROR]  [dw-58 - GET /v1/elasticsearch/numberOfDataNodes]  [c.a.a.s.r.e.j.m.UnknownExceptionMapper]  Unknown exception occurred while processing HTTP request. logCorrelationId=[a3823670-1430-4185-b355-b5c3be28244e] [{}] java.lang.RuntimeException: java.net.ConnectException at com.appdynamics.analytics.elasticsearch.client.ElasticSearchClientFacade.getDetailedClusterHealth(ElasticSearchClientFacade.java:388) at com.appdynamics.analytics.processor.elasticsearch.settings.ElasticSearchSettingsResource.getNumDataNodes(ElasticSearchSettingsResource.java:77) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.base/java.lang.reflect.Method.invoke(Unknown Source) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) at 
org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) at org.glassfish.jersey.internal.Errors.process(Errors.java:292) at org.glassfish.jersey.internal.Errors.process(Errors.java:274) at org.glassfish.jersey.internal.Errors.process(Errors.java:244) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680) at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394) at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799) at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1656) at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47) at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) at com.appdynamics.common.framework.util.ClickJackSecurityFilter.doFilter(ClickJackSecurityFilter.java:91) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:505) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:313) at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:772) at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54) at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:181) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:516) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277) at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) at java.base/java.lang.Thread.run(Unknown Source) Caused by: java.net.ConnectException: null at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:930) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:300) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:288) at co.elastic.clients.transport.rest_client.RestClientTransport.performRequest(RestClientTransport.java:147) at co.elastic.clients.elasticsearch.cluster.ElasticsearchClusterClient.health(ElasticsearchClusterClient.java:334) at com.appdynamics.analytics.elasticsearch.admin.ElasticSearchAdminFacade.getDetailedClusterHealth(ElasticSearchAdminFacade.java:122) at com.appdynamics.analytics.elasticsearch.client.ElasticSearchClientFacade.getDetailedClusterHealth(ElasticSearchClientFacade.java:386) ... 70 common frames omitted Caused by: java.net.ConnectException: null at org.apache.http.nio.pool.RouteSpecificPool.timeout(RouteSpecificPool.java:168) at org.apache.http.nio.pool.AbstractNIOConnPool.requestTimeout(AbstractNIOConnPool.java:561) at org.apache.http.nio.pool.AbstractNIOConnPool$InternalSessionRequestCallback.timeout(AbstractNIOConnPool.java:822) at org.apache.http.impl.nio.reactor.SessionRequestImpl.timeout(SessionRequestImpl.java:183) at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processTimeouts(DefaultConnectingIOReactor.java:210) at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:155) at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:191) at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ... 1 common frames omitted [2023-12-03T12:11:31,443+03:00]  [ERROR]  [dw-59 - GET /v1/elasticsearch/numberOfDataNodes]  [c.a.a.s.r.e.j.m.UnknownExceptionMapper]  Unknown exception occurred while processing HTTP request. 
logCorrelationId=[54da9d56-e317-4276-8488-8358eb796a14] [{}] java.lang.RuntimeException: java.net.ConnectException at com.appdynamics.analytics.elasticsearch.client.ElasticSearchClientFacade.getDetailedClusterHealth(ElasticSearchClientFacade.java:388) at com.appdynamics.analytics.processor.elasticsearch.settings.ElasticSearchSettingsResource.getNumDataNodes(ElasticSearchSettingsResource.java:77) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.base/java.lang.reflect.Method.invoke(Unknown Source) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) at org.glassfish.jersey.internal.Errors.process(Errors.java:292) at org.glassfish.jersey.internal.Errors.process(Errors.java:274) at org.glassfish.jersey.internal.Errors.process(Errors.java:244) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680) at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394) at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799) at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1656) at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47) at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) at 
com.appdynamics.common.framework.util.ClickJackSecurityFilter.doFilter(ClickJackSecurityFilter.java:91) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:505) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:313) at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:772) at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54) at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:181) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:516) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) at java.base/java.lang.Thread.run(Unknown Source) Caused by: java.net.ConnectException: null at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:930) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:300) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:288) at co.elastic.clients.transport.rest_client.RestClientTransport.performRequest(RestClientTransport.java:147) at co.elastic.clients.elasticsearch.cluster.ElasticsearchClusterClient.health(ElasticsearchClusterClient.java:334) at com.appdynamics.analytics.elasticsearch.admin.ElasticSearchAdminFacade.getDetailedClusterHealth(ElasticSearchAdminFacade.java:122) at 
com.appdynamics.analytics.elasticsearch.client.ElasticSearchClientFacade.getDetailedClusterHealth(ElasticSearchClientFacade.java:386) ... 70 common frames omitted Caused by: java.net.ConnectException: null at org.apache.http.nio.pool.RouteSpecificPool.timeout(RouteSpecificPool.java:168) at org.apache.http.nio.pool.AbstractNIOConnPool.requestTimeout(AbstractNIOConnPool.java:561) at org.apache.http.nio.pool.AbstractNIOConnPool$InternalSessionRequestCallback.timeout(AbstractNIOConnPool.java:822) at org.apache.http.impl.nio.reactor.SessionRequestImpl.timeout(SessionRequestImpl.java:183) at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processTimeouts(DefaultConnectingIOReactor.java:210) at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:155) at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:191) at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ... 1 common frames omitted [2023-12-03T12:11:41,557+03:00]  [WARN ]  [health-report-thread-0]  [c.a.c.u.health.HealthReporterModule]  Report unhealthy [ deadlocks: (healthy)    events-service-api-store / BizOutcome [stage.extract]: (healthy) Rates (Avg per second. Avg of last 5 min) accepted events: [0.000000], discarded events: [0.000000], acceptance ratio: [0.000000]   events-service-api-store / BizOutcome [stage.filter]: (healthy) Rates (Avg per second. Avg of last 5 min) accepted events: [0.000000], discarded events: [0.000000], acceptance ratio: [0.000000]   events-service-api-store / BizOutcome [stage.parse]: (healthy) Rates (Avg per second. Avg of last 5 min) accepted events: [0.000000], discarded events: [0.000000], acceptance ratio: [0.000000]   events-service-api-store / BizOutcome [stage.upsert]: (healthy) Rates (Avg per second. 
Avg of last 5 min) batch size: [0.000000], success: [0.000000], failure: [0.000000]   events-service-api-store / Build information: (healthy) buildName=release/23.7.0.onprem.next-Analytics-release2370onpremnext-23.7.0-246, buildHash=aee38fe8666e1090e98c850a093335567ad2bad4, buildTimestamp=2023-08-03T12:19:23+0000, buildNumber=23.7.0-246, jobName=Analytics-release2370onpremnext, version=23.7.0-246   events-service-api-store / Configuration properties: (healthy) Dynamic properties: [query.default.pagination.size=1000, query.default.scroll.batch.size=1000, ad.accountmanager.accountConfigCacheExpireSeconds=900, query.funnel.batch.scroll.expiry.time.millis=60000, query.default.pagination.offset=0, ad.accountmanager.key.admin_service=REDACTED, ad.es.event.index.maxBulkUpdateSizeBytes=5000000, query.default.results.limit=100, query.nested.enable=false, query.max.bucket.nesting.level=5, ad.es.event.maxUpsertRequestBodySizeBytes=5000000, ad.accountmanager.key.account_service=REDACTED, ad.es.event.index.maxBulkUpdateNumDocs=200, query.exact.analyzed.wildcard.enable=true, query.validator.max.batch.count=20, query.max.forced.range.size=2000, ad.accountmanager.key.disabledKeys=REDACTED, ad.es.event.index.fieldNumberIncrement=500, ad.accountmanager.cacheSize=5000, query.default.aggregation.level.limit=10, schema.validator.max.custom.events.per.account=20, query.max.pagination.offset=10000, ad.metric.processor.enabled.retry.onMpFailure=true, query.max.pagination.size=10000, query.max.results.series.limit=30000, ad.es.response.filter.double.bucketFunctionsRequiredToConvertKeyToLong=REDACTED, query.max.concat.arguments=100, ad.accountmanager.eumAccountCacheExpireSeconds=3600, query.max.results.limit=10000, query.standard.search.call.timeout.millis=300000, query.max.aggregation.level.limit=100, query.funnel.batch.enable.processing=false, ad.env=dev, ad.feature.adql.454.functions.values={"all": ["now","toDate","toFloat","toInt","toString","ifNull","round"], "none": []}, ad.es.event.index.fieldIncrementThresholdPercentage=80, ad.metric.processor.enabled.clusters=[], query.max.export.threads=10, ad.es.cluster.name=[appdynamics-events-service-cluster], ad.metric.processor.enabled.accounts=[], ad.accountmanager.keyNamesCSV=REDACTED, query.max.export.results.limit=65000, ad.es.response.filter.percentile.values.nullToZero=false, query.max.aggregation.level.limit.customers=10000, ad.metric.processor.enabled.configCreateAll=false, ad.es.event.index.isDocumentReplaceEnabled=false, ad.metric.processor.enabled.customEvents=false, query.max.scroll.batch.size=10000, ad.es.rolling.maxShardsPerIndex=25, query.inner.hits.size.limit=100, query.aggressive.search.call.timeout.millis=30000, ad.es.request.filter.partialPathsMap={"browserrecord": {"domreadytime": "metrics.domreadytime","enduserresponsetime": "metrics.enduserresponsetime","firstbytetime": "metrics.firstbytetime"},"mobilesessionrecord": {"durationMS": "metrics.durationMS"},"sessionrecord": {"durationMS": "metrics.durationMS","pagename": "browserRecords.pagename","pagetype": "browserRecords.pagetype"}}, ad.es.event.maxPublishRequestBodySizeBytes=1000000, ad.accountmanager.key.service=REDACTED, query.funnel.batch.scroll.size=10000, ad.es.event.index.maxFieldsPerIndex=3000, query.scroll.mode.enable=true, query.funnel.batch.concurrency.limit=3, ad.accountmanager.key.deprecatedKeys=REDACTED, ad.accountmanager.key.controller=REDACTED, ad.es.healthCheck.reservoir.expDecayFactor=0.015, ad.es.healthCheck.updateInterval=1 minutes, ad.metric.processor.disabled.accounts=[], 
query.default.scroll.expiry.millis=60000, ad.accountmanager.key.jf=REDACTED, ad.es.response.filter.boolean.eventTypesToFilter=[mobilesnapshot, mobilecrashreport, sessionrecord, mobilesessionrecord, synthsessionrecord], query.funnel.internal.results.limit=50000, query.max.top.level.aggregation.level.limit=1000, ad.es.healthCheck.reservoir.size=100, query.reject.time.unbounded.queries=false, ad.accountmanager.key.ops=REDACTED, ad.feature.adql.454.functions.envs={"default": "none", "dev": "all"}, ad.accountmanager.key.slm=REDACTED, query.cardinality.precision.threshold=-1, publish.validator.max.fields.per.event=255, ad.accountmanager.key.mds=REDACTED, ad.accountmanager.key.eum=REDACTED]   events-service-api-store / Connection to ElasticSearch: (unhealthy) java.net.ConnectException java.lang.RuntimeException: java.net.ConnectException at com.appdynamics.analytics.elasticsearch.client.ElasticSearchClientFacade.getNodesStatsResponse(ElasticSearchClientFacade.java:451) at com.appdynamics.analytics.processor.elasticsearch.clusters.ClusterHealthStateReporter.isPartOfCluster(ClusterHealthStateReporter.java:38) at com.appdynamics.analytics.processor.elasticsearch.clusters.AbstractClusterHealthStateReporter.updateHistogram(AbstractClusterHealthStateReporter.java:106) at com.appdynamics.analytics.processor.elasticsearch.clusters.AbstractClusterHealthStateReporter.updateHealthCheckResult(AbstractClusterHealthStateReporter.java:93) at com.appdynamics.common.util.health.AsynchronousHealthCheckable.runUpdateHealthCheckResult(AsynchronousHealthCheckable.java:123) at com.appdynamics.common.util.health.AsynchronousHealthCheckable$2.run(AsynchronousHealthCheckable.java:147) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) at java.base/java.util.concurrent.FutureTask.runAndReset(Unknown Source) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.base/java.lang.Thread.run(Unknown Source) Caused by: java.net.ConnectException at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:930) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:300) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:288) at com.appdynamics.analytics.elasticsearch.admin.ElasticSearchAdminFacade.getNodesStatsResponse(ElasticSearchAdminFacade.java:443) at com.appdynamics.analytics.elasticsearch.client.ElasticSearchClientFacade.getNodesStatsResponse(ElasticSearchClientFacade.java:449) ... 
11 more Caused by: java.net.ConnectException at org.apache.http.nio.pool.RouteSpecificPool.timeout(RouteSpecificPool.java:168) at org.apache.http.nio.pool.AbstractNIOConnPool.requestTimeout(AbstractNIOConnPool.java:561) at org.apache.http.nio.pool.AbstractNIOConnPool$InternalSessionRequestCallback.timeout(AbstractNIOConnPool.java:822) at org.apache.http.impl.nio.reactor.SessionRequestImpl.timeout(SessionRequestImpl.java:183) at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processTimeouts(DefaultConnectingIOReactor.java:210) at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:155) at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:191) at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ... 1 more     events-service-api-store / Connection to [http://0.0.0.0:9080/biz_outcome] with [DefaultBizOutcomeDefinitionClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/events/query] with [DefaultAsyncQueryEventsClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/events/query] with [DefaultQueryEventsClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/v1/account] with [DefaultAccountServiceClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/v1/accounts/meters] with [RestMeterServiceClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/v1/admin/accounts] with [RestAccountsServiceClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/v1/admin] with [AsyncJobScannerClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/v1/admin] with [DefaultEventServiceAdminClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/v1/jf] with [RestJobFrameworkClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/v1/querymanagement/cancel] with [RestQueryManagementClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/v1/slm] with [RestSlmPerfConfigsClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/v3/accounts] with [RestEventTypeClient]: (healthy)    events-service-api-store / Connection to [http://0.0.0.0:9080/v3/events] with [DefaultEventServiceClient]: (healthy)    events-service-api-store / Resource [_ping - GET]: (healthy) Rates (Avg per second. Avg of last 5 min) success: [0.122272], user error: [0.000000], timeout: [0.000000], error: [0.000000]   events-service-api-store / Resource [entitysearch/sync - PUT]: (healthy) Rates (Avg per second. Avg of last 5 min) success: [0.000000], user error: [0.000000], timeout: [0.000000], error: [0.074595]   events-service-api-store / Resource [v1/account - POST]: (healthy) Rates (Avg per second. Avg of last 5 min) success: [0.000000], user error: [0.000000], timeout: [0.000000], error: [0.059934]   events-service-api-store / Resource [v1/elasticsearch/numberOfDataNodes - GET]: (healthy) Rates (Avg per second. Avg of last 5 min) success: [0.000000], user error: [0.000000], timeout: [0.000000], error: [0.053746]   events-service-api-store / Resource [v1/store/report - GET]: (healthy) Rates (Avg per second. 
Avg of last 5 min) success: [0.074411], user error: [0.000000], timeout: [0.000000], error: [0.000000]   events-service-api-store / SystemAccessKeyAuthHandler: (healthy) deprecated: [], disabled: []   events-service-api-store / jobframework-module: (unhealthy) Job Framework instanceId: [SYS2.ZOOM.COM], number of jobs executed: [0], running since [<null>], currently executing jobs []   events-service-api-store / queues: (healthy) [1] queues [[biz-outcome-incoming-events] ratio: [0.00], size: [0], capacity: [1000]] ] [2023-12-03T12:11:47,688+03:00]  [INFO ]  [cache-refresh-scheduler-0]  [c.a.a.p.event.meter.DefaultMeters]  Quota refresh executor stats : [java.util.concurrent.ThreadPoolExecutor@29bd2b0f[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]] [2023-12-03T12:11:47,688+03:00]  [INFO ]  [cache-refresh-scheduler-0]  [c.a.a.p.event.meter.DefaultMeters]  Submitted [0] tasks to refresh the quota remaining caches [2023-12-03T12:11:48,353+03:00]  [ERROR]  [main]  [c.a.a.p.e.n.s.ElasticsearchDependencyModule]  Elasticsearch was never healthy. com.github.rholder.retry.RetryException: Retrying failed to complete successfully after 40 attempts. at com.github.rholder.retry.Retryer.call(Retryer.java:174) at com.appdynamics.analytics.processor.elasticsearch.node.single.ElasticsearchDependencyModule.waitForHealthiness(ElasticsearchDependencyModule.java:96) at com.appdynamics.analytics.processor.elasticsearch.node.single.ElasticsearchDependencyModule.waitForHealthinessAndHandle(ElasticsearchDependencyModule.java:65) at com.appdynamics.analytics.processor.elasticsearch.node.single.ElasticsearchDependencyModule.lambda$registerDependencyMonitor$0(ElasticsearchDependencyModule.java:50) at io.dropwizard.lifecycle.setup.LifecycleEnvironment$ServerListener.lifeCycleStarted(LifecycleEnvironment.java:117) at org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:194) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:74) at io.dropwizard.cli.ServerCommand.run(ServerCommand.java:53) at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:45) at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:87) at io.dropwizard.cli.Cli.run(Cli.java:78) at io.dropwizard.Application.run(Application.java:94) at com.appdynamics.common.framework.AbstractApp.callRunServer(AbstractApp.java:274) at com.appdynamics.common.framework.AbstractApp.runUsingFile(AbstractApp.java:268) at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:255) at com.appdynamics.common.framework.AbstractApp.runUsingTemplate(AbstractApp.java:175) at com.appdynamics.analytics.processor.AnalyticsService.main(AnalyticsService.java:76) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.base/java.lang.reflect.Method.invoke(Unknown Source) at com.appdynamics.common.executor.command.windows.AppRunServiceInternalCommand.execute(AppRunServiceInternalCommand.java:91) at com.appdynamics.common.executor.CommandExecutor.execute(CommandExecutor.java:38) at com.appdynamics.analytics.processor.executor.AnalyticsServiceExecutor.main(AnalyticsServiceExecutor.java:99) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown 
Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.base/java.lang.reflect.Method.invoke(Unknown Source) at com.appdynamics.common.executor.standalone.ProxyMain.callActualMain(ProxyMain.java:165) at com.appdynamics.common.executor.standalone.ProxyMain.main(ProxyMain.java:106) Caused by: java.net.ConnectException: null at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:930) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:300) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:288) at co.elastic.clients.transport.rest_client.RestClientTransport.performRequest(RestClientTransport.java:147) at co.elastic.clients.elasticsearch.indices.ElasticsearchIndicesClient.exists(ElasticsearchIndicesClient.java:620) at co.elastic.clients.elasticsearch.indices.ElasticsearchIndicesClient.exists(ElasticsearchIndicesClient.java:636) at com.appdynamics.analytics.processor.util.startup.StartupHelpers.startupIndexExists(StartupHelpers.java:107) at com.appdynamics.analytics.processor.util.startup.StartupHelpers.upsertStartupIndex(StartupHelpers.java:97) at com.appdynamics.analytics.processor.util.startup.StartupHelpers.lambda$isElasticsearchHealthy$2(StartupHelpers.java:87) at com.appdynamics.common.util.locks.InterProcessClusterLock.acquireAndExecute(InterProcessClusterLock.java:35) at com.appdynamics.analytics.processor.util.startup.StartupHelpers.isElasticsearchHealthy(StartupHelpers.java:85) at com.appdynamics.analytics.processor.elasticsearch.node.single.ElasticsearchDependencyModule.lambda$waitForHealthiness$2(ElasticsearchDependencyModule.java:101) at com.github.rholder.retry.AttemptTimeLimiters$NoAttemptTimeLimit.call(AttemptTimeLimiters.java:78) at com.github.rholder.retry.Retryer.call(Retryer.java:160) ... 29 common frames omitted Caused by: java.net.ConnectException: null at org.apache.http.nio.pool.RouteSpecificPool.timeout(RouteSpecificPool.java:168) at org.apache.http.nio.pool.AbstractNIOConnPool.requestTimeout(AbstractNIOConnPool.java:561) at org.apache.http.nio.pool.AbstractNIOConnPool$InternalSessionRequestCallback.timeout(AbstractNIOConnPool.java:822) at org.apache.http.impl.nio.reactor.SessionRequestImpl.timeout(SessionRequestImpl.java:183) at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processTimeouts(DefaultConnectingIOReactor.java:210) at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:155) at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:191) at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) at java.base/java.lang.Thread.run(Unknown Source) [2023-12-03T12:11:48,354+03:00]  [ERROR]  [main]  [c.a.a.p.e.n.s.ElasticsearchDependencyModule]  Elasticsearch never successfully started up, stopping server startup. 
[2023-12-03T12:11:48,356+03:00]  [INFO ]  [main]  [c.a.common.framework.util.SimpleApp]  Stopping [events-service-api-store] [2023-12-03T12:11:48,384+03:00]  [INFO ]  [main]  [o.e.jetty.server.AbstractConnector]  Stopped application@4cf1ec2{HTTP/1.1, (http/1.1)}{0.0.0.0:9080} [2023-12-03T12:11:48,385+03:00]  [INFO ]  [main]  [o.e.jetty.server.AbstractConnector]  Stopped admin@16eb01d4{HTTP/1.1, (http/1.1)}{0.0.0.0:9081} [2023-12-03T12:11:48,385+03:00]  [INFO ]  [main]  [o.e.j.server.handler.ContextHandler]  Stopped i.d.j.MutableServletContextHandler@40bac624{/,null,STOPPED} [2023-12-03T12:11:48,385+03:00]  [INFO ]  [main]  [o.e.j.server.handler.ContextHandler]  Stopped i.d.j.MutableServletContextHandler@19740583{/,null,STOPPED} [2023-12-03T12:11:48,398+03:00]  [INFO ]  [main]  [c.a.a.p.c.m.s.DefaultMetricCorrelationService]  Stopped metric correlation service. [2023-12-03T12:11:48,398+03:00]  [INFO ]  [main]  [c.a.a.p.c.m.s.FileBasedCorrelationPersistor]  Stopped metric correlation file cleanup thread. [2023-12-03T12:11:48,399+03:00]  [INFO ]  [main]  [c.a.a.pipeline.framework.Pipelines]  Pipelines have stopped [2023-12-03T12:11:48,399+03:00]  [INFO ]  [main]  [c.a.a.p.j.JobFrameworkModule]  Stopping job framework scheduler [2023-12-03T12:11:48,399+03:00]  [INFO ]  [main]  [org.quartz.core.QuartzScheduler]  Scheduler QuartzScheduler_$_SYS2.ZOOM.COM shutting down. [2023-12-03T12:11:48,399+03:00]  [INFO ]  [main]  [org.quartz.core.QuartzScheduler]  Scheduler QuartzScheduler_$_SYS2.ZOOM.COM paused. [2023-12-03T12:11:48,400+03:00]  [INFO ]  [main]  [org.quartz.core.QuartzScheduler]  Scheduler QuartzScheduler_$_SYS2.ZOOM.COM shutdown complete. [2023-12-03T12:11:48,400+03:00]  [INFO ]  [main]  [c.a.a.p.j.JobFrameworkModule]  Stopped job framework scheduler [2023-12-03T12:11:48,400+03:00]  [INFO ]  [main]  [c.a.common.util.event.EventBuses]  Stopped [2023-12-03T12:11:49,415+03:00]  [WARN ]  [main]  [c.a.c.u.health.HealthReporterModule]  Task will be forcibly stopped now if it has not already stopped [2023-12-03T12:11:49,415+03:00]  [INFO ]  [main]  [c.a.c.u.health.HealthReporterModule]  Stopped [2023-12-03T12:11:49,423+03:00]  [INFO ]  [main]  [c.a.common.framework.util.SimpleApp]  Stopped [events-service-api-store] [2023-12-03T12:11:49,423+03:00]  [INFO ]  [main]  [c.a.a.p.e.n.s.ElasticsearchDependencyModule]  Successfully stopped all server components
Hello, what is the best way to calculate the sourcetype size trend over time, by index and at sourcetype level? I tried these two options but couldn't find a way to see the trend:

index=_internal source=*license_usage.log* type=Usage idx=* | eval GB=b/1024/1024/1024 | stats sum(GB) by st idx

index=* | eval raw_len=len(_raw)/1024/1024/1024 | stats sum(raw_len) as totalsize count as NumberOfEvent by index sourcetype | sort -NumberOfEvent | fields - NumberOfEvent
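A minimal sketch of one possible way to see the trend over time, building on the first search above (st and idx are the sourcetype and index fields of license_usage.log; the daily span is an assumption):

index=_internal source=*license_usage.log* type=Usage idx=*
| eval GB=b/1024/1024/1024
| bin _time span=1d
| stats sum(GB) AS GB by _time idx st

Feeding this into a timechart or a line chart split by idx or st would show how each sourcetype's licensed volume evolves per day.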
Dec 2 08:46:55 server1 sudo[3461907]: ib12345 : TTY=pts/0 ; PWD=/home/ib12345 ; USER=root ; COMMAND=/bin/su - webadmin

From the event above I would like to extract the upi (ib12345) and the service_account (webadmin). Sometimes the part after COMMAND=/bin/su - is empty, for example:

Dec 2 09:02:17 server1 sudo: ib12345 : TTY=pts/0 ; PWD=/home/ib12345 ; USER=root ; COMMAND=/bin/su -

In that case I still need to extract ib12345 from the data.
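A minimal SPL sketch of one possible extraction, assuming the raw events look exactly like the two samples above (the field names upi and service_account are simply the names asked for):

... | rex "sudo(?:\[\d+\])?:\s+(?<upi>\S+)\s+:"
    | rex "COMMAND=/bin/su -\s*(?<service_account>\S+)?$"

The second capture group is optional, so service_account stays empty when nothing follows /bin/su -.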
Hey all, I'm a Splunk beginner. I'm looking to create a query to be used as an alert, specifically to identify servers that are not in the _inventory, i.e. those not being monitored by Splunk. If anyone could share insights or examples, thank you.
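A minimal sketch of one common pattern, assuming a lookup of expected servers exists; the lookup name server_inventory.csv and its host column are hypothetical placeholders:

| inputlookup server_inventory.csv
| fields host
| search NOT [ | tstats count where index=* by host | fields host ]

This returns the hosts listed in the inventory lookup that have sent no events in the search time range, which can be saved as an alert that triggers when results are found.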
I am trying to send Cisco SD-WAN router logs to Splunk Cloud. I have installed the Universal Forwarder on the log server running syslog-ng and I am able to forward text-based logs. However, the firewall logs are output in HSL, which is in NetFlow v9 format. How can I get this type of data into Splunk Cloud?
Hi, is it possible for someone to help me reformat the events below to align with the structure of blacklist3, organizing them into their respective blacklists or potentially combining them into a unified blacklist?

blacklist3 = $XmlRegex="<EventID>4688<\/EventID>.*<Data Name=('NewProcessName'|'ParentProcessName')>[C-F]:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi))\.exe"

Tanium events:
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\Tools\\StdUtils\\TaniumExecWrapper\.exe|
C:\\Program Files (\x86\)\\Tanium\\Tanium Client\\Patch\\tools\\TaniumExecWrapper\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumClient\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\Patch\\tools\\TaniumFileInfo\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumCX\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\python38\\TPython\.exe|
C:\Program Files (x86)\Tanium\Tanium Client\Tools\Patch\7za.exe

Windows Defender:
C:\Program Files\Windows Defender Advanced Threat Protection\MsSense.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseIR.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseCM.exe
C:\ProgramData\Microsoft\Windows Defender\Platform\.*\MpCmdRun.exe
C:\ProgramData\Microsoft\Windows Defender\Platform\.*\MsMpEng.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\DataCollection\.*\OpenHandleCollector.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseNdr.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\.*\SenseCM.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\.*\SenseIR.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\.*\MsSense.exe
C:\Program Files\Windows Defender\MpCmdRun.exe
C:\Program Files\Windows Defender\MsMpEng.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseTVM.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\10.8560.25364.1036\SenseTVM.exe

Rapid7:
C:\Program Files\Rapid7\Insight Agent\components\insight_agent\3.2.4.63\ir_agent.exe
C:\Program Files\Rapid7\Insight Agent\components\insight_agent\4.0.0.1\ir_agent.exe
C:\Program Files\Rapid7\Insight Agent\ir_agent.exe
C:\\Program Files\\Rapid7\\Insight Agent\\components\\insight_agent\\.*\\get_proxy\.exe|

Azure:
C:\Program Files\AzureConnectedMachineAgent\ExtensionService\GC\gc_service.exe
C:\Program Files\AzureConnectedMachineAgent\GCArcService\GC\gc_arc_service.exe
C:\Program Files\AzureConnectedMachineAgent\GCArcService\GC\gc_service.exe
C:\Program Files\AzureConnectedMachineAgent\GCArcService\GC\gc_worker.exe
C:\Program Files\AzureConnectedMachineAgent\azcmagent.exe

Gytpol:
C:\\Program Files\\WindowsPowerShell\\Modules\\gytpol\\Client\\fw.*\\GytpolClientFW.*\.exe|

Forescout:
C:\Program Files\ForeScout SecureConnector\SecureConnector.exe

Thanks.
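A minimal sketch of what one of these groups could look like in the blacklist3 style, shown here only for the Tanium paths; the EventID, the Data element names and the quoting are copied from blacklist3, while the exact path alternation is an assumption to verify against real 4688 events:

blacklist4 = $XmlRegex="<EventID>4688<\/EventID>.*<Data Name=('NewProcessName'|'ParentProcessName')>C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\(?:Tools\\StdUtils\\TaniumExecWrapper|Patch\\tools\\(?:TaniumExecWrapper|TaniumFileInfo)|TaniumClient|TaniumCX|python38\\TPython|Tools\\Patch\\7za)\.exe"

The other vendor groups could be added either as separate blacklist5, blacklist6, ... entries of the same shape or folded into one larger alternation.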
Hi, could anyone please help me convert this blacklist to an XML regex?

blacklist1 = EventCode="4662" Message="Object Type:(?!\s*(groupPolicyContainer|computer|user))"

Thanks.
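A minimal sketch of the same filter expressed in the $XmlRegex form used in the neighbouring post; whether your XML-rendered 4662 events carry the resolved class name (groupPolicyContainer, computer, user) or a GUID in the ObjectType data element is an assumption you would need to verify against a sample event:

blacklist1 = $XmlRegex="<EventID>4662<\/EventID>.*<Data Name='ObjectType'>(?!\s*(?:groupPolicyContainer|computer|user))"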
I have a lookup file called TA_feeds.csv with six columns, labeled below, with multiple rows similar to these:

index | sourcetype | source | period | App | Input
Azure | mscs:Azure:VirtualMachines | /subscription/1111-2222-3333-4444/* | 42300 | SPlunk_Cloud | AZ_VM_Feeds
AD | Azure:Signin | main_tenant | 360 | Azure_App | AD_SignIn

I use this SPL:

[| inputlookup TA_feeds.csv | eval earliest=0-period."s" | fields index sourcetype source earliest | format] | stats count by index sourcetype source

which iterates through the lookup, searches the relevant indexes for the data one row at a time, and generates a count for each input type. The problem is that if a row in the lookup does not generate any data, then there is no entry in the stats output. What I need is to be able to show when a feed is zero, i.e. | search count=0, but I can't figure out how to generate the zero entries.
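A minimal sketch of one possible way to surface the zero rows, by appending every lookup row with count=0 and keeping the larger of the two counts; it assumes the index/sourcetype/source values in the lookup match the values on the events (wildcarded source values would need extra handling):

[| inputlookup TA_feeds.csv | eval earliest=0-period."s" | fields index sourcetype source earliest | format]
| stats count by index sourcetype source
| append [| inputlookup TA_feeds.csv | fields index sourcetype source | eval count=0]
| stats max(count) AS count by index sourcetype source
| search count=0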
| eval logMsgTimestampInit = logMsgTimestamp | eval ID_SERVICE= mvappend(ID_SERVICE_1,ID_SERVICE_2) , TYPE= mvappend(TYPE1,TYPE2) | table ID_SERVICE TYPE

ID_SERVICE | TYPE | TIME
asd232, afg567 | mechanic_234, hydraulic_433 | 2023-12-01 08:45:00
cvf455 | hydraulic_787 | 2023-12-01 08:41:00
bjf347 | mechanic_343 | 2023-12-01 08:40:00

Hi dears, I have the following issue: some cells (the first row above, which was highlighted in red) contain two values per cell, for example in the ID_SERVICE column, because the payload contains two service IDs in the same message.

What do I need? I need to split these cells every time this occurs. I tried to use mvexpand, but unfortunately it makes a mess of the table: it duplicates the rows, creating another row for each value of the first column.

... | query_search | mvexpand ID_SERVICE | mvexpand TYPE | table ID_SERVICE TYPE TIME

ID_SERVICE | TYPE | TIME
asd232 | mechanic_234 | 2023-12-01 08:45:00
asd232 | hydraulic_433 | 2023-12-01 08:45:00
afg567 | mechanic_234 | 2023-12-01 08:45:00
afg567 | hydraulic_433 | 2023-12-01 08:45:00
cvf455 | hydraulic_787 | 2023-12-01 08:41:00
bjf347 | mechanic_343 | 2023-12-01 08:40:00

Since the multivalue row shares the same timestamp (TIME column), I would like to split the values pairwise and copy the same timestamp to each resulting row, so the desired output is:

ID_SERVICE | TYPE | TIME
asd232 | mechanic_234 | 2023-12-01 08:45:00
afg567 | hydraulic_433 | 2023-12-01 08:45:00
cvf455 | hydraulic_787 | 2023-12-01 08:41:00
bjf347 | mechanic_343 | 2023-12-01 08:40:00

Please help me.
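A minimal SPL sketch of one common way to keep the values paired, assuming ID_SERVICE and TYPE always have the same number of values per event (the mvzip / mvexpand / split pattern; the "##" delimiter is arbitrary):

... | query_search
| eval pair=mvzip(ID_SERVICE, TYPE, "##")
| mvexpand pair
| eval ID_SERVICE=mvindex(split(pair, "##"), 0), TYPE=mvindex(split(pair, "##"), 1)
| table ID_SERVICE TYPE TIME

Because only pair is expanded, each resulting row keeps the original TIME value.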
][ERROR][pub-#32738][AssociationRemoteProcessor] Exception while running association: javax.cache.CacheException: class org.apache.ignite.IgniteInterrup
[2023-11-09T06:06:02,015][ERROR][pub-#19230][FedPledgingFlaggingRemoteProcessor] No rejection criteria found for the specified key: CO.

Hi, can anyone guide me on how to extract the highlighted text from the events above?
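The highlighting is lost in plain text, so purely as an assumption this sketch extracts the processor name and the free-text message that follows it (the field names processor and error_message are arbitrary):

... | rex "\]\[ERROR\]\[[^\]]+\]\[(?<processor>[^\]]+)\]\s+(?<error_message>.+)"

Adjust the capture groups if the highlighted part was something narrower, for example only the exception class or the value after "specified key:".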
I have a few scheduled jobs running from a TA. Multiple of them have | collect index=summary at the end of the SPL. For some of them, when they run I get 0 results with the warning "no results to summary index". I reran the job manually and can see the results. I can see there's a macro error in the job that did not have any results, but another job with very similar SPL runs fine. When I looked at search.log, the one thing that stood out was in the job that ran with results: its log contained "user context: Splunk-system-user". The job that did not return results did not have "user context: Splunk-system-user". My question is: what sets the user context, and what overrides it (if possible)? I want to see if this is the cause of my problems. Thanks.
Hello all, do we have any method or workaround to export the results of a trellis-layout visualization in a dashboard to an exported PDF? Any suggestions or inputs will be very helpful. Thank you, Taruchit
I have a custom sourcetype that has the following advanced setting:

Name/Value
EXTRACT-app : EXTRACT-app field extraction/^(?P<date>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?P<host>[^ ]+) (?P<service>[a-zA-Z\-]+)_app  (?P<level>\w+)⏆(?P<controller>[^⏆]*)⏆(?P<thread>[^⏆]*)⏆((?P<flowId>[a-z0-9]*)⏆)?(?P<message>[^⏆]*)⏆(?P<exception>[^⏆]*)

I updated the regex to be slightly less restrictive about the whitespace following the "_app" portion:

Name/Value
EXTRACT-app : EXTRACT-app field extraction/^(?P<date>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?P<host>[^ ]+) (?P<service>[a-zA-Z\-]+)_app\s+(?P<level>\w+)⏆(?P<controller>[^⏆]*)⏆(?P<thread>[^⏆]*)⏆((?P<flowId>[a-z0-9]*)⏆)?(?P<message>[^⏆]*)⏆(?P<exception>[^⏆]*)

(So instead of matching exactly two spaces after `_app`, we match one or more whitespace characters.) After saving this change, it appears Splunk Cloud still uses the previous regex: events that include only a single space after "_app" don't get their fields extracted. I thought perhaps I needed to wait a little while for the change to propagate, but I made the change yesterday and it still doesn't extract the fields today. Is there anything else I need to do to have the regex change take effect?
Hello, I have the following situation: I have ETL logs in Splunk, and I've already extracted some data via search-time fields. The log structure is the following:

Fri Dec 1 16:00:59 2023 [extracted_pid] [extracted_job_name] [extracted_index_operation_incremental] extracted_message

Example:

Fri Dec 1 07:57:40 2023 [111111][talend_job_name] [100] End job
Fri Dec 1 06:50:40 2023 [111111][talend_job_name] [70] Start job
Fri Dec 1 06:50:39 2023 [111111][talend_job_name1] [69] End job
Fri Dec 1 05:40:40 2023 [111111][talend_job_name1] [30] Start job
Fri Dec 1 05:40:39 2023 [111111][talend_job_name2] [29] End job
Fri Dec 1 02:50:40 2023 [111111][talend_job_name2] [1] Start job

Expected:

PID | NAME | EXEC_TIME
111111 | talend_job_name | 1h 7min
111111 | talend_job_name1 | 1h 10min
111111 | talend_job_name2 | 2h 50min

What I was asked to do is to extract a table containing the job name and the execution time, one row per PID (a job can be executed multiple times, but each run has a different PID). It is not necessary that a job starts with index 1, since every subjob inside a job is logged under a separate name (for example, the "import all" job could contain 10 subjobs, each with a different name). My idea would be a query that uses the PID and the job name combined as a primary key, taking the start time from the lowest extracted_index_operation_incremental for that key and the end time from the highest extracted_index_operation_incremental for that key. Any help? Thanks for any reply.
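A minimal sketch of one possible query, assuming the fields extracted_pid, extracted_job_name and extracted_message are already extracted on the events; the index name etl_logs is a placeholder, and min/max of _time per PID + job name is used in place of the lowest/highest incremental index:

index=etl_logs (extracted_message="Start job" OR extracted_message="End job")
| stats min(_time) AS start_time max(_time) AS end_time by extracted_pid, extracted_job_name
| eval EXEC_TIME=tostring(end_time - start_time, "duration")
| rename extracted_pid AS PID, extracted_job_name AS NAME
| table PID NAME EXEC_TIME

This gives the same result as using the incremental index as long as each PID + job name pair has exactly one Start job and one End job line.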
I am working on upgrading a heavy forwarder instance that is running an out-of-support version, 7.3.3. In order to upgrade it to 9.0.1, is there another version it must be upgraded to before bringing it to 9.0.1? I searched for the upgrade path with no luck. Thanks.
Hello, we need to patch the OS of our Splunk Enterprise cluster, which is distributed across two sites, A and B. We will start the activity on site A, which contains one deployer, two SHs, one MN, three indexers and three HFs. Site B contains one SH, three indexers and one HF, and will be updated later. Considering that the OS patching will require a restart of the nodes, can you please tell me the Splunk best practice for restarting the Splunk nodes? I'd start with the SH nodes, then the indexer nodes, the deployer, the MN and the HFs, all one by one. Do I have to enable maintenance mode before restarting each node and disable it afterwards, or is it sufficient to stop Splunk on each node and restart the machine? Thank you, Andrea
Hello team, I have a weird issue that I'm struggling to troubleshoot. A month ago, I realized that my WinEventLog logs were consuming too much of my license, so I decided to index them in the XmlWinEventLog format. To do this, I simply modified the inputs.conf file of my Universal Forwarder. I changed from this configuration:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\sgroupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\sgroupPolicyContainer)"
renderXml = false
sourcetype = WinEventLog
index = wineventlog

to this configuration:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\sgroupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\sgroupPolicyContainer)"
renderXml = true
sourcetype = XmlWinEventLog
index = wineventlog

Then I started receiving events and my license usage went down, which made me happy. However, on closer observation, I realized that I wasn't receiving all the events as before: the event frequency of the XmlWinEventLog logs is now erratic, which is visible both in the event timelines and in the indexing metrics (screenshots not shown). On the other hand, with the WinEventLog format, I have no issues. I tried reinstalling the UF, there are no interesting errors in splunkd.log, and I am out of ideas for troubleshooting. Thank you for your help.