All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all, I want to analyze several events and the fields in them. Originally, I used case() to capture one field (Cause) in A_EVENT and two fields (Type, Scenario) in B_Event.

    (index="idx_message" Name="A_EVENT" OR Name="B_Event")
    | rename "Data.Cause" as A_cause `comment("belongs to A_EVENT")`
    | rename "Data.Type" as B_type `comment("belongs to B_Event")`
    | rename "Data.Scenario" as B_scenario `comment("belongs to B_Event")`
    | eval Scenario_name=case(
        Name="A_EVENT" AND A_cause=0, "A Cause: X reason",
        Name="A_EVENT" AND A_cause=1, "A Cause: Y reason",
        Name="B_Event" AND B_type=0, "B Type: a category",
        Name="B_Event" AND B_type=1, "B Type: b category",
        Name="B_Event" AND B_scenario=0, "B Scenario: description 1",
        Name="B_Event" AND B_scenario=1, "B Scenario: description 2",
        true(), null)
    | where isnotnull(Scenario_name)
    | chart limit=0 count by Scenario_name

However, when I check the output, it is:

    Scenario_name                count
    A Cause: X reason            1
    A Cause: Y reason            2
    B Type: a category           3
    B Type: b category           4

The scenarios "B Scenario: description 1" and "B Scenario: description 2" are missing. The reason is that "B Scenario" and "B Type" are evaluated against the same event: with case(), I never get any "B Scenario" result because every B_Event has already matched a "B Type" branch. Is there any way to generate output like this?

    Scenario_name                count
    A Cause: X reason            1
    A Cause: Y reason            2
    B Type: a category           3
    B Type: b category           4
    B Scenario: description 1    2
    B Scenario: description 2    5

Thanks.
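One way to count an event under more than one label is to build a multivalue Scenario_name and expand it, so a single B_Event can contribute to both its Type row and its Scenario row. A minimal sketch against the field names above (untested; mvappend() drops null arguments, so an unmatched case() simply adds nothing):

    (index="idx_message" Name="A_EVENT" OR Name="B_Event")
    | rename "Data.Cause" as A_cause, "Data.Type" as B_type, "Data.Scenario" as B_scenario
    | eval Scenario_name=mvappend(
        case(Name="A_EVENT" AND A_cause=0, "A Cause: X reason",
             Name="A_EVENT" AND A_cause=1, "A Cause: Y reason"),
        case(Name="B_Event" AND B_type=0, "B Type: a category",
             Name="B_Event" AND B_type=1, "B Type: b category"),
        case(Name="B_Event" AND B_scenario=0, "B Scenario: description 1",
             Name="B_Event" AND B_scenario=1, "B Scenario: description 2"))
    | mvexpand Scenario_name
    | where isnotnull(Scenario_name)
    | chart limit=0 count by Scenario_name

After mvexpand, each label becomes its own result row, so chart counts Type and Scenario independently.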
Error installing apps despite entering the correct username and password
Hi all, there is a panel containing a table like the one below. I would like to color the Scenario_1 row in red because both the Success and Failure fields in that row contain the value 0. Is there any way to implement such functionality?

    Event name    Success    Failure
    Scenario_1    0          0
    Scenario_2    4          3
    Scenario_3    0          1

Thank you very much.
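Simple XML cannot color a cell based on other columns directly, but a helper column computed in the search can be colored. A sketch (field names taken from the table above; the hex colors are arbitrary, and coloring the entire row rather than one cell generally needs a JS/CSS dashboard extension):

    ... | eval Status=if(Success=0 AND Failure=0, "inactive", "active")

    <format type="color" field="Status">
      <colorPalette type="map">{"inactive": #DC4E41, "active": #53A051}</colorPalette>
    </format>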
I want to make a backup of all the Splunk logs, so we want to be sure that every host has saved its logs for a specific time window. The window will be around 28 hours; more specifically, this concerns the Windows and Unix server hosts.
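As a starting point for verifying coverage, a sketch that lists hosts whose most recent event is older than the 28-hour window (assumes the relevant indexes are searchable and index=* is acceptable in your environment):

    | tstats latest(_time) as lastSeen where index=* by host
    | eval hoursSinceLastEvent=round((now()-lastSeen)/3600, 1)
    | where hoursSinceLastEvent > 28
    | sort - hoursSinceLastEvent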
I'm trying to validate whether we have a large amount of data duplication. Whenever I run the dedup _raw command, the number of results is roughly halved, but based on the on-prem setup I don't see how the data could be ingested twice (our typical data flow is syslog to a HF, which is then ingested by an indexer cluster). If I run index=sample | transaction _raw, I only get a small number of results where the raw events are combined. How can I confirm how big my data duplication problem might be?
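To put a number on it, one sketch is to count identical _raw values directly; transaction has default limits (maxevents and friends) and silently drops events, so it under-reports and is not a reliable duplicate counter:

    index=sample
    | stats count as copies by _raw
    | where copies > 1
    | stats count as duplicated_raw_values, sum(copies) as total_duplicate_events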
Hello, I am using Python to test sending events to Splunk via HEC, but I am getting the following error. The error is more of a Python issue than a Splunk one; I was wondering if anyone has had a similar issue or has a suggestion. Thank you.

user_data_str looks like below (it contains 3 users similar to this example):

    {
      "GivenName": "MyName",
      "sn": "MyLastN",
      "UserPrincipalName": "MyName@client.com",
      "sAMAccountName": "mysama",
      "enabled": "[]",
      "co": "United States",
      "sourcetype": "my_hec_test",
      "time": 9999999999.8921528
    }

The header looks like below:

    {'Authorization': 'Splunk abcdefgh-1234-ijkl-5678-mnopqrstu123456'}

    >>> print(range(len(user_data_str)))
    range(0, 3)
    >>> type(user_data_str)
    <class 'list'>
    >>> type(header)
    <class 'dict'>
    >>> requests.post('https://hec.splunkcloud.com/services/collector/event', headers=header, data=user_data_str, verify=False)
    Traceback (most recent call last):
      ...
    ValueError: too many values to unpack (expected 2)
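A likely cause: passing a list of dicts as data= makes requests try to form-encode it as key/value pairs, hence the unpack error. HEC instead wants a JSON request body with each record wrapped under an "event" key; batched events are simply concatenated JSON objects. A sketch using the shapes from the post (URL and token are the placeholders shown above):

    import json
    import requests

    HEC_URL = "https://hec.splunkcloud.com/services/collector/event"
    HEADERS = {"Authorization": "Splunk abcdefgh-1234-ijkl-5678-mnopqrstu123456"}

    # one sample record shaped like the post; real code would hold all three users
    users = [{
        "GivenName": "MyName",
        "sn": "MyLastN",
        "UserPrincipalName": "MyName@client.com",
        "sAMAccountName": "mysama",
        "enabled": "[]",
        "co": "United States",
        "sourcetype": "my_hec_test",
        "time": 9999999999.8921528,
    }]

    # HEC batching: one JSON object per event, concatenated in the request body.
    payload = "".join(
        json.dumps({
            "time": u["time"],                 # event timestamp (epoch)
            "sourcetype": u["sourcetype"],
            "event": {k: v for k, v in u.items() if k not in ("time", "sourcetype")},
        })
        for u in users
    )

    resp = requests.post(HEC_URL, headers=HEADERS, data=payload, verify=False)
    resp.raise_for_status()
    print(resp.json())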
I'm trying to send logs from one heavy forwarder to another over port 9998. It connects, but for some reason it closes right after:

    04-26-2023 16:33:48.141 -0400 INFO AutoLoadBalancedConnectionStrategy [18998 TcpOutEloop] - Connected to idx=<ip>:9998:0, pset=0, reuse=0. autoBatch=1
    04-26-2023 16:33:48.238 -0400 INFO AutoLoadBalancedConnectionStrategy [18998 TcpOutEloop] - Connection to <ip>:9998 closed. context=write sock_error=104. ssl_error=error:00000000:lib(0):func(0):reason(0)

Is this really an SSL-related issue? I do have this in my outputs.conf:

    sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
    sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
    sslPassword = password

Can anyone please help me get around this?
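sock_error=104 is "connection reset by peer", which often means one side is speaking SSL on that port while the other is not. A sketch of the matching receiver-side config to compare against (stanza names are standard inputs.conf settings; paths and values are assumptions), plus it is worth checking splunkd.log on the receiver for the corresponding handshake error:

    # inputs.conf on the receiving heavy forwarder
    [splunktcp-ssl:9998]
    disabled = 0

    [SSL]
    serverCert = $SPLUNK_HOME/etc/auth/server.pem
    sslPassword = password
    requireClientCert = false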
I am new to Splunk and facing an issue separating out the two columns of the query. I tried the query below and got the results shown in table 1 (tables were attached as screenshots and are not reproduced here):

    ...| append
        [search index="pd" "successful" "notif/output/"
         | stats count by _raw
         | fields count
         | rename _raw as Dtransfer]
    | append
        [search index="pd" "SBID=nr" "DM" "PAM=sende" "notif/archive/"
         | stats count by _raw
         | fields count
         | rename _raw as DMCopy]

How do I achieve the expected result shown in table 2? I need to display two separate columns, DtransferCount and DMCopyCount.
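append stacks results vertically; to get the two counts side by side as columns, appendcols (or conditional counts in one search) is the usual shape. A sketch with the searches from the post:

    index="pd" "successful" "notif/output/"
    | stats count as DtransferCount
    | appendcols
        [search index="pd" "SBID=nr" "DM" "PAM=sende" "notif/archive/"
         | stats count as DMCopyCount]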
I have a large single-site indexer cluster running 9.0.4. Inter-server communication is secured with third-party certificates. The hosts are all older Linux boxes. I need to move to RHEL 8 and turn on FIPS. Am I going to have any interoperability issues communicating in the cluster between FIPS and non-FIPS systems, with certificates created without FIPS?
We are trying to figure out which Java installations we need to upgrade, and one thing I'm not clear about is whether DB Connect uses its own bundled Java or the system's Java installation.
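One quick way to see which JVM DB Connect is actually running under is a generic process check on the search head where the app runs; the task server's command line shows the full path of the java binary it launched (nothing DB Connect specific about the command itself):

    ps -ef | grep -i "splunk_app_db_connect" | grep java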
Hello! We have a database that can be reached through 4 different connection nodes. To get high availability for the event ingestion from this database, I intend to create 4 inputs in the DB Connect app, one pointing to each connection node. My question is: when Splunk performs the ingestion, will each input task be able to recognize that the events from the database table have already been indexed by another input and skip them, so the information is not duplicated? Greetings and thanks!
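For context, a DB Connect rising-column input tracks its own checkpoint per input, roughly the shape sketched below, so four independent inputs would each keep their own checkpoint and ingest the same rows four times; they do not coordinate with each other. (The SQL shape is illustrative; the table and column names are assumptions.)

    -- rising-column query shape; '?' is replaced by that input's own stored checkpoint
    SELECT * FROM my_table
    WHERE id > ?
    ORDER BY id ASC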
Hi all, I need a way to clear down the config of an installed UF and then point it at the deployment server to pick up new/refreshed configs. Apart from uninstalling, removing files, and re-installing the UF, is there a clever way of doing this? (Background: we've a number of hosts reporting in with the wrong hostname, and we want to sort them out.) Cheers, Al
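One approach that avoids a reinstall (a sketch; run from $SPLUNK_HOME/bin, test on one host first, and note you may still need to fix host= in local inputs.conf) uses the built-in clone-prep command, which clears instance-specific state such as the server name and GUID:

    ./splunk stop
    ./splunk clone-prep-clear-config
    ./splunk start
    ./splunk set deploy-poll <deployment-server>:8089
    ./splunk restart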
I would like to create a time filter in my dashboard based on monthly reports, something like the dropdown below (screenshot omitted). But each month's time frame should be customized this way:

    January  -> between 15/12/2022 (previous year) and 05/01/2023 (current year)
    February -> between 15/01/2023 and 05/02/2023
    March    -> between 15/02/2023 and 05/03/2023
    ...

This should be used to filter the data presented on the dashboard based on the month selected.
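In Simple XML one way is a dropdown whose change handler sets a pair of earliest/latest tokens per month, which the panels then use. A sketch for the first two months (timestamps use the %m/%d/%Y:%H:%M:%S form that earliest/latest accept; latest is the day after the 5th so the 5th is fully included; extend the conditions for the rest of the year):

    <input type="dropdown" token="report_month">
      <label>Report month</label>
      <choice value="jan">January</choice>
      <choice value="feb">February</choice>
      <change>
        <condition value="jan">
          <set token="report_earliest">12/15/2022:00:00:00</set>
          <set token="report_latest">01/06/2023:00:00:00</set>
        </condition>
        <condition value="feb">
          <set token="report_earliest">01/15/2023:00:00:00</set>
          <set token="report_latest">02/06/2023:00:00:00</set>
        </condition>
      </change>
    </input>

    <!-- in each panel's search -->
    <earliest>$report_earliest$</earliest>
    <latest>$report_latest$</latest>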
I'm trying to create a search using Qualys vulnerability scan data to find hosts that failed to be logged into this week but were successful the previous week. I've been trying to use a similar example as a template, but it doesn't quite get what I'm looking for. For reference, the Qualys data does not have fields that indicate successful or failed authentication attempts; rather, they use QIDs:

    QID 105015 - Windows Failed
    QID 105053 - Unix Failed
    QID 38307  - Unix Successful
    QID 70053  - Windows Successful
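A sketch of the week-over-week comparison using those QIDs (the index name and the QID field name are assumptions; adjust to the Qualys TA's actual fields):

    index=qualys earliest=-14d@d (QID=105015 OR QID=105053 OR QID=38307 OR QID=70053)
    | eval status=if(QID=38307 OR QID=70053, "success", "failed")
    | eval period=if(_time >= relative_time(now(), "-7d@d"), "this_week", "last_week")
    | stats max(eval(if(period="last_week" AND status="success", 1, 0))) as success_last_week
            max(eval(if(period="this_week" AND status="failed", 1, 0))) as failed_this_week
            by host
    | where success_last_week=1 AND failed_this_week=1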
Hello. I added a new peer through the Search peers settings, but I realized that I do not need this server. Deleting it via the web UI doesn't work, and deleting it by editing the file doesn't work either: after a restart, distsearch.conf returns with this server still listed. How do I remove this server? I've been trying to find a solution for a month now. I am editing distsearch.conf and deleting the entries for servers 1.1.1.3 and 1.1.1.4:

    [distributedSearch]
    disabled = 0
    servers = https://1.1.1.1:8089,https://1.1.1.2:8089,https://1.1.1.3:8089,https://1.1.1.4:8089

After restarting Splunk, everything comes back. Deleting via the web also does not take effect. Network access is fine and everything else works correctly, but I can't remove these servers from the file. How do I get back to the old version of the distsearch.conf file?
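For reference, the supported way to drop a peer is the CLI (run on the search head), and it is worth checking whether another app or user directory holds a distsearch.conf that keeps re-listing the peers; btool shows which file wins. Hosts are the ones from the post, credentials are placeholders:

    ./splunk remove search-server -auth admin:changeme https://1.1.1.3:8089
    ./splunk remove search-server -auth admin:changeme https://1.1.1.4:8089
    ./splunk btool distsearch list distributedSearch --debug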
Hi everyone! Currently we are trying to instrument the AppDynamics Java agent in an Elasticsearch cluster running on Kubernetes. We had a few access-denied errors when the AppDynamics agent tried to monitor Elasticsearch, but we resolved most of them with the following policy:

    grant codeBase "file:/opt/appdynamics/-" {
        permission java.security.AllPermission;
        permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
        permission java.util.PropertyPermission "*", "read,write";
        permission java.lang.RuntimePermission "*";
        permission java.lang.management.ManagementPermission "monitor";
        permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
    };

    grant {
        permission "java.security.SecurityPermission" "*";
        permission "java.lang.RuntimePermission" "*";
        permission java.io.FilePermission "<<ALL FILES>>","read,write,delete";
        permission java.net.SocketPermission "*","accept,connect,resolve,listen";
        permission java.util.PropertyPermission "*", "read,write";
        permission "java.lang.management.ManagementPermission" "monitor";
        permission "java.lang.reflect.ReflectPermission" "*";
        permission "javax.management.MBeanServerPermission" "*";
        permission "javax.management.MBeanPermission" "*","*";
        permission "javax.management.MBeanTrustPermission" "*";
        permission java.net.NetPermission "*";
    };

However, at times we get the following access-denied error that we are unable to resolve:

    access: access denied ("java.lang.RuntimePermission" "getClassLoader")
    java.lang.Exception: Stack trace
        at java.base/java.lang.Thread.dumpStack(Thread.java:1379)
        at java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:462)
        at java.base/java.security.AccessController.checkPermission(AccessController.java:1036)
        at java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:408)
        at java.base/java.lang.ClassLoader.checkClassLoaderPermission(ClassLoader.java:2058)
        at java.base/java.lang.Class.getClassLoader(Class.java:836)
        at com.appdynamics.appagent/com.singularity.ee.agent.appagent.services.bciengine.transformation.AnonymousClassDefTransformer.classDefTrap(AnonymousClassDefTransformer.java:61)
        at com.singularity.ee.agent.appagent.entrypoint.bciengine.AnonymousClassDefTransformerBoot.classDefTrap(AnonymousClassDefTransformerBoot.java:31)
        at java.base/jdk.internal.misc.Unsafe.defineAnonymousClass(Unsafe.java:1225)
        at java.base/java.lang.invoke.InnerClassLambdaMetafactory.spinInnerClass(InnerClassLambdaMetafactory.java:321)
        at java.base/java.lang.invoke.InnerClassLambdaMetafactory.buildCallSite(InnerClassLambdaMetafactory.java:189)
        at java.base/java.lang.invoke.LambdaMetafactory.metafactory(LambdaMetafactory.java:329)
        at java.base/java.lang.invoke.BootstrapMethodInvoker.invoke(BootstrapMethodInvoker.java:127)
        at java.base/java.lang.invoke.CallSite.makeSite(CallSite.java:307)
        at java.base/java.lang.invoke.MethodHandleNatives.linkCallSiteImpl(MethodHandleNatives.java:259)
        at java.base/java.lang.invoke.MethodHandleNatives.linkCallSite(MethodHandleNatives.java:249)
        at org.elasticsearch.painless.ScriptClassInfo.<init>(ScriptClassInfo.java:75)
        at org.elasticsearch.painless.Compiler.compile(Compiler.java:210)
        at org.elasticsearch.painless.PainlessScriptEngine$5.run(PainlessScriptEngine.java:420)
        at org.elasticsearch.painless.PainlessScriptEngine$5.run(PainlessScriptEngine.java:416)
        at java.base/java.security.AccessController.doPrivileged(AccessController.java:391)
        at org.elasticsearch.painless.PainlessScriptEngine.compile(PainlessScriptEngine.java:416)
        at org.elasticsearch.painless.PainlessScriptEngine.compile(PainlessScriptEngine.java:167)
        at org.elasticsearch.script.ScriptService.compile(ScriptService.java:363)
        at org.elasticsearch.ingest.common.ScriptProcessor$Factory.create(ScriptProcessor.java:148)
        at org.elasticsearch.ingest.common.ScriptProcessor$Factory.create(ScriptProcessor.java:90)
        at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:402)
        at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:372)
        at org.elasticsearch.ingest.ConfigurationUtils.readProcessorConfigs(ConfigurationUtils.java:316)
        at org.elasticsearch.ingest.Pipeline.create(Pipeline.java:73)
        at org.elasticsearch.ingest.IngestService.innerUpdatePipelines(IngestService.java:515)
        at org.elasticsearch.ingest.IngestService.applyClusterState(IngestService.java:259)
        at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:484)
        at java.base/java.lang.Iterable.forEach(Iterable.java:75)
        at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:481)
        at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:468)
        at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:419)
        at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:163)
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:830)
    access: access allowed ("java.security.SecurityPermission" "getPolicy")
    access: domain that failed ProtectionDomain null null <no principals> java.security.Permissions@5da5ecc6 ( )

The same "getClassLoader" denial is logged twice more with otherwise identical traces; they differ only in the Painless frame that triggers the lambda definition (ScriptClassInfo.<init> at ScriptClassInfo.java:86 in the second, and ScriptClassInfo.methodArgument at ScriptClassInfo.java:180 called from ScriptClassInfo.<init> at line 99 in the third).

When we access the AppDynamics dashboard, Elasticsearch appears online, but the only metrics captured are CPU and memory usage. Has anyone experienced this problem or instrumented AppDynamics another way, or can you help us solve and understand this access-denied error?

PS:
- X-Pack security is currently enabled;
- the AppDynamics Java agent is stored in a volume attached to each Elasticsearch node with read and write access;
- we tried to grant access for this denied permission;
- the Java policy we created was applied successfully;
- there are no AppDynamics logs in its workspace about this access-denied error.
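Two observations worth testing, offered as hypotheses rather than a confirmed fix. First, the "domain that failed" line shows a ProtectionDomain with a null code source and an empty permission collection, i.e. the JVM-defined anonymous lambda class, so the codeBase grant may never reach the code that fails; the check is triggered when the agent's AnonymousClassDefTransformer calls Class.getClassLoader() during defineAnonymousClass. Second, standard Java policy grammar does not quote the permission class name, so entries in the global grant such as "java.lang.RuntimePermission" "*" may not be parsed as intended. A minimal addition to verify whether the permission itself is the blocker:

    grant {
        // hypothesis: make the denied permission explicit, with the class name
        // unquoted as the policy grammar expects
        permission java.lang.RuntimePermission "getClassLoader";
    };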
Hi there. Versions: Splunk Enterprise 9.0.4.1, Splunk DB Connect 3.12.2. We are trying to secure Splunk Enterprise 9 with certificates. Almost everything runs fine (web, forwarders, indexers), but Splunk DB Connect does not come up when requireClientCert=true in server.conf. The log says "peer did not return a certificate". Details below.

Message in the UI:

    ('Unable to communicate with Splunkd. If you enable requireClientCert please make sure certs folder contains privkey.pem and cert.pem files. Also make sure cert.pem has been signed by the root CA used by Splunkd.',)

We provided the files (using the names above) in .../splunk/etc/apps/splunk_app_db_connect/certs; it didn't make a difference. Has anyone got this configuration up and running? Kind regards, Elmar

Log details:

    04-26-2023 15:12:48.732 +0200 INFO ExecProcessor [3581394 ExecProcessor] - message from "/opt/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery.sh" action=start_dbxquery_server, configFile=/opt/splunk/splunk/etc/apps/splunk_app_db_connect/config/dbxquery_server.yml
    04-26-2023 15:12:48.732 +0200 INFO TailReader [3581482 tailreader0] - Batch input finished reading file='/opt/splunk/splunk/var/spool/splunk/tracker.log'
    04-26-2023 15:12:48.895 +0200 INFO ExecProcessor [3581394 ExecProcessor] - message from "/opt/splunk/splunk/etc/apps/splunk_app_db_connect/bin/server.sh" action=start_task_server, configFile=/opt/splunk/splunk/etc/apps/splunk_app_db_connect/config/dbx_task_server.yml
    04-26-2023 15:12:49.372 +0200 WARN SSLCommon [3581489 HttpDedicatedIoThread-0] - Received fatal SSL3 alert. ssl_state='error', alert_description='handshake failure'.
    04-26-2023 15:12:49.373 +0200 WARN HttpListener [3581489 HttpDedicatedIoThread-0] - Socket error from 127.0.0.1:33298 while idling: error:140890C7:SSL routines:ssl3_get_client_certificate:peer did not return a certificate
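Two quick checks that often explain "peer did not return a certificate" (standard openssl commands; the app path is from the post, and the CA path is an assumption matching a typical sslRootCAPath):

    # 1) does cert.pem chain to the CA splunkd requires?
    openssl verify -CAfile /opt/splunk/splunk/etc/auth/ca.pem \
        /opt/splunk/splunk/etc/apps/splunk_app_db_connect/certs/cert.pem

    # 2) do the private key and the certificate actually match?
    openssl x509 -noout -modulus -in /opt/splunk/splunk/etc/apps/splunk_app_db_connect/certs/cert.pem | openssl md5
    openssl rsa  -noout -modulus -in /opt/splunk/splunk/etc/apps/splunk_app_db_connect/certs/privkey.pem | openssl md5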
Hello, I have a dropdown which runs a search query that returns a subset of potential results. I want to create an "ALL" value option which covers only the results of the subset from that search.

    <input type="dropdown" token="mytoken">
      <label>My Token</label>
      <choice value="*">ALL</choice>
      <initialValue>*</initialValue>
      <fieldForLabel>resultName</fieldForLabel>
      <fieldForValue>resultValue</fieldForValue>
      <search>
        <query>
          index="AnIndex" type="FilterType"
        </query>
      </search>
    </input>

This token is used in a panel as follows:

    <panel>
      <single>
        <title>Warnings</title>
        <search>
          <query>
            index="AnIndex" myToken=$mytoken$ level="warn" | stats count(message)
          </query>
        </search>
        ...
    </panel>

As it currently stands, the default value for the dropdown resolves to the wildcard *. I want the default value to cover only the subset my query returns. Any help appreciated, thank you.
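One way that keeps the dropdown unchanged is to constrain the panel search itself with a subsearch, so the wildcard can only ever match the subset. A sketch reusing the names from the post; the subsearch expands to an OR of myToken=... terms, and explicit dropdown selections still narrow it further:

    index="AnIndex" myToken=$mytoken$ level="warn"
        [ search index="AnIndex" type="FilterType"
          | stats count by resultValue
          | fields resultValue
          | rename resultValue as myToken ]
    | stats count(message)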
I am trying to figure out how to get the on-poll action to run outside of a playbook, scheduled in the asset settings under the "Ingest Settings" tab. In SOAR, on the app page, the Ingest Settings tab isn't showing up even though I've written an on_poll action in my code. I can run the on_poll action from the app page, but I'm not sure how to run it on a schedule.
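If I recall the SOAR app conventions correctly (worth verifying against the app-authoring docs), the Ingest Settings tab only appears when the app's JSON metadata declares the polling action; the action name, identifier, and type all matter. A rough sketch of the action entry:

    {
      "action": "on poll",
      "identifier": "on_poll",
      "description": "Callback action for the on_poll ingest functionality",
      "type": "ingest",
      "read_only": true,
      "parameters": {},
      "output": [],
      "versions": "EQ(*)"
    }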
Hello there, I have the data in the tabular format below (table and visualization screenshots omitted), but it's not showing correctly in the visualization. The visualization should show either "active" or "inactive" based on the table, but it's showing 0. @ITWhisperer
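A common cause with the single-value visualization is that it renders the first column of the first result row, so if a count or _time column comes first you see 0 instead of the status string. A generic sketch (the status field name is an assumption):

    ... | stats latest(status) as status
    | table status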