Hello, is it possible to call the service /services/data/lookup_edit/lookup_contents to create a lookup in Splunk Cloud? Thanks!
Hello, we receive between 3 and 5 messages per pod every minute. We have a situation where some of the pods go zombie and stop writing messages. Here's the query:

index namespace pod="pod-xyz" message="incoming events" | timechart count by pod span=1m

I need help modifying this query to detect when the count in a one-minute interval drops to zero.
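A sketch of one way to surface the zero-count buckets, building on the base search from the post (index/field names are taken from it): timechart fills empty buckets with 0 for any pod seen in the time range, so the result can be untabled and filtered for zeroes.

```spl
index namespace pod="pod-xyz" message="incoming events"
| timechart span=1m count by pod
| untable _time pod count
| where count=0
```

Saved as an alert that triggers when results are returned, this would fire for each pod/minute pair that went silent while the pod was still known to the search window.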
Hi there, we have an instance already running on Windows Server, and I am planning to move our frozen bucket location from a local drive to a share on another server. I just have a few questions regarding this. Is it as simple as editing the frozen locations on the indexes in the GUI, and will a UNC path work if the permissions are set, or must it be a mapped local drive? Thanks in advance!
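For reference, the frozen location is the coldToFrozenDir setting in indexes.conf; a minimal sketch with a hypothetical index name and share path follows. Note that it is the splunkd service account, not the logged-in user, that needs access to the share, and mapped drive letters are generally not visible to Windows services, so a UNC path is the usual choice:

```ini
# indexes.conf -- hypothetical index name and share path
[my_index]
coldToFrozenDir = \\fileserver\splunk_frozen\my_index
```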
Hello, we have a problem with a Splunk search head: the splunkd service restarts randomly when running searches or viewing dashboards. After analysis, we found kernel messages in the logs. Do you have any idea about this type of message?

Sep 21 10:13:16 splksh01c kernel: [4960554.728167] splunkd[26359]: segfault at 8 ip 00007f2b51845c24 sp 00007f2b279fc868 error 6 in libjemalloc.so.2[7f2b51834000+49000]
Sep 21 11:10:55 splksh01c kernel: [4964014.055174] splunkd[8210]: segfault at 68 ip 00007f816fdc749f sp 00007f81443f4da0 error 4 in libjemalloc.so.2[7f816fd9b000+49000]
Sep 21 11:27:01 splksh01c kernel: [4964980.155146] splunkd[29689]: segfault at 68 ip 00007fa3888013d9 sp 00007fa3571fd200 error 4 in libjemalloc.so.2[7fa3887d5000+49000]
Sep 21 12:03:54 splksh01c kernel: [4967193.444921] splunkd[9885]: segfault at 68 ip 00007fb56962b3d9 sp 00007fb5339fd240 error 4 in libjemalloc.so.2[7fb5695ff000+49000]
Sep 21 12:54:45 splksh01c kernel: [4970243.694017] splunkd[12378]: segfault at 68 ip 00007f629482f49f sp 00007f626affc930 error 4 in libjemalloc.so.2[7f6294803000+49000]
Sep 21 13:23:16 splksh01c kernel: [4971954.661062] splunkd[10366]: segfault at 8 ip 00007f3c111c4c24 sp 00007f3bd73fd568 error 6 in libjemalloc.so.2[7f3c111b3000+49000]
Sep 21 14:23:26 splksh01c kernel: [4975565.162362] splunkd[8340]: segfault at 7f713ff89f40 ip 000055981749198f sp 00007f71419fa180 error 6 in splunkd[559814297000+40f3000]
Hi team, I would like to convert the "DNS App for Splunk" hosting from externally hosted to Splunkbase hosted. I contacted the Splunk support team via the splunkbase-admin@splunk.com email address a few days back, but haven't received a response yet. Can you advise how I could achieve this? Thanks and kind regards,
Hello, I have written a search/query to detect XSS attacks. The problem is that it also shows valid requests, because some of the keywords (cookie, script) appear in legitimate requests as well. How could I filter so that it only shows the attacks?

search "<script>" OR "</script>" OR "&#" OR "script" OR "`" OR "cookie" OR "alert" OR "%00"| append [ datamodel Web search | where like(uri,"http:/%") OR like(uri,"*javascript*") OR like(uri,"*vbscript*") OR like(uri,"*applet*") OR like(uri," *script*") OR like(uri,"*frame*")
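One way to reduce the false positives is to anchor on patterns that rarely appear in legitimate traffic (a script tag, a script: URI scheme, or an inline event handler in the URI) rather than on bare words like "cookie". A hedged sketch against the Web data model; the field names assume the CIM Web data model and the regex is a starting point, not a complete XSS signature set:

```spl
| datamodel Web Web search
| regex uri="(?i)(<script|%3Cscript|javascript:|vbscript:|onerror\s*=|onload\s*=|alert\s*\()"
| table _time src uri
```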
Hi all, I need to build a pie chart for three separate fields, which should display each field name and its percentage in the same pie chart. E.g., ial1, ial2 and ial3 are three different fields in Splunk. It should display in the attached format. Could someone help me with this?
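A common pattern for this is to count each field in one stats call and then transpose the result, so the field names become category values a pie chart can slice on. A sketch using the field names from the post (the base search is a placeholder):

```spl
index=my_index
| stats count(ial1) as ial1 count(ial2) as ial2 count(ial3) as ial3
| transpose column_name=field
| rename "row 1" as count
```

With the pie visualization selected, the field column supplies the slice labels and the counts are rendered as percentages of the whole.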
We have a cluster master which we are also using as the monitoring console. We have created a dashboard to collect cluster master status. This works fine for the 'admin' role, but we get the error below for a custom role that inherits capabilities from the 'user' role. Error message - Failed to fetch REST endpoint uri=https://127.0.0.1:8089/services/cluster/master/generation/master?count=0 from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API.   Query used for the dashboard-  | rest splunk_server=<Cluster master hostname> /services/cluster/master/generation/master | fields title pending_last_reason splunk_server search_factor_met replication_factor_met | eval all_data_searchable=if(isnull(pending_last_reason) OR pending_last_reason=="","All Data is Searchable","Some Data is Not Searchable") | eval search_factor_met=if(search_factor_met==1 OR search_factor_met=="1","Search Factor is Met","Search Factor is Not Met") | eval replication_factor_met=if(replication_factor_met==1 OR replication_factor_met=="1","Replication Factor is Met","Replication Factor is Not Met") | table title splunk_server search_factor_met replication_factor_met all_data_searchable | rename title as Title splunk_server as "Splunk Server" search_factor_met as "Search Factor Status" replication_factor_met as "Replication Factor Status" all_data_searchable as "All Data Searchable Status"     What capability do I need to enable so that users can access this dashboard? Any help would be appreciated.
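The cluster manager REST endpoints are gated by capability; the list_indexer_cluster capability (which the admin role has by default) is the usual requirement for reading /services/cluster/master/* endpoints. A hedged sketch of granting it to a custom role in authorize.conf; the role name here is hypothetical, and the exact capability should be verified against your Splunk version's authorize.conf spec:

```ini
# authorize.conf -- hypothetical custom role inheriting from 'user'
[role_mc_viewer]
importRoles = user
list_indexer_cluster = enabled
```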
Hi, I want to get all syslog data from a large Logpoint implementation to forward to Splunk. Is there a recommended approach available to do this? Thank you O.
I have data in the following structure for every event. Some events have just one or two sub-calls and some have more. I need to calculate the sum of the total duration.

subCalls: [
  { completionTimeMs: 69,  method: GET,   statusCode: 200 }
  { completionTimeMs: 77,  method: GET,   statusCode: 200 }
  { completionTimeMs: 956, method: POST,  statusCode: 200 }
  { completionTimeMs: 201, method: PATCH, statusCode: 204 }
]

The search below calculates the sum of all the values across all events instead of per event. Please suggest how to proceed.

mysearch | eventstats sum(processRelevantFields.eventDetails.subCalls{}.completionTimeMs) as totalDuration | table traceId, totalDuration
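Since eventstats aggregates across all events in the result set, one per-event approach is to expand the multivalue field and re-aggregate by traceId. A sketch assuming the field path from the post and that traceId uniquely identifies an event:

```spl
mysearch
| eval ms='processRelevantFields.eventDetails.subCalls{}.completionTimeMs'
| mvexpand ms
| stats sum(ms) as totalDuration by traceId
```

The single quotes around the field path let eval reference a field name containing dots and braces; mvexpand then gives one row per sub-call before the per-trace sum.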
Hi everyone, I have two metrics, cpu.1 and cpu.2, and I am using the query below to get the metric percentages:

| mstats max(_value) avg(_value) min(_value) prestats=true WHERE metric_name="cpu.1" AND "index"="myindex" AND [| inputlookup metrics.csv] BY host span=30min
| stats max(_value) AS Max avg(_value) AS Avg min(_value) AS Min BY host
| eval Status=if(Avg<75,"good",if(Avg>=90,"critical","Warning"))

Now my question is: I need totalcpu=(cpu.1+cpu.2) in one query, and it should show max(totalcpu), avg(totalcpu), min(totalcpu). Can you guide me please? Thank you.
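A sketch of one way to get the combined series: sum both metrics per host in small time buckets first, then aggregate the per-bucket totals. This assumes each metric reports roughly one sample per minute per host, so the per-bucket sum approximates cpu.1 + cpu.2; the inputlookup filter from the original query is omitted here for brevity:

```spl
| mstats sum(_value) as totalcpu WHERE "index"="myindex" AND metric_name IN ("cpu.1","cpu.2") BY host span=1m
| stats max(totalcpu) AS Max avg(totalcpu) AS Avg min(totalcpu) AS Min BY host
| eval Status=if(Avg<75,"good",if(Avg>=90,"critical","Warning"))
```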
SPL Query: index=_internal sourcetype=splunkd component=sendmodalert action=notable

Output:
10-27-2021 16:31:01.962 +0200 WARN  sendmodalert - action=notable - Alert action script returned error code=3
10-27-2021 16:31:01.962 +0200 INFO  sendmodalert - action=notable - Alert action script completed in duration=103 ms with exit code=3
10-27-2021 16:31:01.962 +0200 ERROR  sendmodalert - action=notable STDERR - ERROR: [Errno 13] Permission denied
10-27-2021 16:31:01.858 +0200 INFO  sendmodalert - Invoking modular alert action=notable for search="Threat - ....... - Rule" sid="......" in app="SplunkEnterpriseSecuritySuite" owner="......" type="saved"

Dear all, does anyone know why permission is denied for creating notable events in Enterprise Security (6.4.1) after a correlation search is triggered? The owner of the search has the ess_admin role.
We are evaluating Splunk for log analysis and monitoring of an AWS (China) environment, but we found that Splunk 8.0+ combined with the Splunk Add-on for AWS 5.0+ cannot connect to the AWS (China) STS endpoints. The reason is that the STS endpoint addresses published on the AWS (China) website are wrong (AWS has confirmed the error and plans to correct it). The endpoints given by AWS (China) are https://sts.cn-north-1.amazonaws.com and https://sts.cn-northwest-1.amazonaws.com. These are wrong; the correct addresses are https://sts.cn-north-1.amazonaws.com.cn and https://sts.cn-northwest-1.amazonaws.com.cn. AWS (China) is actively fixing the issue, and we hope Splunk and the Splunk Add-on for AWS will also fix these endpoints soon. Thanks! Contact email: chengsiyin@light2cloud.com
Hi, we need to use a preprocess script for the CrowdStrike app but haven't managed to get it working. We are interested in any information on how this works. Is there any way to get debug info from inside this script? Would you have a basic script to start with? Thanks in advance, David
I am following the Splunk ui-tour documentation and other similar questions about ui-tour.conf, but I just can't get it to work. ui-tour.conf is located in /app/myapp/local; I also tried /app/myapp/default and etc/system/local, and none of them do the trick.

[documentation-tour]
type = image
label = Welcome tour
imageName1 = search1.png
imageCaption1 = After adding data use the Search view to run searches, design data visualizations, save reports, and create dashboards. To know more about
context = myapp

The picture is saved in the img directory (as the documentation says). Maybe I'm getting this wrong, but the tour should start when I open the documentation dashboard in myapp; when I do, nothing happens. So my question: do I need to change something in the dashboards for it to run? I didn't get that impression from the documentation or other questions. Thank you in advance.
Hello Splunkers! On an almost daily basis, one of my Splunk instances goes down and I get the error below. Please suggest a permanent fix and a workaround.

splunkd 7081 was not running.
Stopping splunk helpers...
Done.
Stopped helpers.
Removing stale pid file... done.
splunkd is not running.
ERROR TcpInputProc - Message rejected. Received unexpected message of size=369295616 bytes from src=xxxx:xxxx in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source se... See more...
ERROR TcpInputProc - Message rejected. Received unexpected message of size=369295616 bytes from src=xxxx:xxxx in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
I configured the javaagent for the Spring Boot application following the exact steps in the Getting Started wizard; however, when I start the application, I get the error below. The agent does not start and the application is not detected by AppDynamics. Can you please help resolve this? Thanks.

Java 9+ detected, booting with Java9Util enabled.
Full Agent Registration Info Resolver using selfService [true]
Full Agent Registration Info Resolver using selfService [true]
Full Agent Registration Info Resolver using ephemeral node setting [false]
Full Agent Registration Info Resolver using application name [Create User]
Full Agent Registration Info Resolver using tier name [app-tier]
Full Agent Registration Info Resolver using node name [laptop]
Install Directory resolved to[C:\dev\code\spring-boot-rest-api-tutorial-master\appagent]
getBootstrapResource not available on ClassLoader
Class with name [com.ibm.lang.management.internal.ExtendedOperatingSystemMXBeanImpl] is not available in classpath, so will ignore export access.
java.lang.ClassNotFoundException: Unable to load class io.opentelemetry.sdk.resources.ResourceProvider
    at com.singularity.ee.agent.appagent.kernel.classloader.Post19AgentClassLoader.findClass(Post19AgentClassLoader.java:73)
    at com.singularity.ee.agent.appagent.kernel.classloader.AgentClassLoader.loadClassInternal(AgentClassLoader.java:422)
    at com.singularity.ee.agent.appagent.kernel.classloader.Post17AgentClassLoader.loadClassParentLast(Post17AgentClassLoader.java:69)
    at com.singularity.ee.agent.appagent.kernel.classloader.AgentClassLoader.loadClass(AgentClassLoader.java:320)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
    at java.base/java.lang.Class.forName0(Native Method)
    at java.base/java.lang.Class.forName(Class.java:398)
    at com.singularity.ee.agent.appagent.AgentEntryPoint.createJava9Module(AgentEntryPoint.java:796)
    at com.singularity.ee.agent.appagent.AgentEntryPoint.premain(AgentEntryPoint.java:632)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at java.instrument/sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:513)
    at java.instrument/sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:525)
java.lang.reflect.InvocationTargetException
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.singularity.ee.agent.appagent.AgentEntryPoint$1.run(AgentEntryPoint.java:649)
Caused by: java.lang.IllegalAccessError: class org.apache.logging.log4j.core.LoggerContext (in module com.appdynamics.appagent) cannot access class java.beans.PropertyChangeEvent (in module java.desktop) because module com.appdynamics.appagent does not read module java.desktop
    at com.appdynamics.appagent/org.apache.logging.log4j.core.LoggerContext.updateLoggers(LoggerContext.java:657)
    at com.appdynamics.appagent/org.apache.logging.log4j.core.LoggerContext.updateLoggers(LoggerContext.java:644)
    at com.appdynamics.appagent/org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:550)
    at com.appdynamics.appagent/org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:620)
    at com.appdynamics.appagent/org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:637)
    at com.appdynamics.appagent/org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
    at com.appdynamics.appagent/org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:243)
    at com.appdynamics.appagent/org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
    at com.appdynamics.appagent/org.apache.logging.log4j.LogManager.getContext(LogManager.java:174)
    at com.appdynamics.appagent/org.apache.logging.log4j.LogManager.getLogger(LogManager.java:648)
    at com.appdynamics.appagent/com.singularity.ee.agent.util.log4j.ADLoggerFactory.getLogger(ADLoggerFactory.java:61)
    at com.appdynamics.appagent/com.singularity.ee.agent.util.bounded.collections.BoundsEnforcer.<clinit>(BoundsEnforcer.java:53)
    at com.appdynamics.appagent/com.singularity.ee.agent.util.bounded.collections.BoundsEnforcer$Builder.build(BoundsEnforcer.java:388)
    at com.appdynamics.appagent/com.singularity.ee.agent.util.bounded.collections.BoundedConcurrentReferenceHashMap.<init>(BoundedConcurrentReferenceHashMap.java:73)
    at com.appdynamics.appagent/com.singularity.ee.agent.util.bounded.collections.BoundedConcurrentReferenceHashMapBuilder.build(BoundedConcurrentReferenceHashMapBuilder.java:114)
    at com.appdynamics.appagent/com.singularity.ee.agent.util.reflect.ReflectionUtility.<clinit>(ReflectionUtility.java:154)
    at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgent.setLog4j2LoaderUtilDisabled(JavaAgent.java:332)
    at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgent.setupLog4J2(JavaAgent.java:607)
    at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgent.initialize(JavaAgent.java:359)
    at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgent.initialize(JavaAgent.java:346)
    ... 5 more
Process finished with exit code -1
We found that the searchable events for our wineventlog only go back about 4 months, but the searchable retention is set to 2 years 364 days (a total of 3 years). Splunk has said that the most likely scenario is that someone changed the retention period recently. We would like to find out who modified the searchable retention period. I have looked in the audit logs, but they also only go back about 5 months and I have not found anything useful. I have also googled and have not found any solutions. Would appreciate any help. Thank you.
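If the change went through Splunk Web, the REST call behind it may appear in the UI access logs, retention of _internal permitting. A hedged sketch; this assumes splunkd_ui_access events are still indexed for the period in question, and the uri_path filter is an approximation of the indexes configuration endpoint:

```spl
index=_internal sourcetype=splunkd_ui_access method=POST uri_path=*data/indexes*
| table _time user uri_path status
```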
We recently upgraded Splunk from 7.2.x to 8.1.3. We also upgraded our existing Fortinet apps/add-ons as follows: Fortinet FortiGate Add-on for Splunk from 1.6.0 to 1.6.3, and Fortinet FortiGate App for Splunk from 1.4 to 1.6.0. When we attempt to view the app dashboards, however, they no longer display any data on any of the panels. Searching for terms 'fortigate_traffic', 'fortigate_system', 'fortigate_auth' etc. also shows no results. If I go into the Search & Reporting app and do a basic search on all data coming in from our FortiGate using either "sourcetype=fgt_log" or "index=fgt_logs", I can see that we are ingesting the data from our FortiGate OK; it's just not displaying in the app. Any suggestions appreciated.