All Posts


I have sample data pushed to Splunk as below. Help me with a Splunk query where I want only unique server names with the final status as the second column. Compare the second-column status both horizontally and vertically for each server: if any of the second-column values is No for that server, consider No as the final status for that server; if all of the second-column values are Yes for a server, consider that server's final status as Yes.

sample.csv:

ServerName,Status,Department,Company,Location
Server1,Yes,Government,DRDO,Bangalore
Server1,No,Government,DRDO,Bangalore
Server1,Yes,Government,DRDO,Bangalore
Server2,No,Private,TCS,Chennai
Server2,No,Private,TCS,Chennai
Server3,Yes,Private,Infosys,Bangalore
Server3,Yes,Private,Infosys,Bangalore
Server4,Yes,Private,Tech Mahindra,Pune
Server5,No,Government,IncomeTax India, Mumbai
Server6,Yes,Private,Microsoft,Hyderabad
Server6,No,Private,Microsoft,Hyderabad
Server6,Yes,Private,Microsoft,Hyderabad
Server6,No,Private,Microsoft,Hyderabad
Server7,Yes,Government,GST Council,Delhi
Server7,Yes,Government,GST Council,Delhi
Server7,Yes,Government,GST Council,Delhi
Server7,Yes,Government,GST Council,Delhi
Server8,No,Private,Apple,Bangalore
Server8,No,Private,Apple,Bangalore
Server8,No,Private,Apple,Bangalore
Server8,No,Private,Apple,Bangalore

Output should look similar to below:

ServerName,FinalStatus
Server1,No
Server2,No
Server3,Yes
Server4,Yes
Server5,No
Server6,No
Server7,Yes
Server8,No

The status count of any server should show based on a search of any of the fields Department, Company, or Location. The Department, Company, and Location values won't change for any given server; only the Status value will change.

I already have a query to get the output. The query below gives me the unique status of each server:

| eval FinalStatus = if(Status="Yes", 1, 0)
| eventstats min(FinalStatus) as FinalStatus by ServerName
| stats min(FinalStatus) as FinalStatus by ServerName
| eval FinalStatus = if(FinalStatus=1, "Yes", "No")
| table ServerName, FinalStatus

But what I want is: whenever I search a Department, Company or Location, I need to get the final status count of each server based on that field's search. For example, based on a Location search, I need to get the final status count for servers; if I search a Company, I should be able to get the final status count for servers based on that company. I think it's like

| search department="$department$" Company="$Company$" Location="$Location$"

Please help with the Splunk query.
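One way to put those pieces together (a minimal sketch, not a definitive answer): apply the token filters first, then roll the per-server status up with stats. Since Department, Company and Location never change for a given server, filtering before the stats still sees every Status row for the matching servers. The base search placeholder and the token names/defaults are assumptions; adjust them to your index, sourcetype and dashboard.

<your base search> Department="$department$" Company="$Company$" Location="$Location$"
| eval statusFlag = if(Status="Yes", 1, 0)
| stats min(statusFlag) as statusFlag by ServerName
| eval FinalStatus = if(statusFlag=1, "Yes", "No")
| table ServerName, FinalStatus

If what you need is the number of servers per final status rather than the per-server table, you could replace the last line with something like | stats count by FinalStatus.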
Have you tried the "Patterns" tab? That can show patterns in your results if you have enough events for Splunk to analyze. If you have a finite set of events then you may be able to group them using the case function and the stats command.

... | eval warn = case(
    match(_raw, "ConfigurationLoader - Deprecated configuration detected in path .*?. Please update your settings to use the latest configuration options."), "ConfigurationLoader - Deprecated configuration detected in path. Please update your settings to use the latest configuration options",
    match(_raw, "QueryExecutor - Query execution time exceeded the threshold.*"), "QueryExecutor - Query execution time exceeded the threshold. Query:",
    match(_raw, "MemoryMonitor - High memory usage detected: .*? of allocated memory is in use. Consider increasing the available memory."), "MemoryMonitor - High memory usage detected: of allocated memory is in use. Consider increasing the available memory.",
    1==1, _raw)
| stats count by warn

Of course, this requires you to know which warnings are of interest and becomes impractical when there is a large number of them.
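If enumerating every warning doesn't scale, another rough option is to normalise the dynamic parts of the raw text before counting. This is only a sketch: it assumes the variable pieces are numbers, single-quoted strings and slash-separated paths, and the sed patterns below are illustrative and will likely need tuning for your data.

| rex mode=sed field=_raw "s/[0-9]+(\.[0-9]+)?/N/g"
| rex mode=sed field=_raw "s/'[^']*'/'...'/g"
| rex mode=sed field=_raw "s/\/[^ ]+/PATH/g"
| stats count by _raw

This collapses values such as 12.3 seconds, 85% or /xx/yy/zz into placeholders so that otherwise-identical WARN messages group together; events that differ in more than their literal values (for example queries against different tables) will still land in separate groups.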
Use the depends option. <panel depends="$someTokenThatNeverExists$"> ... </panel>  
Dear @glingaraj  Certifications are one of the important areas where Splunk learners/users (and Splunk itself) need more help, so we would like to hear more from you about your issues with Splunk certifications:
1) Have you completed your required cert exam, or postponed your plans?
2) In either case, could you please update your story here and then close this question (to move it from unanswered to answered)?
Hope this is of some help to you.
Best regards, Sekar
Hi @PickleRick  SSL is the area where users run into the most difficult and time-consuming issues and get the least help. The Splunk documentation on SSL certificates is good, but it still feels like something is missing.
May I know if you are aware of any Splunk .conf discussions around SSL? Thanks.
Best Regards, Sekar
Hello again. I made sure that my CentOS computer with Splunk SOAR installed and my Mac computer with Splunk Enterprise installed are on the same network. CentOS is installed on Azure. I enabled my Mac computer to access the Azure network with a Virtual Network Gateway. The CentOS and Mac computers can ping each other, but I can't access port 8089. Do I need to do something with Splunk Enterprise for this?
How can I always hide a panel unconditionally? (f.i. a basic search panel)
Hi Splunk Experts, I've been trying to group "WARN" logs, but they have a pattern (dynamic/argument values) in them. I'm aware of rex, but I don't want to manually rex for 1000s of such different events. I've even tried cluster, but that doesn't suit my use case well. Any assistance would be much appreciated! Thanks in advance.

2024-08-31 12:34:56 WARN ConfigurationLoader - Deprecated configuration detected in path /xx/yy/zz. Please update your settings to use the latest configuration options.
2024-08-31 12:34:56 WARN ConfigurationLoader - Deprecated configuration detected in path /aa/dd/jkl. Please update your settings to use the latest configuration options.
2024-08-31 14:52:34 WARN QueryExecutor - Query execution time exceeded the threshold: 12.3 seconds. Query: SELECT * FROM users WHERE last_login > '2024-01-01'.
2024-08-31 14:52:34 WARN QueryExecutor - Query execution time exceeded the threshold: 21.9 seconds. Query: SELECT * FROM contacts WHERE contact_id > '252'.
2024-08-31 14:52:34 WARN QueryExecutor - Query execution time exceeded the threshold: 9.5 seconds. Query: SELECT * FROM users WHERE user_id = '123024001'.
2024-08-31 13:45:10 WARN MemoryMonitor - High memory usage detected: 85% of allocated memory is in use. Consider increasing the available memory.
2024-08-31 13:45:10 WARN MemoryMonitor - High memory usage detected: 58% of allocated memory is in use. Consider increasing the available memory.
2024-08-31 14:52:34 WARN QueryExecutor - Query execution time exceeded the threshold: 32.3 seconds. Query: SELECT * FROM users WHERE last_login > '2024-01-01'.

I wish to group similar events, something like below:

WARN  ConfigurationLoader - Deprecated configuration detected in path. Please update your settings to use the latest configuration options.  2
WARN  QueryExecutor - Query execution time exceeded the threshold: . Query:  4
WARN  MemoryMonitor - High memory usage detected: of allocated memory is in use. Consider increasing the available memory.  2
You can try something like below in rex command channel[^A-Za-z]+(?<channel_type>[^\\]+)  
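For example, something like the following (just a sketch; the base search is a placeholder and the backslash escaping inside the rex string may need adjusting depending on how your events are actually stored):

<your base search>
| rex field=_raw "channel[^A-Za-z]+(?<channel_type>[^\\]+)"
| stats count by channel_type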
Hi All, Can anybody help us with the regex expression to extract the field Channel? The values will be either APP or Web, as shown in the sample logs below.

Sample Log1:
\\\":\\\"8E4B3815425627\\\",\\\"channel\\\":\\\"APP\\\"}\"","call_res_body":{},

Sample Log2:
4GksYUB7HGIfhfvs_iLtSc8EFCzOzbAJBze8wjXSDnwmgdhwjjxjsghqsxvhv\\\",\\\"channel\\\":\\\"web\\\"}\"","call_res_body":{},"additional_fields":{}}
Thank you for your reply. I understand it well!
1. Use regex101.com - it's a great tool for testing regexes. 2. Remember to escape backslashes and quotes if you use regex as a sting argument to the rex command. 3. Your regex would match three-di... See more...
1. Use regex101.com - it's a great tool for testing regexes.
2. Remember to escape backslashes and quotes if you use the regex as a string argument to the rex command.
3. Your regex would match three-digit-long parts of the request path after the "//rest/" part (which doesn't appear in your events anyway), not the HTTP method.
4. You need something like
| rex "\\]\\s+(?<ActionTaken>\\S+)\\s/"
(If you want to test it on regex101.com, remove the extra backslashes.)
It's a bit more complicated than that. Forwarder has (oversimplifying a bit) inputs, outputs and some queueing and buffering mechanics in between. Some inputs can (depending on their configuration) ... See more...
It's a bit more complicated than that. A forwarder has (oversimplifying a bit) inputs, outputs, and some queueing and buffering mechanics in between. Some inputs can (depending on their configuration) block if they have nowhere to send events for further processing because, for example, the output isn't connected to anything and the internal queues and buffers are full. Some inputs can't block (there's no way to block, for example, UDP packets received from external sources). Typically file inputs block if they have nowhere to send events downstream (it usually doesn't make much sense to configure them otherwise). But events already read don't have to be sent immediately to the downstream receiver(s); they might be held in the forwarder's buffer. If you want to check the file inputs' configuration and their state, run
splunk list monitor
and
splunk list inputstatus
Java version:
openjdk 21-ea 2023-09-19
OpenJDK Runtime Environment (build 21-ea+23-1988)
OpenJDK 64-Bit Server VM (build 21-ea+23-1988, mixed mode, sharing)

Startup flags:
java -Dappdynamics.jvm.shutdown.mark.node.as.historical=true -Dappdynamics.agent.log4j2.disabled=true -javaagent:/appdynamics/javaagent.jar

From what I understand this version of the agent should work with OpenJDK 21, but please correct me if I'm wrong. Any suggestions on what I can do to get this to start up? At startup I see the log below, which to me means the agent can't start because of an incompatible Java version.

Class with name [com.ibm.lang.management.internal.ExtendedOperatingSystemMXBeanImpl] is not available in classpath, so will ignore export access.
java.lang.ClassNotFoundException: Unable to load class io.opentelemetry.sdk.autoconfigure.spi.ResourceProvider
    at com.singularity.ee.agent.appagent.kernel.classloader.Post19AgentClassLoader.findClass(Post19AgentClassLoader.java:88)
    at com.singularity.ee.agent.appagent.kernel.classloader.AgentClassLoader.loadClassInternal(AgentClassLoader.java:456)
    at com.singularity.ee.agent.appagent.kernel.classloader.Post17AgentClassLoader.loadClassParentLast(Post17AgentClassLoader.java:81)
    at com.singularity.ee.agent.appagent.kernel.classloader.AgentClassLoader.loadClass(AgentClassLoader.java:354)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
    at java.base/java.lang.Class.forName0(Native Method)
    at java.base/java.lang.Class.forName(Class.java:497)
    at java.base/java.lang.Class.forName(Class.java:476)
    at com.singularity.ee.agent.appagent.AgentEntryPoint.createJava9Module(AgentEntryPoint.java:800)
    at com.singularity.ee.agent.appagent.AgentEntryPoint.premain(AgentEntryPoint.java:639)
    at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
    at java.base/java.lang.reflect.Method.invoke(Method.java:578)
    at java.instrument/sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:491)
    at java.instrument/sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:503)
[AD Agent init] Fri Aug 30 20:35:48 UTC 2024[DEBUG]: JavaAgent - Setting AgentClassLoader as Context ClassLoader
[AD Agent init] Fri Aug 30 20:35:48 UTC 2024[DEBUG]: JavaAgent - Setting AgentClassLoader as Context ClassLoader
java.lang.IllegalArgumentException: Unsupported class file major version 65
    at com.appdynamics.appagent/com.singularity.asm.org.objectweb.asm.ClassReader.<init>(ClassReader.java:199)
    at com.appdynamics.appagent/com.singularity.asm.org.objectweb.asm.ClassReader.<init>(ClassReader.java:180)
    at com.appdynamics.appagent/com.singularity.asm.org.objectweb.asm.ClassReader.<init>(ClassReader.java:166)
    at com.appdynamics.appagent/com.singularity.ee.agent.appagent.services.bciengine.asm.PreTransformer.preTransform(PreTransformer.java:49)
    at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgent.preloadAgentClassesForDeadlockProneJVM(JavaAgent.java:656)
    at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgent.initialize(JavaAgent.java:404)
    at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgent.initialize(JavaAgent.java:347)
    at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
    at java.base/java.lang.reflect.Method.invoke(Method.java:578)
    at com.singularity.ee.agent.appagent.AgentEntryPoint$1.run(AgentEntryPoint.java:656)
Thanks, that looks like its doing exactly what I was looking to replicate with my join from my older SPL that had fixed values in  the lookup file.
Hi, suppose a server with a Splunk forwarder on it has lots of logs that haven't yet been shipped to Splunk. Is there any way to get an output which lists the files/dirs, the current status (e.g. 50% sent to Splunk), etc.? I know I can see a list of files which are being monitored, but I'd like to get an idea of how much data the forwarder has yet to ship.
Try something like this (assuming your fields have been extracted already)
| lookup FTP-Out FileName as FTPFileName OUTPUTNEW FileName Type Direction weekday
| inputlookup FTP-Out append=t
| eventstats count(FTPFileName) as files by FileName
| where files=0 OR isnotnull(FTPFileName) AND isnotnull(FileName)
| fields - files
This is great, thank you!! 
try something like... | rex field=_raw ".*\/rest\/(?<ActionTaken>\w+)"