All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi Team, I need some help pulling the top 10 indexes by utilization (averaged over the last 7 days) into a dashboard panel. Internal indexes should be excluded and the values should be in GB, so kindly help with the search query. Similarly, I need the Splunk license usage by host and by sourcetype in a dashboard view (last 7 days, average daily data) in GB. Kindly help out with the same.
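One common starting point (a sketch only, not a definitive answer: `license_usage.log` measures licensed ingest volume rather than disk usage, and `idx` and `b` are the standard fields of that log):

```spl
index=_internal source=*license_usage.log type=Usage idx!=_* earliest=-7d@d latest=@d
| bin _time span=1d
| stats sum(b) as daily_bytes by idx _time
| stats avg(daily_bytes) as avg_daily_bytes by idx
| eval avg_daily_GB = round(avg_daily_bytes / 1024 / 1024 / 1024, 2)
| sort - avg_daily_GB
| head 10
| fields idx avg_daily_GB
```

Swapping `by idx` for `by h` (host) or `by st` (sourcetype) gives the license-usage-by-host and by-sourcetype variants; if actual disk utilization is wanted instead, `| dbinspect` is the usual alternative.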
Can we set up an alert based on the current event's timestamp and information from the past 15 minutes? Say at T1 we got a log event "a=2", and at T2 we got a log event "a=3". At T2, I would like to check whether a T1 event occurred within the 15 minutes before T2.
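One way to sketch this (assuming a placeholder index name and that the "a=..." events are what need correlating) is `streamstats` with a time window, which counts how many events fall within the 15 minutes leading up to each event:

```spl
index=my_index a=*
| streamstats time_window=15m count as events_in_window
| where events_in_window > 1
```

Here `events_in_window` includes the current event, so a value greater than 1 at T2 means at least one earlier event (a T1) occurred within the previous 15 minutes; the search can then be scheduled as an alert over a recent time range.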
Hi Team, we are getting the exception below, but the agent is still reporting data to the controller. What is the impact of this error, and how can we resolve it?

ERROR JavaAgent - java.lang.ClassNotFoundException: com.singularity.ee.agent.crashdetect.JVMProcessPersistenceFile
java.lang.ClassNotFoundException: com.singularity.ee.agent.crashdetect.JVMProcessPersistenceFile
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
        at java.base/java.lang.Class.forName0(Native Method)
        at java.base/java.lang.Class.forName(Class.java:398)
        at com.appdynamics.appagent/com.singularity.ee.util.serialize.LookAheadObjectInputStream.resolveClass(LookAheadObjectInputStream.java:54)
        at java.base/java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1997)
        at java.base/java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1864)
        at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2195)
        at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1681)
        at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:493)
        at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:451)
        at com.appdynamics.appagent/com.singularity.ee.agent.crashdetect.JVMProcessPersistenceFile.read(JVMProcessPersistenceFile.java:240)
        at com.appdynamics.appagent/com.singularity.ee.agent.crashdetect.JVMProcessPersistenceFile.read(JVMProcessPersistenceFile.java:219)
        at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgentProcessPersistenceFileHandler.detectPreviousCrashedJVM(JavaAgentProcessPersistenceFileHandler.java:194)
        at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgentProcessPersistenceFileHandler.processJvmPersistenceFile(JavaAgentProcessPersistenceFileHandler.java:138)
        at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgent.createJavaAgentProcessPersistenceFileHandler(JavaAgent.java:584)
        at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgent.initialize(JavaAgent.java:485)
        at com.appdynamics.appagent/com.singularity.ee.agent.appagent.kernel.JavaAgent.initialize(JavaAgent.java:355)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at com.singularity.ee.agent.appagent.AgentEntryPoint$1.run(AgentEntryPoint.java:658)
Using the HTTP Event Collector, the Splunk logging driver consumes too much CPU: 94-100%. Does anyone know the reason behind this and how to get it down?
Hi All, I have an issue I am unable to resolve. I have a lookup with two columns: Process_Command_Line and score. Under Process_Command_Line the values are wildcarded, e.g. *net user*. The score column holds an arbitrary numeric value. The SPL is matching the wildcarded Process_Command_Line values and we only see events relevant to the lookup values, but I cannot get the score value to also be visible. Is there something fundamentally incorrect with the SPL I am using?

index=wineventlog source="WinEventLog:*" EventCode=4688 Process_Command_Line!="" user!="-" user="*123serviceProd*" [| inputlookup SuspiciousDiscoveryActivity.csv | fields Process_Command_Line] | dedup Process_Command_Line | lookup SuspiciousDiscoveryActivity.csv Process_Command_Line as Process_Command_Line OUTPUTNEW score

Thanks.
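A likely cause (an assumption, since the lookup configuration isn't shown): the subsearch filter works because wildcards inside a search expression are honored, but the `lookup` command against a bare `.csv` file does exact string matching, so the wildcarded rows never match and `score` is never output. Wildcard matching requires a lookup definition with `match_type = WILD(Process_Command_Line)` (Settings > Lookups > Lookup definitions, or transforms.conf), and the `lookup` command must reference that definition rather than the file. A sketch, where `suspicious_discovery_lookup` is a hypothetical definition name:

```spl
index=wineventlog source="WinEventLog:*" EventCode=4688 Process_Command_Line!="" user="*123serviceProd*"
| lookup suspicious_discovery_lookup Process_Command_Line OUTPUTNEW score
| where isnotnull(score)
```

With the wildcard-enabled definition in place, the `where isnotnull(score)` step can even replace the inputlookup subsearch as the event filter.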
I am trying to get data only when my lastlogon field is null, but the above query is still returning data for both null and non-null values.
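Without seeing the original query it is hard to be certain, but one common pitfall is comparing against the literal string "NULL" instead of testing for a missing value. A sketch (index and sourcetype are placeholders):

```spl
index=my_index sourcetype=my_sourcetype
| where isnull(lastlogon) OR lastlogon=""
```

`isnull()` is true when the field is absent from the event, while the `lastlogon=""` clause also catches events where the field exists but is empty.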
Issue: the _internal index contains logs from all Splunk UF and Splunk Enterprise components. We do not want to keep the UF _internal logs for more than 15 days, but we want to store the _internal logs from Splunk Enterprise components (CM/LM/MC, IDX, SH, SHC, DS, HF) for a longer duration for analysis purposes. However, if we send the _internal logs of the Splunk Enterprise components to a different index, many of the out-of-the-box searches in MC will not run as they should, and even the built-in license query will be affected. We tried using mcollect to send the _internal logs to another index, but the sourcetype then changes to stash. Please let me know if there is a way to do this.
I have a query that I am using to get the count of events:

index=system source=/var/log/syslog/* | rex field=source "(?<host_name>[^\"]*)" | stats count by host_name

Now I have a lookup file hf_lookup.csv with a column hf_name; hf_name corresponds to host_name in the query above. I want to get the count against each value of hf_name, and even if the count is 0 for an hf_name, it should be displayed as 0. I tried using inputlookup with a left join and "fillnull value=0 count", but I either get only count>=1 or only the hosts that are not in hf_lookup.csv.
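One pattern that avoids `join` (a sketch; it assumes `hf_lookup.csv` and the field names from the post) is to append a zero-count row for every host in the lookup and then take the maximum count per host:

```spl
index=system source=/var/log/syslog/*
| rex field=source "(?<host_name>[^\"]*)"
| stats count by host_name
| append
    [| inputlookup hf_lookup.csv
     | rename hf_name as host_name
     | eval count=0]
| stats max(count) as count by host_name
| search [| inputlookup hf_lookup.csv | rename hf_name as host_name | fields host_name]
```

The final `search` subsearch restricts the output to hosts present in the lookup; drop it if hosts outside the lookup should also appear.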
Hi Splunkers, my colleague and I are going to perform, this week, a change to forward data from a Splunk HF to a third-party system, in this case a UEBA product. In this scenario we have to forward not all data, but only some subsets. How to perform this is well explained in the official doc, here, so the purpose of my post is not to understand how to do it. Reading the guide, I found some points that are not completely clear, so I kindly ask for your help in understanding them. In the paragraph "Forward a set of data", inside transforms.conf we need to insert the following destination key: DEST_KEY=_TCP_ROUTING. Do we need this because we are performing TCP forwarding, as stated in outputs.conf with the [tcpout] stanza? In other words, every time I need to perform TCP forwarding, must I always use a [tcpout] stanza, and do I need a DEST_KEY like the one above when forwarding only subsets of data? Also, what if I need to perform UDP forwarding instead; is that possible? If yes, how should I change the stanzas in the files? I can see that using syslog output I could achieve this, but what if I cannot or don't want to use it?
I am scheduling this at 9:00 AM every day using Splunk DB Connect. When I look at the sourcetype the next day at 9:00 AM, the data is doubled up (e.g. 700 rows = 350 + 350). Can someone help me resolve this issue? I need the next day's data to be refreshed after 9:00 AM without doubling up.
Hello, Panels are not showing/hiding based on the selection of the multiselect input.

<form>
  <label>Multiselect input to hide/show multiple panels</label>
  <search>
    <query>| makeresults
| fields - _time
| eval data="$service_tok$"
| eval condition=case(match(data,"\*"),"show_all",match(data,"Windows") AND match(data,"NIX") AND match(data,"VMWare"),"show_all",match(data,"Windows"),"Windows",match(data,"NIX"),"NIX",match(data,"VMWare"),"VMWare")
| eval show_all=case(condition="show_all","true")
| eval show_windows=case(condition="Windows" OR condition="show_all","true")
| eval show_nix=case(condition="NIX" OR condition="show_all","true")
| eval show_vmware=case(condition="VMWare" OR condition="show_all","true")</query>
    <done>
      <condition match="$job.resultCount$!=0">
        <eval token="tokShowAll">case(isnotnull($result.show_all$),$result.show_all$)</eval>
        <eval token="tokShowWindows">case(isnotnull($result.show_windows$),$result.show_windows$)</eval>
        <eval token="tokShowNIX">case(isnotnull($result.show_nix$),$result.show_nix$)</eval>
        <eval token="tokShowVMWare">case(isnotnull($result.show_vmware$),$result.show_vmware$)</eval>
      </condition>
      <condition>
        <unset token="tokShowAll"></unset>
        <unset token="tokShowWindows"></unset>
        <unset token="tokShowNIX"></unset>
        <unset token="tokShowVMWare"></unset>
      </condition>
    </done>
  </search>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="multiselect" token="service_tok" searchWhenChanged="true">
      <label>Select a Service</label>
      <choice value="*">All</choice>
      <choice value="Windows">Windows</choice>
      <choice value="NIX">NIX</choice>
      <choice value="VMWare">VMWare</choice>
      <change>
        <unset token="tokShowAll"></unset>
        <unset token="tokShowWindows"></unset>
        <unset token="tokShowNIX"></unset>
        <unset token="tokShowVMWare"></unset>
      </change>
      <default>*</default>
      <initialValue>*</initialValue>
      <delimiter> </delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <title></title>
    </panel>
  </row>
  <row depends="$tokShowWindows$">
    <panel>
      <table>
        <title>Windows Request Count</title>
        <search>
          <query>host=abcd source="/access.log*" | timechart span=1hr count by host</query>
          <earliest>-4h@m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
  <row depends="$tokShowNIX$">
    <panel>
      <table>
        <title>NIX Request Count</title>
        <search>
          <query>host=abcd source="access.log" | timechart span=1hr count by host</query>
          <earliest>-4h@m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
  <row depends="$tokShowVMWare$">
    <panel>
      <table>
        <title>VMWare Request Count</title>
        <search>
          <query>host=abcd source="access.log" | timechart span=1hr count by host</query>
          <earliest>-4h@m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

@ITWhisperer
Hello Splunkers!! We have to fetch events from a third-party system through the HTTP Event Collector. What we want to achieve: we have two VMs. On the first VM the third-party system's data collector is installed, and we need to send the data from the first VM into Splunk on the second VM. Please let me know how I can approach this.
Hi Team, I need to auto-extract the field names that come inside the payload under <mTypes>...</mTypes>, mapped to the corresponding values that come under <Results>...</Results>:

<mTypes>field_1 field_2 field_3 field_4</mTypes> some random payload <Results>12 12 9 3</Results>

Kindly suggest; thanks in advance.
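Assuming the names and values are space-delimited and positionally aligned, one sketch is to split both lists, zip them into name=value pairs, and expand. Note that `mvexpand` produces one row per field, so the rows are recombined at the end; the `by _time host` grouping is an assumption and may need a real per-event key:

```spl
| rex "<mTypes>(?<names>[^<]+)</mTypes>"
| rex "<Results>(?<vals>[^<]+)</Results>"
| eval pairs = mvzip(split(names, " "), split(vals, " "), "=")
| mvexpand pairs
| rex field=pairs "(?<name>[^=]+)=(?<value>.*)"
| eval {name} = value
| fields - pairs name value names vals
| stats values(*) as * by _time host
```

The `{name}` syntax in `eval` creates a field whose name is the value of `name`, which is what turns `field_1=12`, `field_2=12`, etc. into real fields.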
We have two events to query.

Start event: index=x sourcetype=xx "String", with extracted fields like manid, actionid, batchid.
End event: index=y sourcetype=y " string recived", with extracted fields like manid, actionid.

We want to calculate the difference between start and end events grouped by manid, count the number of manids exceeding a difference of 30 seconds, and output | table _time manid duration. Our current attempt:

(index=x sourcetype=xx "String") OR (index=y sourcetype=y " string recived")
| stats values(_time) as time values(actionid) as actionid values(batchid) as batchid by manid
| eval duration = max(time) - min(time)
| eval excessive = if(duration > 30, duration, null())
| stats count(excessive) as excess_count avg(excessive) as excess_avg by manid

But we are unable to get _time values.
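One issue with this kind of attempt is that eval's `max()`/`min()` do not aggregate across a multivalue field that way; `stats` can produce the earliest and latest times directly. A sketch using the index/sourcetype names from the post:

```spl
(index=x sourcetype=xx "String") OR (index=y sourcetype=y " string recived")
| stats earliest(_time) as start_time latest(_time) as end_time values(actionid) as actionid values(batchid) as batchid by manid
| eval duration = end_time - start_time
| eval _time = start_time
| where duration > 30
| table _time manid duration
```

Counting how many manids exceed 30 seconds is then a final `| stats count as excess_count avg(duration) as excess_avg`.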
Hi Legends, how do I give more meaningful names to the fields last_sum and first_sum in the query below, i.e. something like sum_February and sum_March? Is there a way to use the value of the date_month field in a search?

streamstats current=f window=1 last(sum) as last_sum first(sum) as first_sum
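Yes: `eval` supports dynamic field names via `{field}` substitution, where the braces expand to the value of another field. A sketch (assuming `date_month` holds the month name on each row):

```spl
... | streamstats current=f window=1 last(sum) as last_sum first(sum) as first_sum
| eval last_name = "sum_" . date_month
| eval {last_name} = last_sum
| fields - last_name
```

The same pattern applied to `first_sum` (with whatever month that value belongs to) would give the `sum_February` / `sum_March` style of field names.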
I have a problem. I recently started using the Splunk Threat Intelligence Management (TruSTAR) platform, which is our IOC management tool containing different sources of intelligence. The tool handles different private enclaves; I was testing in these private enclaves and now I want to delete the IOCs inside them, but I cannot find how. Has anyone been through this who can help me? Thank you so much.
Hello, does upgrading Splunk 8 to Splunk 9 ship with a new root CA, or renew the default root CA such as cacert.pem? Testing on a fresh 9.0.4.1, I get this error after deleting cacert.pem and server.pem: "The CA file specified (/opt/siem/splunk/etc/auth/cacert.pem) does not exist. Cannot continue. SSL certificate generation failed." Thanks.
Is it possible to issue an on-prem trial license? If so, how can I get it? Thanks.
I have a problem installing Splunk Enterprise. When I come to the part accepting the agreement, there is no yes/no prompt after I scroll down to 100%. How do I get the agreement prompt so I can set up a username and password? I use /opt/Splunk/bin --accept license but there is no yes at the end to agree to the agreement. Your help is much appreciated.
What is the easiest way to exclude ingestion of events from a specific IP address for a sourcetype, either at the UF level or in syslog-ng?