All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


In our distributed enterprise Splunk environment we have a log file generated on each Splunk host (indexers, search heads, deployment server, etc.) at /opt/splunk/var/log/splunk/foo.log. By default this gets logged to _internal using the foo-too_small source type. We now want to change the source type to one we created (my:custom:sourcetype). I created the following props.conf in a custom app on the deployment server and deployed it successfully via apply cluster-bundle. However, new log data is still being associated with the existing foo-too_small source type. We also set the local.meta file (under metadata) for permissions, and I have verified the file is making it to the indexers in peer-apps.

[my:custom:sourcetype]
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25

[source::.../var/log/splunk/foo.log]
sourcetype = my:custom:sourcetype

Questions: Why isn't this working? What needs to be done instead to change to a custom source type? Thank you in advance!
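For context, a source type ending in -too_small comes from Splunk's learned source typing, which kicks in when no explicit source type is assigned at parse time, so an override like the one above has to sit on the first full Splunk instance that parses the data (the indexers or an intermediate heavy forwarder). A minimal sketch of the override as it would appear on the parsing tier (stanza names and settings are copied from the question; the placement comments are my assumption):

# props.conf on the instances that parse this data
[source::.../var/log/splunk/foo.log]
sourcetype = my:custom:sourcetype

# timestamp settings for the new source type
[my:custom:sourcetype]
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25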
How can we access the parameters of a particular input from the inputs.conf file, so that we can disable that input after its first run or in similar use cases?
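A minimal command-line sketch, assuming shell access to the instance that owns the input (the monitor stanza name here is hypothetical): btool prints the effective parameters of a stanza after all configuration layering, and setting disabled = 1 in a local inputs.conf turns the input off.

# show the effective parameters of one input stanza
$SPLUNK_HOME/bin/splunk btool inputs list monitor:///var/log/myapp.log --debug

# then, in $SPLUNK_HOME/etc/apps/<app>/local/inputs.conf:
[monitor:///var/log/myapp.log]
disabled = 1

A restart is typically needed for the change to take effect.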
Hello Splunkers, I was reading the documentation for Splunk Connect for Kafka, and from the configuration it looks like it uses a push integration. I was wondering if it is possible to change it to a pull integration, in other words, to have Splunk pull the data from Kafka instead of the other way around. This question came up in one of the discussions about onboarding data from Kafka. I would really appreciate any input you can share regarding the topic. Best regards, Nikolay
status=4
| eval MSGStatus=case(status=1,"CREATED", status=2,"RUNNING", status=3,"CANCELLED", status=4,"Failed", status=5,"PENDING", status=6,"ENDED UNEXPECTEDLY", status=7,"SUCCEEDED", status=8,"STOPPING", status=9,"COMPLETED")
| join package_name [| inputlookup Azure_VOC.csv]
| eval STARTTime=strptime(strftime(now(),"%Y-%m-%d"),"%Y-%m-%d") - strptime(start_time,"%Y-%m-%d")
| table Azure_Pipeline_name, MSGStatus, end_time, start_time

The query runs every 15 minutes and keeps counting the same failure.
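If the goal is to stop alerting repeatedly on the same failure, one option is alert throttling on the saved search rather than changing the SPL; a sketch of the relevant savedsearches.conf settings (the stanza name is hypothetical, and the throttle field and period are assumptions to adjust):

[Azure pipeline failure alert]
alert.suppress = 1
alert.suppress.fields = package_name
alert.suppress.period = 24h

With this, results sharing the same package_name value trigger the alert only once per 24-hour window.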
My KV store lookup definition files are not working and are giving me the error below. [screenshot of the error and current status not included] Please help me understand what I need to fix so that all my KV store lookups will work.
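Since the screenshot is missing, a generic first diagnostic step is to check the KV store's own health; a sketch, assuming CLI access on the search head:

$SPLUNK_HOME/bin/splunk show kvstore-status

If the reported status is anything other than ready, the problem usually lies with the underlying KV store process rather than with the individual lookup definitions.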
I have a tabular chart, shown below, with component, basket, and ageing for 1 to 10 days. Basically, I am finding the ageing of each component, and I have a basket filter to filter the components by basket. However, I don't want the basket column in the tabular chart: as you can see, the last two rows are the same component with different baskets. I want a single row per component, and I will add a filter using the basket column that filters accordingly, so the tabular chart below should change to one with only a component column and an age column (and only one row for each component). Is it possible to do this? If yes, please help out.
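A sketch of one way to get a single row per component while still honouring the basket filter, assuming a dashboard token named $basket$ and hypothetical field names component, basket, and age:

index=myindex basket="$basket$"
| stats max(age) AS age BY component

Because the basket filter is applied before the stats, it still takes effect even though the basket column never reaches the final table.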
/etc/ansible/Appdynamics/tomcat/bin/catalina.sh: line 506: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-1.el7_9.x86_64/jre/bin/java: No such file or directory

The JRE path is correct, but I am still getting this error.
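When the shell reports "No such file or directory" for a path that looks correct, the usual culprits are a dangling symlink along the path or a missing file at exactly that location; a quick diagnostic sketch using the path from the error above:

ls -l /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-1.el7_9.x86_64/jre/bin/java
file /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-1.el7_9.x86_64/jre/bin/java

If ls fails, no JRE exists at that exact version string (a package update may have changed the directory name), and catalina.sh or setenv.sh needs to point at the current JRE path.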
I have the values below in Excel, for which the F-value is given by Excel's FTEST function. I want to replicate the result in Splunk. Please help.

A          B
0          0.693147
0          0.693147
1.098612   0.693147
0.693147   1.609438
0          0
0.693147   0.693147
0          0
0          0
0.693147   1.386294
0          0
0          1.098612
0.693147   0
1.098612   0.693147
0          0.693147
1.386294

FTest Result: 0.979190321
TTest Result: 0.081250351

There are two arrays, A and B.
Excel functions: FTEST(Array1, Array2) = 0.979190321 and TTEST(Array1, Array2, 2, 2) = 0.081250351.
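Core SPL has no built-in F-test or t-test, but the variance-ratio part of FTEST can be sketched with stats; the field names A and B are assumptions for however the two columns end up being ingested:

| stats var(A) AS varA, var(B) AS varB, count(A) AS nA, count(B) AS nB
| eval F=varA/varB

Converting F (with nA-1 and nB-1 degrees of freedom) into the two-tailed probability that Excel's FTEST returns requires the F-distribution CDF, which SPL does not provide natively; the Splunk Machine Learning Toolkit or an external script is one way to finish the calculation.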
We have a Synology NAS running DSM 6.2.4-25556 Update 6. It has a feature to send logs to an IP address. I was wondering if it would be possible to use Splunk to monitor our NAS for suspicious activities, such as a user downloading large amounts of files, logging in from a different location, and so forth. [screenshot of the Log Sending settings in Synology DSM 6.2 not included] Would this be compatible with Splunk? Would we need to host the Splunk server ourselves, or does this work with a cloud solution?
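Splunk can receive syslog over the network, which is what the Synology log-sending feature emits; a minimal inputs.conf sketch for a receiving instance (the port and sourcetype are assumptions, and for production volumes a dedicated syslog receiver in front of Splunk is the more common design):

[udp://514]
sourcetype = syslog
connection_host = ip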
As part of the migration of our Splunk infrastructure, would it be possible to obtain a copy of the license in order to transfer it to the new License Manager?
I need to know how we can pause a search for 30 seconds and then run the saved search. For example, I have a search scheduled at 9:30:00, and my requirement is that the search should pause for 30 seconds and run at 9:30:30. It would be very helpful if anyone could help me resolve this! Happy Splunking!
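For context, scheduled searches are driven by the cron_schedule setting in savedsearches.conf, and cron only has minute-level granularity; a sketch with a hypothetical stanza name:

[my scheduled search]
cron_schedule = 30 9 * * *

A 30-second offset within the minute cannot be expressed in cron, so the usual alternatives are shifting the search's earliest/latest time range to cover the intended window, or tolerating the scheduler's own dispatch delay.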
Good day Splunkers. We have data flowing from source A to a Kafka topic, and the Splunk connector on Kafka uses an HEC token to forward data from the Kafka topic to a Splunk HF. The sourcetype is specified while configuring HEC. This source has huge event volume with many key-value pairs, so to manage the high ingestion volume I need to apply truncation to all of these events at the heavy forwarder layer before they reach the indexing layer. Is it possible to choose only selected fields from these events and have them indexed? Is it possible to apply a script to the source type to format the data coming from the HEC input at the HF level?
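A sketch of two parse-time controls that can run on the heavy forwarder where the HEC input lives, assuming the events arrive under a hypothetical sourcetype named my:kafka:events (note that HEC payloads sent to the event endpoint skip some pipeline stages, so this would need testing against the actual input):

# props.conf -- cap raw event length at parse time
[my:kafka:events]
TRUNCATE = 2000
TRANSFORMS-trim = trim_kafka_event

# transforms.conf -- INGEST_EVAL can rewrite _raw at index time,
# e.g. keeping only the leading portion of each event
[trim_kafka_event]
INGEST_EVAL = _raw=substr(_raw, 1, 2000)

Keeping only selected fields would similarly be an INGEST_EVAL rewrite of _raw rather than a script; core Splunk has no general per-event scripting hook on a source type at parse time.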
How can I monitor a Java process at the class and method level?
Does the Network Resolution data model include both outbound and inbound DNS transfers?
Hi, I have the query below, which gets the required output except for one column.

Query:
index="general_prod" source="osblogprod" sourcetype="csv"
| extract pairdelim="," kvdelim="="
| where like(REASON, "%ORA-00001%")
| eval DATE=strftime(_time, "%Y-%m-%d")
| eval S.NO=1
| accum S.NO
| dedup MESSAGEIDENTIFIER
| table S.NO, DATE, BUSINESSIDENTIFIER, MESSAGEIDENTIFIER, SERVICELAYEROPERATION, CHARGETYPE, REASON
| rename BUSINESSIDENTIFIER AS "Order ID", SERVICELAYEROPERATION AS "API NAME", REASON AS "OSB Observation"

I need to extract the charge type from the OSB Observation column. Sample values are RSCN4 and RSCN3. Kindly help me with the rex command.
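A sketch of a rex that would pull charge-type codes like RSCN4 or RSCN3 out of that column, under the assumption that a charge type is always "RSCN" followed by digits:

| rex field="OSB Observation" "(?<CHARGETYPE>RSCN\d+)"

Because the rename gives the field a name containing a space, it must be quoted in rex; alternatively, run the rex against REASON before the rename.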
I'm new to Splunk. Can anyone help me convert time from CEST to IST using a query?
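A sketch using eval with a fixed offset, assuming the timestamp lives in a hypothetical string field called event_time: IST is UTC+5:30 and CEST is UTC+2, so the difference is 3.5 hours (12,600 seconds). A fixed offset breaks when the source switches between CET and CEST, so proper TZ handling at index time is more robust:

| eval ist_time=strftime(strptime(event_time, "%Y-%m-%d %H:%M:%S") + 12600, "%Y-%m-%d %H:%M:%S")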
Hi, I intend to mute alerts at a specified time during a maintenance window, and they should start up again once the maintenance window finishes. Can someone please guide me on how to achieve this, since Splunk doesn't have a built-in feature for scheduling maintenance windows? Thanks
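One common pattern, sketched here with a hypothetical lookup name and fields (window_start and window_end stored as epoch seconds), is to keep the maintenance windows in a lookup and have each alert search discard its results while a window is active:

... base alert search ...
| eval now=now()
| lookup maintenance_windows.csv host OUTPUT window_start, window_end
| where isnull(window_start) OR now<tonumber(window_start) OR now>tonumber(window_end)

Outside a window the where clause passes results through and alerts fire as usual; inside a window it returns nothing, so the alert stays quiet.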
Hello experts, I have a requirement to deploy Splunk all-in-one with a license of 30 GB/day. The available OS is Oracle Linux 8.x or Oracle Linux 9.x with kernel version 5.15.0-6.80.3.1.el8uek.x86_64. Does Splunk Enterprise support this? I need your inputs; please suggest if it has any issues. Thanks, Biswa
How can I calculate the 90th percentile and the average in the same query? The following query is not providing the 90th-percentile values:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=pp_user_action_user path=userId
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="xxxx"
| spath output=pp_user_action_key input=user_actions path=keyUserAction
| where pp_user_action_key="true"
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| eventstats perc90(pp_user_action_response) AS "90perc_User_Action_Response" by pp_user_action_name
| stats count(pp_user_action_response) AS "Total_Calls", avg(pp_user_action_response) AS "Avg_User_Action_Response" by pp_user_action_name
| eval Avg_User_Action_Response=round(Avg_User_Action_Response,0)
```| eval 90perc_User_Action_Response=round(90perc_User_Action_Response,0) ```
| table pp_user_action_name, Total_Calls, Avg_User_Action_Response, 90perc_User_Action_Response
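The percentile column comes back empty because stats keeps only its own aggregates plus the by field, discarding what eventstats computed earlier; a sketch of the fix, computing both aggregates in a single stats call (field names are taken from the query above, with the percentile field renamed so it no longer starts with a digit, which also avoids the quoting trouble hinted at by the commented-out eval):

| stats count(pp_user_action_response) AS Total_Calls, avg(pp_user_action_response) AS Avg_User_Action_Response, perc90(pp_user_action_response) AS P90_User_Action_Response by pp_user_action_name
| eval Avg_User_Action_Response=round(Avg_User_Action_Response,0), P90_User_Action_Response=round(P90_User_Action_Response,0)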
I can load a Sysmon log into Splunk as a lookup table, but how do I view it after that? What search do I use to view the log?
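A sketch, assuming the lookup was uploaded under the hypothetical filename sysmon_log.csv:

| inputlookup sysmon_log.csv

Run from the search bar (note the leading pipe), this returns the lookup's rows as results, which can then be filtered or shaped with ordinary commands such as search, where, or table.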