All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have the latest version of PCI Compliance installed, but when accessing a Requirement report, the panel shows: "Search is waiting for input". I have tried reinstalling both PCI Compliance and ES, but the problem is still not fixed. Does anyone have a solution to this problem? Thanks.
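"Search is waiting for input" generally means a panel's search references a token that has no value yet; in the PCI views this is usually one of the filter inputs at the top of the dashboard. For a custom dashboard, a minimal SimpleXML sketch of giving an input a default so the panel runs immediately (input and token names here are hypothetical):

<input type="time" token="time_tok">
  <label>Time Range</label>
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
</input>

For the shipped PCI/ES dashboards, check that every dropdown and time filter on the view actually has a selection before digging deeper.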
I am setting up Splunk Stream. I am having trouble with the official instructions, which are very confusing for a beginner. Below is the environment that has already been set up.
Server A: XAMPP, DVWA, UF (ver 9.0.4)
Server B: Splunk (ver 9.0.4), Stream (8.1.0) → to be installed
I would like to deploy Stream on Server B to analyze DVWA logs sent from the UF on Server A. Can someone please itemize and explain the necessary steps? I know this is a rudimentary question, but please help.
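As a starting sketch, the UF on Server A must forward to Server B before Stream can see anything; assuming Server B listens on the default receiving port 9997 and using a placeholder address:

# outputs.conf on Server A's universal forwarder
[tcpout]
defaultGroup = serverB

[tcpout:serverB]
server = 192.0.2.10:9997

On Server B, enable receiving on 9997 (Settings > Forwarding and receiving) and install the Splunk App for Stream. Note that packet capture itself is done by the Splunk_TA_stream add-on on whichever host should capture traffic, so for DVWA traffic on Server A the TA would typically go there.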
Hi, I have an issue with the connection between an installed AppAgent (Java) on an application server (on-premise, Linux CentOS), which is already mapped to a public IP, and the SaaS Controller (license expires: 14/02/2024). I've already set up a proxy in /etc/profile, and my application server is able to curl the SaaS Controller, but the following errors still occur, as if there is some other configuration I need to set up, but I have no idea what:
[AD Thread Pool-Global0] WARN ConfigurationChannel - Could not connect to the controller/invalid response from controller, cannot get initialization information, controller host [SaaS domain], port[443]
[AD Thread Pool-Global0] ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=2.4.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
[AD Thread Pool-Global1] WARN NetVizConfigurationChannel - NetViz-ConfigChannel-039 : NetViz configuration retrieval failed with reason [Connection back off limitation in effect: /controller/instance/null/netvizagentconfiguration]
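Worth noting: exporting a proxy in /etc/profile affects shell tools like curl, but not necessarily the JVM. The Java agent conventionally takes its proxy from system properties; a hedged sketch of the startup arguments, assuming the standard AppDynamics proxy properties apply to your agent version (host, port, and paths are placeholders):

java -Dappdynamics.http.proxyHost=proxy.example.com \
     -Dappdynamics.http.proxyPort=8080 \
     -javaagent:/opt/appdynamics/javaagent.jar \
     -jar your-app.jar

The NetViz errors against 127.0.0.1:3892 are a separate symptom: the app agent is probing for a local Network Visibility agent, and "Connection refused" is expected if none is installed.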
I want to create this graph in Splunk; can someone please help me? [Required graph]
The one that I am getting after writing the following query is shown below. [Graph after my query]
Query:
index="BTS-card-account-update" exception="*" ("Payment instrument not found" OR "Wallet already has the updated card")
| timechart count by host
Can someone please tell me how to get two separate lines, one for each kind of exception? Thanks in advance.
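timechart count by host splits the series by host, not by exception message. A sketch that derives an exception category first and splits on that (assumes the two messages appear in the raw event):

index="BTS-card-account-update" exception="*" ("Payment instrument not found" OR "Wallet already has the updated card")
| eval exception_type=case(searchmatch("Payment instrument not found"), "Payment instrument not found", searchmatch("Wallet already has the updated card"), "Wallet already has the updated card")
| timechart count by exception_type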
A newly created KV Store collection is not returning matches for a lookup command, despite the fact that it's populated. For example:

| inputlookup my_kvstore

returns the following results:

field_1  field_2  field_3
Abc      Def      Hij

Therefore, I would expect to be able to look up field_1 and get the same results:

| makeresults
| eval field_1 = "Abc"
| fields - _time
| lookup my_kvstore field_1

Instead, I get:

field_1  field_2  field_3
Abc

To rule out any typos, I even tried:

| inputlookup my_kvstore
| table field_1
| lookup my_kvstore field1 OUTPUT field_1 AS new_field

But that returns:

field_1  new_field
Abc

As for the configuration:

## collections.conf ##
[my_kvstore]
field.field_1 = string
field.field_2 = string
field.field_3 = string
replicate = true
disabled = 0

## transforms.conf ##
[my_kvstore]
collection = my_kvstore
external_type = kvstore
fields_list = field_1,field_2,field_3
case_sensitive_match = 0

I'm at a loss, but before I go down the support route, I'd appreciate any help.
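One frequent cause of exactly this symptom is invisible characters (trailing spaces or zero-width characters) in the stored values: inputlookup displays them normally, but the exact match in lookup fails. A quick sanity check:

| inputlookup my_kvstore
| eval field_1_len=len(field_1)
| table field_1, field_1_len

If field_1_len is larger than the visible length (3 for "Abc"), re-populate the collection with trimmed values. Separately, note that the third test passes field1 (no underscore) as the input field, so as written that lookup receives no input at all.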
I have an app where users of different roles want to share their dashboards and reports with each other. However, if I allow them to, they would be able to share their objects with everyone or all users. Is there a way to limit them to sharing only with their own role? Alternatively, I was thinking of using a custom command that has admin credentials to change the permissions, but that would require hardcoding admin creds in the command. Is there a better way to store the admin credentials? I know I can't encrypt the passwords in storage/passwords, because then I would need to grant the user that capability.
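On the credential question, one possible pattern rather than hardcoding: custom commands can receive a session key instead of a stored password. A hedged sketch, assuming the legacy (non-chunked) custom command protocol where passauth forwards the invoking user's session key on stdin (the command name here is hypothetical):

# commands.conf
[sharebyrole]
filename = share_by_role.py
passauth = true

The script can then call the REST ACL endpoints with that key; any extra secret it needs is conventionally kept in the app's storage/passwords and read by a service account rather than by end users.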
I am trying to perform an automatic lookup on an IP field against two lookup definitions/tables. One is a list of IPs with their department, and the other is a list of networks with their department (with CIDR match configured). I would like the two automatic lookups to use the following logic:
- If there is a match on the IP list, use the department from that IP record.
- Else, if there is a CIDR match on the network list, use the department from that network record.
- Else, if there are no matches from either, do nothing (default behavior).
Here's an example of the lookup text for each:
- ip_list: ip OUTPUT(NEW) dept AS ip_dept
- network_list: network AS ip OUTPUT(NEW) dept AS ip_dept
I tried doing OUTPUT on the ip_list and OUTPUTNEW on the network_list, but that excludes network lookups. I tried doing OUTPUTNEW on both, hoping for an alphabetical order of operations, but that doesn't seem to be working either. Any ideas would be appreciated - thank you!
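On the ordering question: automatic lookups in props.conf run in ASCII order of the LOOKUP-<name> key, and OUTPUTNEW only fills a field that is still empty, so precedence can be forced by naming. A hedged sketch, with an illustrative sourcetype stanza:

# props.conf
[my_sourcetype]
LOOKUP-10_ip_dept = ip_list ip OUTPUTNEW dept AS ip_dept
LOOKUP-20_net_dept = network_list network AS ip OUTPUTNEW dept AS ip_dept

With OUTPUTNEW on both, the exact-IP lookup populates ip_dept first and the CIDR lookup only fills events the first one left empty, which matches the if/else-if/else logic above.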
I have a user who wants to send a table resulting from | stats values() to a summary index via the collect command, but all of the logs in this summary index need to be in JSON format. By default, collect just separates the field-value pairs with commas. How would we format these as JSON before or after the collect command sends them to the summary index?
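One hedged option is to serialize each row to JSON in _raw before collect, using tojson (available in recent Splunk versions; the index and field names here are placeholders):

... | stats values(action) AS action BY user
| tojson
| collect index=my_summary

With no arguments, tojson writes the JSON object for each result row into _raw, so the events land in the summary index already JSON-formatted.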
What is the best way to show a big message of nearly 15,000 characters in one cell of a Splunk dashboard table?
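A common compromise, sketched under the assumption the field is named message: render a truncated preview in the table cell and keep the full value available through row expansion or a drilldown:

... | eval message_preview=substr(message, 1, 500)."..."
| table _time, message_preview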
<search>
| eval vm_unit=case(vmSize="Standard_F16s_v2",2, vmSize="Standard_F8s_v2",1, vmSize="Standard_F4s",0.5, vmSize="Standard_F2s_v2",0.25)
| timechart span=1h dc(vm_name) sum(vm_unit) as USED_VMS
I am looking for the sum of vm_unit for distinct VMs by the hour, but this considers all the VMs instead of distinct VMs.
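A sketch that counts each VM once per hour by collapsing to one row per VM before summing (assumes vm_unit is constant for a given VM within the hour):

<search>
| eval vm_unit=case(vmSize="Standard_F16s_v2",2, vmSize="Standard_F8s_v2",1, vmSize="Standard_F4s",0.5, vmSize="Standard_F2s_v2",0.25)
| bin _time span=1h
| stats max(vm_unit) AS vm_unit BY _time, vm_name
| timechart span=1h sum(vm_unit) AS USED_VMS dc(vm_name) AS DISTINCT_VMS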
Why does the Network Toolkit app open when I use the PagerDuty app?
Hi Splunkers, I want to create a search that sends results to an "on call" system only out of hours: Monday to Friday from 5:30 PM until the next day at 8:30 AM, and 24 hours a day on the weekend, from Friday at 5:30 PM until Monday at 8:30 AM. So basically, I don't want to send any results during business hours, 8:30 AM to 5:30 PM, Monday to Friday. I am not sure whether it's easier to set this up with the cron scheduler once my search is ready, or with earliest/latest and some eval command within the search. I am also wondering whether this can be achieved in one search, or whether I should create one for Monday to Friday and another for the weekend, given that the time ranges are different. Does anyone have an idea how best to achieve this? Much appreciated.
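The window can be computed inside the search itself, so one scheduled search can cover both weeknights and weekends; a sketch that keeps only out-of-hours results (business hours assumed to be exactly 08:30 to 17:30, Monday to Friday):

... your search ...
| eval dow=tonumber(strftime(_time, "%w")), mins=tonumber(strftime(_time, "%H"))*60 + tonumber(strftime(_time, "%M"))
| where dow=0 OR dow=6 OR mins<510 OR mins>=1050

510 and 1050 are minutes since midnight for 8:30 AM and 5:30 PM, and %w yields 0 for Sunday and 6 for Saturday. Two cron schedules would also work, but the filter keeps it to one search.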
Hi team, I would like to drop/trim .png and .jpg files from the output results. It would be appreciated if you could help with a regex or any other idea or solution.
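Assuming the filenames are in a field called file_name (adjust to your data), the regex command can exclude them:

... | regex file_name!="(?i)\.(png|jpe?g)$"

The (?i) makes the match case-insensitive, so .PNG and .JPG are dropped as well.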
Hi team, the KV store is failing frequently on both SHs and indexers after upgrading to 9.0.2. Can someone help if there is any known resolution?
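A first triage step, sketched with component names that typically appear in splunkd logs for KV store issues (verify against your environment):

index=_internal sourcetype=splunkd log_level=ERROR (component=KVStoreMgr OR component=MongodRunner)
| stats count BY host, component

The underlying mongod.log on each affected instance usually names the concrete failure; port conflicts, expired certificates, and oplog problems are common culprits after upgrades.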
I'm pretty new to Splunk ES and have a pretty basic question. How do I set up an adaptive response for every new notable event to send an email to a dlist? I see the option to add an adaptive response/email to each correlation search, but I am trying to configure it in one place so that an email is sent for any new notable event, linking back to the alert on the Incident Review screen. Any guidance is appreciated. Thanks.
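One centralized pattern, sketched here rather than configured per correlation search: schedule a single alert over the ES notable index and attach the email action to it (the index name notable is the ES default; the fields shown are the usual notable fields):

index=notable
| table _time, search_name, urgency, owner

Saved as an alert running every few minutes over the last few minutes, with the "Send email" trigger action pointed at the dlist, this fires for any new notable regardless of which correlation search produced it.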
I have a Splunk search query which shows the details, but the problem is that it only shows results if the hostname passed in the text box includes the FQDN. If the hostname is entered without the FQDN, it won't show any results. How do I make the query work whether I pass abc123.xyz.com or abc123? Apologies if this has already been answered; I'm very new to Splunk.
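One hedged sketch: let the query accept either form by matching the short name with a wildcard for the domain suffix (host_tok is an assumed token name for the text box; adjust the index):

index=your_index (host="$host_tok$" OR host="$host_tok$.*")

Entering abc123 then matches both abc123 and abc123.xyz.com, while entering the full FQDN still matches exactly via the first clause.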
Dear colleagues, please help me write a query to get data about all reports and alerts. I need information such as:
1. The execution time of each report and alert.
2. How much resource each completed report and alert consumes, and things like that.
I tried to find this information in the Monitoring Console, but did not find it broken down by individual report and alert. I will be grateful!
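The scheduler log in _internal records every scheduled report and alert run, so per-search statistics can be computed directly; a sketch:

index=_internal sourcetype=scheduler status=*
| stats count AS runs avg(run_time) AS avg_runtime_sec max(run_time) AS max_runtime_sec BY app, savedsearch_name

run_time is reported in seconds, and status also distinguishes skipped and continued runs.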
Hello team, I am new to Kubernetes and Splunk. I have a requirement to push logs generated by my Spring Boot app, running in Kubernetes pods, to Splunk. How can I forward the logs generated in a pod? I can access the logs using the command kubectl logs <pod-name>.
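A common pattern, though not the only one: run a collector (such as Splunk Connect for Kubernetes or the Splunk OpenTelemetry Collector) as a DaemonSet that tails container stdout and forwards it to a Splunk HTTP Event Collector (HEC) endpoint. A hedged values.yaml sketch for the Splunk Connect for Kubernetes Helm chart (the key layout varies by chart version; host, token, and index are placeholders):

global:
  splunk:
    hec:
      host: splunk.example.com
      port: 8088
      token: 00000000-0000-0000-0000-000000000000
      indexName: k8s_logs

With this in place, the Spring Boot app only needs to write its logs to stdout; the collector picks them up from the node, so nothing changes inside the pod.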
We are in the process of deploying DCNs to collect telemetry on our VMware environment. As we deal with the relatively hefty system requirements, I am wondering why this is a better option than relying on our SNMP monitoring tool (which is integrated with Splunk). Doing so would save us the resource overhead and the single points of failure that the DCNs create. Are there any features or metrics that we'll forgo if we don't use DCNs?
There is a common belief that too many indexes cause performance issues. Is it true, and what are the recommendations?
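Bucket counts and search patterns, more than the index count itself, tend to drive most of the overhead, so measuring is a useful first step; a sketch using dbinspect:

| dbinspect index=*
| stats count AS buckets BY index, state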