All Topics

Hello, I feel like this is such a noob question, but I just cannot find my answer. I want to include the earliest and latest datetime criteria in the results. The spans produced by bucket _time do not guarantee that data occurs in each of them, and I want to show the range of the data searched for in a saved search/report.

index=idx_noluck_prod source=*nifi-app.log* APILifeCycleEventLogger "Event Durations(ms)" API=/v*/payments/ach/*
| bucket _time span=day
| stats count(eval(EndToEnd < 1200)) as EndToEnd_Completed_1.2-Seconds, count(eval(EndToEnd)) as Total_Transactions by ClientId, _time

Thank you all in advance for increasing my understanding and knowledge. Steven
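One possible approach (a sketch, not a confirmed answer): the addinfo command attaches the search's own time bounds as info_min_time and info_max_time, which can be formatted and carried through stats. The index, source, and field names below are taken from the question as-is.

```spl
index=idx_noluck_prod source=*nifi-app.log* APILifeCycleEventLogger "Event Durations(ms)" API=/v*/payments/ach/*
| addinfo
| eval range_earliest=strftime(info_min_time, "%Y-%m-%d %H:%M:%S"), range_latest=strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
| bucket _time span=day
| stats count(eval(EndToEnd < 1200)) as EndToEnd_Completed, count(eval(EndToEnd)) as Total_Transactions, first(range_earliest) as Search_Earliest, first(range_latest) as Search_Latest by ClientId, _time
```

This reports the boundaries of the search window itself rather than of the data, which is usually what a report header needs.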
Hello Splunkers, Do I need a domain account for the UF to monitor a Domain Controller? I suppose I need the UF running under a domain account when monitoring AD. If I only wish to monitor standard Windows event logs from the DC, then the Local System account should be enough. My DS would be Linux-based; do I need another new DS on Windows to push apps to the DC? The docs mention that Splunk Enterprise should be on Windows to monitor a DC; do they only mean that for the indexer, or should the DS also be Windows-based?
I've encountered several people who have had trouble getting DBX installed. I took some extra time and created a guide covering installation on an All-In-One configuration. https://gist.github.com/JacobCarrell/73cb46eb0dded139a487d5a36e0ab474 Feedback is welcome.
Just a quick question. I have no experience with Splunk, but my company uses it to collect data. My Splunk query:

sourcetype=st_file
| fillnull clientTransactionID msisdn ocsID orderID applicationID productType providerName reponseCode responsedetail reponseMessage actiontype bNumber gatewayTransactionID
| stats count as trx by _time clientTransactionID msisdn ocsID orderID applicationID productType providerName reponseCode responsedetail reponseMessage actiontype bNumber gatewayTransactionID

The result is then exported to a CSV file like this:

"_time",clientTransactionID,msisdn,ocsID,orderID,applicationID,productType,providerName,reponseCode,responsedetail,reponseMessage,actiontype,bNumber,gatewayTransactionID,trx
"2020-10-17T17:20:00.000+0700",023029300002187960,6281220636564,TC01,0,RBT0000,SP,RBT,2,"0|1630429199000|65","CHARGING_SUCCEEDED",F,000,"RBT0000:023029300002187960",1

Can anyone suggest how to change the delimiter from "," to "|" in my search query? I have read that it can be changed in the .conf files, but since I can't get access to those files, I have no clue here. Thank you
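One workaround that avoids touching the .conf files (a sketch; field names are copied from the question) is to build the pipe-delimited line inside the search itself with eval string concatenation, then export only that single field:

```spl
sourcetype=st_file
| fillnull clientTransactionID msisdn ocsID orderID applicationID productType providerName reponseCode responsedetail reponseMessage actiontype bNumber gatewayTransactionID
| stats count as trx by _time clientTransactionID msisdn ocsID orderID applicationID productType providerName reponseCode responsedetail reponseMessage actiontype bNumber gatewayTransactionID
| eval row=strftime(_time, "%Y-%m-%dT%H:%M:%S.%3N%z")."|".clientTransactionID."|".msisdn."|".ocsID."|".orderID."|".applicationID."|".productType."|".providerName."|".reponseCode."|".responsedetail."|".reponseMessage."|".actiontype."|".bNumber."|".gatewayTransactionID."|".trx
| fields row
```

Exporting this to CSV yields one pipe-delimited value per event; the CSV wrapper around it can then be stripped or ignored downstream.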
There are some Liberty services, and on some hosts we have many microservices. I want to monitor CPU/memory usage on a particular host. Is there a Splunk query that will help me get this?
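If the hosts run a Universal Forwarder with the Splunk Add-on for Unix and Linux, CPU data typically lands in a cpu sourcetype; a rough sketch, assuming an index named os and a host named myhost (both placeholders):

```spl
index=os sourcetype=cpu CPU=all host=myhost
| timechart span=5m avg(pctIdle) as avg_pct_idle
```

Memory could similarly be sketched from sourcetype=vmstat and its memUsedPct field; the actual index and sourcetype names depend on how the add-on inputs are configured in your environment.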
Hi Team, Since we don't have a direct integration with Remedy, we are using an HTTP request template to integrate Remedy with AppDynamics. We have a requirement that we would like to attach a runbook to each action triggered while creating a ticket in Remedy. I don't find any option to attach documents in the HTTP request template. Any idea how we can achieve this?
I am looking for a way to import and create a new API doc in Drupal 8 whenever we have a new API product in Apigee Edge. In other words, I'm looking for synchronization between the Drupal portal and Apigee Edge in terms of API products, just like apps.
Hi, I have two entries for this productid. Is it possible to consolidate them into one entry, maybe with evals?

productid  field1  field2  field3
abc        test
abc                week1   3
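One common way to collapse such rows (a sketch; the index name my_index is a placeholder, field names are from the question) is stats with values(), which keeps the non-null value of each field per productid:

```spl
index=my_index productid=abc
| stats values(field1) as field1, values(field2) as field2, values(field3) as field3 by productid
```

If a field can legitimately hold several different values per productid, values() returns them all as a multivalue field; first() or max() are alternatives when exactly one value is wanted.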
I am very new to Splunk. I have two log files. The first one, let's call it accessLog, contains the access log for the HTTP requests. A Splunk query could give me the count for each request:

url                  count
http://host1/query1  10
http://host1/query2  20

The second log file, let's call it errorLog, contains only errors for the requests; each line contains a keyword of the url. A Splunk query could give me the result:

keyword  errorCount
query1   2
query2   8

I want to calculate the success ratio for each request:

url                  success ratio
http://host1/query1  80%
http://host1/query2  60%

It could be described with the following SQL:

SELECT url, COUNT(url),
       (COUNT(url) - (SELECT COUNT(keyword) FROM errorLog WHERE url LIKE '%keyword%')) / COUNT(url) AS successRatio
FROM accessLog
GROUP BY url

Could this be done in a Splunk query? Thanks in advance.
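A sketch of one possible approach, assuming sourcetypes named access_log and error_log (placeholders for the real names) and that the keyword equals the last path segment of the url: search both sourcetypes at once, normalize every event to a keyword, then compute the ratio with eval.

```spl
(sourcetype=access_log) OR (sourcetype=error_log)
| eval keyword=coalesce(keyword, replace(url, ".*/", ""))
| stats count(eval(sourcetype="access_log")) as total, count(eval(sourcetype="error_log")) as errors, values(url) as url by keyword
| eval successRatio=round((total-errors)/total*100, 1)."%"
```

Searching both sourcetypes in a single pass and splitting the counts inside stats avoids a join, which is generally more robust in SPL than subsearch-style joins.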
Hi, We are going to deploy changes which will delete a certain package from an instance. We want to know whether this package actually gets deleted after the changes go through. We are capturing this data in Splunk. So let's say we have package=abc. We can find whether the package exists using the following SPL:

index=osstats sourcetype=package "abc"
| bin _time span=1d
| multikv fields NAME
| eval package_exist=if(like(NAME,"abc%"),1,0)
| eval package_name=if(like(NAME,"abc%"),NAME,NULL)
| stats count by _time host package_exist package_name

The index is polled hourly, so a search over the last 24 hours reports count=24, host=abc.com, package_exist=1, package_name=abc. I have now created a lookup table from this covering the last year's worth of data. What I want to know is: given a host (it doesn't have to be part of the above query), I want to check whether it had the package earlier and the package is now removed. I am not sure how to go about doing that.
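One way to sketch this, assuming the lookup is named package_history.csv with host and package_name columns (hypothetical names): start from the lookup, left-join the current state of each host, and flag hosts that were in the history but report no matching package now.

```spl
| inputlookup package_history.csv
| fields host package_name
| join type=left host
    [ search index=osstats sourcetype=package "abc" earliest=-24h
      | multikv fields NAME
      | where like(NAME, "abc%")
      | stats count as current_count by host ]
| eval removed=if(isnull(current_count) OR current_count=0, 1, 0)
| where removed=1
```

Hosts surviving the final where are those that historically had the package but show no trace of it in the last 24 hours; for large host counts, a lookup-based enrichment of the live search may scale better than join.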
I have 3 panels with single values in a dashboard. Example:

Panel A: 15 HRS
Panel B: 3 HRS
Panel C: 4 HRS

I want the average of all 3 in one panel, say Panel D = avg(Panel A, B, C). The searches in panels A, B, and C are different and use different sourcetypes/indexes, so I don't want to merge all the searches into one.
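One sketch for Panel D is to union the three searches with append and average their single values. Everything below is a placeholder: idx_a/idx_b/idx_c stand for each panel's real base search, and hours stands for whatever field each search actually produces.

```spl
index=idx_a
| stats avg(duration) as hours
| append [ search index=idx_b | stats avg(duration) as hours ]
| append [ search index=idx_c | stats avg(duration) as hours ]
| stats avg(hours) as avg_hours
```

The three panel searches stay untouched; only Panel D carries the combined query. An alternative is to publish each panel's result into a dashboard token (done="..." / set token) and compute the average with eval in a fourth search.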
I used JS to enable the cache flag on the searches running on my dashboard using the idea here (https://community.splunk.com/t5/Dashboards-Visualizations/Can-I-cache-searches-in-Simple-XML-using-the-method-available-in/m-p/206447). It worked for me when I tested it on a standalone Splunk Enterprise environment. However, when I tried it in a clustered environment it did not work, though I do see that the flag is set to true when I inspect the search's JS object through the browser's developer tools. The searches run after the flag is set (I use a token to control when they run), so that isn't the issue. Does anyone know what the issue could be, or where to debug it?

Notes:
- Cannot use saved searches or report acceleration
- Already using data model acceleration
Whenever I run a query, my search gets auto-cancelled. My search head is a single search head searching an indexer cluster. What could be the reason? We have faced this since the recent upgrade to 7.3.
Run a search in the metrics index over the last 15 minutes. Use the stats command to find the middle number, the average, and the most frequent occurrence of the field called "UsedBytes". Use an 'as' clause to rename all your fields. Split by users. Add column totals, customize your totals label, and place it underneath the user column. Finally, ONLY evaluate UsedBytes values that are MORE THAN 0.

My string didn't work:

index=metrics | stats mode(UsedBytes), avg(UsedBytes), median(UsedBytes) as Bytes

I am new to Splunk and need some assistance.
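A sketch of a query matching that exercise description (field names user and UsedBytes are taken from the prompt; the label text is arbitrary). Note the original attempt only renamed the last aggregation; each one needs its own as clause, and the >0 filter belongs in the base search:

```spl
index=metrics UsedBytes>0
| stats median(UsedBytes) as Median, avg(UsedBytes) as Average, mode(UsedBytes) as Mode by user
| addcoltotals labelfield=user label=Total
```

addcoltotals appends a totals row, and labelfield/label place a custom label in the user column, which matches the "underneath the user column" requirement.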
I have created a role and granted access to indexes for a user. However, this user cannot search the index, while other users can. The user is also in 3 other roles. How can I troubleshoot this? In which log can I check for access-related events?
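One place to look (a sketch; problem_user is a placeholder for the actual username, and searching _audit requires appropriate privileges) is the _audit index, which records search activity per user:

```spl
index=_audit action=search user=problem_user
| table _time user info search
```

Since role capabilities and index restrictions merge across all of a user's roles, it is also worth checking the combined effect of the 4 roles under Settings > Roles; a restrictive srchIndexesAllowed or search filter inherited from another role can override the new grant.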
Hello, Looking for some assistance with an existing query:

rex max_match=0 field=_raw "IP BLOCK TYPE\",value=\"(?<IP_Block_Type>.*?)\s*(\w*+)\]"
| eval IP_Block_Type=substr(IP_Block_Type, 1, len(IP_Block_Type)-1)

This query gives us a column with outputs. I need assistance with pulling only the entries in that column that are exactly "OVERRIDE". Thanks
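Since max_match=0 can make IP_Block_Type multivalue, one sketch is to keep only the OVERRIDE values with mvfilter and then drop events left with nothing (the rex and substr lines are copied from the question):

```spl
rex max_match=0 field=_raw "IP BLOCK TYPE\",value=\"(?<IP_Block_Type>.*?)\s*(\w*+)\]"
| eval IP_Block_Type=substr(IP_Block_Type, 1, len(IP_Block_Type)-1)
| eval IP_Block_Type=mvfilter(match(IP_Block_Type, "^OVERRIDE$"))
| where isnotnull(IP_Block_Type)
```

If the field is always single-valued, a plain | where IP_Block_Type="OVERRIDE" is enough.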
Hello, I am new to Splunk and was wondering how I would filter out (or even report/alert) on non-RFC-compliant traffic from our PAN logs. Any suggestions would be appreciated. Thanks, David
Does anyone know how to log INFO and WARN log_level events to $SplunkHome\var\log\splunk\splunk-powershell.ps1.log or $SplunkHome\var\log\splunk\splunkd.log for debug purposes using the PowerShell input? Using the code below in the script terminates it:

Write-Error -Message 'INFO test'

10-16-2020 12:47:05.4859538-5 ERROR Executing script=& "$SplunkHome/etc/apps/foo/bin/foo.ps1" for stanza=foo failed with exception=INFO test

The script itself does run correctly; I would just like to have some debug options. Thanks.
I created a lookup CSV file, but when I try to search for it under Lookups I don't see the file. It won't allow me to create one with the same name because one already exists. Any help with why I am not able to see the file, please?
Hello everyone, I am currently working on collecting logs from F5 BIG-IP. I have a distributed Splunk infrastructure: Heavy Forwarder, Indexer & Search Head. I installed the Splunk Add-on for F5 BIG-IP on the Search Head and Heavy Forwarder instances as recommended in the Splunk documentation here: https://docs.splunk.com/Documentation/AddOns/released/F5BIGIP/Install Then I discovered that the Splunk Add-on for F5 BIG-IP is not separating sourcetypes as expected! Also, the latest version of the Add-on for F5 BIG-IP (4.0.1) may not work with version 16.0.0 of my F5 firewall. I read that somewhere, but I am not sure about it! Does anyone have an idea, please? Or know when the Add-on will be updated to support it? PS: I'm working with Splunk Enterprise v8.0.4