Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Query 1:

| mstats count(_value) as count1 WHERE metric_name="*metric1*" AND metric_type=c AND status="success" by metric_name,env,status
| where count1>0

Query 2:

| mstats count(_value) as count2 WHERE metric_name="*metric2*" AND metric_type=c AND status="success" by metric_name,env,status
| where count2=0

These queries work fine individually. I need to combine them and show results only if count1>0 and count2=0.
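One possible way to combine them (a sketch, not tested against your data; it assumes env and status are the fields the two result sets should be aligned on, since metric_name differs between them): append the second mstats, re-aggregate, and filter once. Note that mstats never emits zero-count rows, so "count2=0" really means "no metric2 rows", which fillnull makes explicit.

| mstats count(_value) as count1 WHERE metric_name="*metric1*" AND metric_type=c AND status="success" by metric_name,env,status
| append
    [| mstats count(_value) as count2 WHERE metric_name="*metric2*" AND metric_type=c AND status="success" by metric_name,env,status]
| stats sum(count1) as count1 sum(count2) as count2 by env,status
| fillnull value=0 count1 count2
| where count1>0 AND count2=0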
Hi, I would like to create a histogram visualization plotting service call response times between 0 and 15 seconds for each svcName within my search. Please share any tips/advice for creating this. Thank you!
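A minimal sketch of one approach, assuming a numeric responseTime field in seconds (the field name is an assumption, adjust it to your data): bucket the response times into 1-second bins and chart the count per bin, split by svcName.

... your search ...
| where responseTime>=0 AND responseTime<=15
| bin responseTime span=1
| chart count over responseTime by svcName

Rendered as a column chart, this gives one histogram series per svcName.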
Hi Team, I would like to know how to differentiate the business transactions that are auto-discovered by AppDynamics from the custom business transactions. In our application there are around 115 business transactions under the Business Transactions section. The customer wants to know which of them are custom business transactions. Kindly help me. Thanks & Regards, Srinivas
Hi Community, I have data as follows:

Customer   Error Code   Result
Abc        1111         2
Abc        1111         3
Abc        1222         4
Abc        Total        4
Abc        Total        5

I want to combine the Total rows into a single Total row showing the Result column as Total: 9. My code now:

| stats count as Result by Customer, ErrorCode
| eval PercentOfTotal=100
| append
    [search index=sourcetype= abc: source= */ABC/* ErrorCode!=0
    | stats count as Result by Customer
    | eval ErrorCode="Total", PercentOfTotal=100]
| lookup xyz ErrorCode OUTPUT Description
| lookup pqr Customer OUTPUT Customer_Name
| eval Customer_Name=coalesce(Customer_Name,Customer)
| eval Error=if(ErrorCode!="Total", ErrorCode+" ("+coalesce(Description,"Description Missing - Update xyz")+")", ErrorCode)
| fields Customer_Name, Error, Result
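A sketch of one way to collapse the duplicate Total rows (assuming the field names from the search above): add a re-aggregation after the append subsearch, before the lookups, so rows sharing Customer and ErrorCode are summed.

| stats sum(Result) as Result max(PercentOfTotal) as PercentOfTotal by Customer, ErrorCode

With the sample data, the two "Abc / Total" rows (4 and 5) then merge into a single row with Result=9.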
Hi, I am trying to use the HEC, and whenever I run the test cURL examples I get "Connection reset by peer":

curl -u "x:7be6128d-61b5-493d-97f9-95b2b66251d2" "http://127.0.0.1:8088/services/collector/event" -d '{"sourcetype": "mysourcetype", "event": "Hello, world!"}'

curl: (56) Recv failure: Connection reset by peer

Can someone advise me?

Many thanks, Keir
Hi all, if you are using the "Technical Addon for the Metricator application for Nmon" to monitor your indexers in an indexer cluster, the TA does not work after updating to Splunk 9.0. This is due to a name change in the Splunk path. The fix is to update all the scripts in the TA's bin folder:

# Discover TA-metricator-for-nmon path
change "$SPLUNK_HOME/etc/slave-apps/TA-metricator-for-nmon"
to "$SPLUNK_HOME/etc/peer-apps/TA-metricator-for-nmon"
What would the benefits be of disabling bundle replication to indexers, and what is the downside of disabling it?

When searching index=_internal we see a lot of warnings that replication took too long, and several times a day a "timeout" event when reading from a peer. We receive a lot of these events, yet no customers are reporting failed lookups or failed searches.

We are not using cascading bundle replication, but maybe that would stop the error events, since it does not retry before every peer has got its copy.

Thanks in advance, Kai
I used the Splunk Enterprise Trial version to test out the APIs. When I tried to get a particular saved search by entering its name, I got the correct result. But when I tried to get all the saved searches in the instance using

https://localhost:8089/servicesNS/<user>/-/saved/searches/

I could not get the records. Can you please help me?
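Two things that may be worth checking, as a sketch: the endpoint only returns saved searches visible to the user in the path, and it pages its output (by default only the first 30 entries come back; count=0 requests all of them). The same listing can be cross-checked from within Splunk with the rest command:

| rest /servicesNS/-/-/saved/searches count=0
| table title eai:acl.app eai:acl.owner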
Each event contains 1-many transaction names with associated metrics, as per the example below:

2022-08-03T08:47:49.4554569Z TransNames: DavidTrans_2 DavidTrans_1 Total DavidTrans_3
2022-08-03T08:47:49.4633642Z Name: DavidTrans_2
2022-08-03T08:47:49.4995979Z DavidTrans_2 - TransactionsPerSec: 0.92
2022-08-03T08:47:49.5180222Z Name: DavidTrans_1
2022-08-03T08:47:49.5245825Z DavidTrans_1 - TransactionsPerSec: 0.96
2022-08-03T08:47:49.5339575Z Name: DavidTrans_3
2022-08-03T08:47:49.5405933Z DavidTrans_3 - TransactionsPerSec: 0.97

From this event I want to create a multi-series timechart, where X is _time (the event time), Y is the value of TransactionsPerSec, and each graph line represents one of the transactions (DavidTrans_1, DavidTrans_2, DavidTrans_3). This is just an example; the transaction names and the number of transactions will differ.

I have tried many different ways of doing this, but will paste what I just tried, which does not display any metrics and where the format is completely screwed up:

SEARCH XXXXXXXXX
| rex field=_raw max_match=0 " (?<TRANSNAME>.*) - TransactionsPerSec: (?<TRANSPERSEC>.*).*"
| timechart list(TRANSNAME),list(TRANSPERSEC) by TRANSNAME

I can add that using the stats command I can at least get the values in a nice table:

SEARCH XXXXXXXXX
| rex field=_raw max_match=0 " (?<TRANSNAME>.*) - TransactionsPerSec: (?<TRANSPERSEC>.*).*"
| stats list(TRANSNAME),list(TRANSPERSEC)

I feel I have done similar things before, but for some reason getting the values displayed in a timechart was tricky this time.
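A sketch of one approach (field names taken from your rex): because max_match=0 produces multivalue fields, pair each name with its value using mvzip, expand each pair into its own row, then timechart by the name. A tighter regex also helps, since ".*" is greedy and can swallow too much of the line.

SEARCH XXXXXXXXX
| rex field=_raw max_match=0 "(?<TRANSNAME>\S+) - TransactionsPerSec: (?<TRANSPERSEC>[\d.]+)"
| eval pair=mvzip(TRANSNAME, TRANSPERSEC, "|")
| mvexpand pair
| eval TRANSNAME=mvindex(split(pair,"|"),0), TRANSPERSEC=tonumber(mvindex(split(pair,"|"),1))
| timechart avg(TRANSPERSEC) by TRANSNAME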
Transaction_Log__c:
{"message":"Entering doPost method","level":"INFO","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.880Z"}
{"message":"Source System is : null","level":"DEBUG","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.886Z"}
{"message":"Request Body : {\"data\":{\"dealerCode\":\"FARC\",\"dealerType\":\"Premise\"}}","level":"DEBUG","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.888Z"}
{"message":"Request Type/Parameters are : {}","level":"DEBUG","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.889Z"}
{"message":"Deserializing the reqBody","level":"INFO","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.890Z"}
{"message":"Entering getSuccessResponse method and parameter are -->TLS_Store__c:{Id=a7O5L000000000zUAA, Name=TELSTRA SHOP BONDI JUNCTION, TLS_DeliveryAddress__c=a2r5L000000YybMQAS, TLS_CompanyName__c=TRS SHOPS, TLS_PremiseCode__c=FARC, TLS_DealerChannel__c=TSN, TLS_DealerStatus__c=Active, TLS_HROrgUnitCode__c=90000561, TLS_DealerABN__c=33051775556, TLS_DealerACN__c=51775556, TLS_DealerEmail__c=bondijunction@team.telstra.com, TLS_DealerPhone__c=1800 723 917, TLS_DealerType__c=Premise, TLS_DealerParent__c=a7O5L000000075BUAQ, TLS_PhysicalAddress__c=a2r5L000000YybMQAS}","level":"INFO","loggerName":"StoreManagementAPI_ResponseMessages","timestamp":"2022-08-03T11:45:25.928Z"}
{"message":"Exiting getSuccessResponse method","level":"INFO","loggerName":"StoreManagementAPI_ResponseMessages","timestamp":"2022-08-03T11:45:25.930Z"}
{"message":"Exiting Post method","level":"INFO","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.931Z"}
Hi, I am trying to visualize the total execution time for each process. Is there a way to display the results for each process in a single column, as in the example below?
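A minimal sketch, assuming each event carries a process field and a numeric execution time in a field called duration (both names are assumptions):

... your search ...
| stats sum(duration) as total_execution_time by process

Rendered as a column chart, this shows one column per process with its total execution time.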
In an online example that lets you export a Splunk result, I found the following code:

<a class="btn btn-primary" role="button" href="/api/search/jobs/$export_sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=$filename_token$&amp;outputMode=csv">Download CSV</a>

This does almost exactly what I want, so I tried to find more information about what is happening. I see some parameters there and I want to understand them:

isDownload=true
timeFormat=%25FT%25T.%25Q%25%3Az
maxLines=0
count=0
filename=$filename_token$
outputMode=csv

I think the fields are almost self-explanatory, but I would like to read the official documentation, and I would also like to know what other parameters I can provide. When looking for the documentation I only found Search endpoint descriptions - Splunk Documentation, but this does not describe the parameters passed in the example. Where can I find an explanation of the parameters used?
Hi everyone, I want a report showing only the maximum value (days_since) and the condition based on that maximum value (Pending_since). I would appreciate your help. This is my search:

index=...............
...................
| eval days_since = floor((now() - _time) / 86400)
| eval Pending_since = case(days_since == 0, "Today", days_since < 30, "Pending (< 30 days)", days_since > 45, "Pending ( > 45 days)", days_since > 30, "Pending ( 30>Days<45 )", days_since < 45, "Pending ( 30>Days<45 )", days_since > 1, days_since . " Days")
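A sketch of one way to keep only the event(s) with the maximum days_since, computing Pending_since after the filter (the case branches are also reordered so each band is reachable, since case returns the first match):

index=...............
| eval days_since = floor((now() - _time) / 86400)
| eventstats max(days_since) as max_days_since
| where days_since=max_days_since
| eval Pending_since = case(days_since == 0, "Today", days_since < 30, "Pending (< 30 days)", days_since > 45, "Pending ( > 45 days)", true(), "Pending ( 30>Days<45 )")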
Hello! I have just created a trial account to try the OpenTelemetry integration. When I go to the OTel tab and press the button to generate a key, I simply get an error. The UI makes a request to "/controller/restui/otel/onBoardCustomer", which returns a 500 error:

{ "displayText" : "Error occurred in on-boarding customer", "messageKey" : null, "localizationBundle" : null, "showWithNotificationService" : true, "rootExceptionClass" : "", "notErrorMessage" : false, "unauthorizedAccess" : false, "noDataFound" : false }

So is the trial account not licensed for OTel?
I am trying to change the Inactive Account Activity Detected search so that it uses a time range of more than 365 days ago (instead of less than 90 days ago) and greater than 2 hours ago. Every time I add a greater-than symbol or change the 90 days, I get an error message in Splunk. Can anyone change this search so it looks for inactive accounts from over 365 days ago which have just been logged into today?

| `inactive_account_usage("90","2")`
| `ctime(lastTime)`
| fields + user,tag,inactiveDays,lastTime
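A sketch, assuming the macro's two arguments are the inactivity window in days and the recent-activity window in hours (worth confirming against the macro definition under Settings > Advanced search > Search macros): pass 365 as the first argument rather than adding comparison operators, and filter on inactiveDays if you need a strict threshold.

| `inactive_account_usage("365","2")`
| where inactiveDays>365
| `ctime(lastTime)`
| fields + user,tag,inactiveDays,lastTime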
We have a problem keeping the logs from AWS: the hostname is random, so I can't specify the host.
How do I resolve this error: Unable to initialize modular input "taxii" defined in the app "SA-Splice": Introspecting scheme=taxii: script running failed (exited with code 1)?
I have 2 values, time received = 161300 and time sent = 161259, and I want to get the timestamp difference, which should be 1 second. diff = time received - time sent gives 41 seconds, which is incorrect. Please help with the correct query. The data given above is in hhmmss format.
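A sketch, assuming the two values live in fields named time_received and time_sent (adjust to your actual field names): parse the hhmmss strings into epoch times before subtracting, because subtracting them as plain numbers compares digits, not time (161300 - 161259 = 41).

| eval diff = strptime(time_received, "%H%M%S") - strptime(time_sent, "%H%M%S")

This yields 1 (second) for the sample values.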
Currently, when I want to catch errors coming from an arbitrary action block, I rely on phantom.get_summary() and look at the action status. But I notice that Custom Function blocks cannot be found there, so my question is: how do I check the status of a Custom Function block and catch errors originating there?
Hello, I'm working on a use case where I have 1 source and 2 destinations. Everything that flows between the source and the 2 destinations needs to be excluded. So I've used:

where source = X AND destination != Y OR destination != Z

But this filters the logs so that only the logs coming from source X are displayed, and the logs that come from other sources are excluded as well. How can I exclude only the traffic from source X to destinations Y and Z?
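A sketch of the boolean fix (X, Y, and Z stand in for your actual values): negate the whole source-to-destination condition instead of combining != with OR, so traffic from every other source passes through untouched.

| where NOT (source="X" AND (destination="Y" OR destination="Z"))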