All Topics


I run a stats command every hour to show a list of firewall rules that are getting hit in a particular way. My command works for the hourly run, but I can't get a report to keep a running total of my firewall rule hit count. I've tried the following, but it's not working. Can anyone help here?

index=rsyslog firewall-ABC
    [search index=rsyslog (IONET_allow_BLAH_in OR IONet_allow_BLAH_outbound) host=firewall_XYZ.nascom.nasa.gov
    | table source_address, destination_address, destination_port]
NOT (policy_id=1 OR policy_id=2)
| sistats count by policy_id, source_address, destination_address
| summaryindex spool=t uselb=t addtime=t index="summary" file="RMD5eef7b35350423340_1029407874.stash_new" name="Delegation_Fails" marker=""

Thanks,
Paul
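A common pattern for a running total is to let the hourly scheduled search populate the summary index via sistats (summary indexing handles the collect step for you), then run a separate report over the summary. A hedged sketch of that reporting search, assuming the saved search is named "Delegation_Fails" (summary indexing sets the source of summary events to the saved-search name by default):

```spl
index=summary source="Delegation_Fails"
| stats count by policy_id, source_address, destination_address
```

The stats command here rolls up the si-results across all hourly runs in whatever time range you pick, giving the cumulative hit count per rule.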
On our HF we have routing rules in transforms.conf which take a long time to evaluate and are creating a bottleneck for us. We have the following numbers of routing rules:

~2000 entries for index routing
~200 entries for sourcetype routing

Can you please provide suggestions to route the events faster and more efficiently? Sample from transforms.conf:

[route_sentinel_to_index]
INGEST_EVAL = index:=case(\
match(_raw, "\"TENANT\":\"xxxxxx-b589-c11a968d4876\""), "nacoak_mil", \
. . . <1997 entries> . .
match(_raw, "\"EVENT_TIME\":\"\d{13}\""), "unknown_events", \
true(), "unknownsentinel")

[apply_sourcetype_to_sentinel]
INGEST_EVAL = sourcetype:=case(\
match(_raw, "\"SYSTEM\":\"xxxx-b3a7-xxxxxx\""), "cs:fhir:prod:audit:json", \
match(_raw, "\"SYSTEM\":\"xxxxxxx-d424c20xxxx\""), "cs:railsapp_server:ambulatory:audit:json", \
. .<198 entries> .
true(), "cs:sentinel:unknown:audit:json")
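Since the routing keys (TENANT, SYSTEM) are JSON fields, one possible direction — a sketch only, and an assumption that your Splunk version supports json_extract() inside INGEST_EVAL — is to extract the key once per event and compare the extracted value, instead of running ~2000 full-_raw regex matches per event:

```spl
[route_sentinel_to_index]
INGEST_EVAL = tenant:=json_extract(_raw, "TENANT"), index:=case(\
tenant=="xxxxxx-b589-c11a968d4876", "nacoak_mil", \
true(), "unknownsentinel")
```

Whether this is actually faster needs to be benchmarked on your data; a further step in the same spirit would be replacing the case() entirely with an ingest-time lookup keyed on the tenant value, if your version supports ingest-time lookups.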
Hi team,

Has anyone faced issues with Phantom playbooks not processing events after upgrading from v5.0.1 to v5.1.0?

Thanks,
Sharada
I have some sources that are coming in as JSON, and I am experiencing odd behavior where I cannot search on a particular field; I can only find the value when searching against the _raw data. For example, I have a field, let's say "cluster", and I see it is extracted just fine in the "Interesting fields" on the left-hand side. One of the values, we'll say, is "cluster-name-A". If I search in the query bar for:

cluster="cluster-name-A" sourcetype=mysourcetype index=myindex

I get no results. However, if I just do a blanket search:

cluster-name-A sourcetype=mysourcetype index=myindex

my expected results come back fine. What can I investigate here to see why it will not let me use the field name in our searches?
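One diagnostic worth trying (a sketch using the index/sourcetype names from the question above): apply the same field filter after a pipe instead of in the base search.

```spl
index=myindex sourcetype=mysourcetype
| search cluster="cluster-name-A"
```

If this version returns results while the base-search filter does not, one possible cause is a mismatch between index-time and search-time extraction of that field — for example INDEXED_EXTRACTIONS and KV_MODE both acting on the sourcetype, or a stale fields.conf entry marking the field as indexed — so comparing the props.conf/fields.conf settings for that sourcetype would be the next step.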
Hello, when I table the results, the rows in one column do not line up exactly with the rows in the next column. What can I add to resolve this issue? Please find the screenshot of the results below.

|rex field=_raw "(TEST_DETAIL_MESSAGE\s\=)(?<MESSAGE>\w+\D+\,)" max_match=0
|rex field=_raw "(TEST_COUNT\s\=)(?<COUNT>\s\d+)" max_match=0
| table MESSAGE COUNT
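Two independent rex extractions with max_match=0 produce multivalue fields whose rows are not paired, which is why the columns drift apart. A common way to align them — a sketch, assuming MESSAGE and COUNT occur the same number of times in each event — is to zip the two multivalue fields together and expand the pairs:

```spl
|rex field=_raw "(TEST_DETAIL_MESSAGE\s\=)(?<MESSAGE>\w+\D+\,)" max_match=0
|rex field=_raw "(TEST_COUNT\s\=)(?<COUNT>\s\d+)" max_match=0
| eval pair=mvzip(MESSAGE, COUNT, "|")
| mvexpand pair
| eval MESSAGE=mvindex(split(pair,"|"),0), COUNT=mvindex(split(pair,"|"),1)
| table MESSAGE COUNT
```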
Hello, I am trying to write a search query to fetch data from different sourcetypes; the common factor in all sourcetypes is _time. I'm facing two issues.

1. With the search below, the value of the field CPU is constant all the time, but the actual value varies.

index=indexname host=hostname sourcetype=meminfo earliest=-1d@d latest=@d
| table memUsedPct
| join type=inner _time
    [search index=indexname host=hostname sourcetype=cpuinfo
    | multikv
    | search CPU=all
    | eval CPU=100-pctIdle
    | table CPU]

2. How can I show memUsedPct and CPU in a timechart?

Regards,
Karthikeyan
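One alternative to join (which can mis-align when _time values don't match exactly) is to search both sourcetypes at once and let timechart do the time alignment. A sketch, under the assumption that pctIdle comes from the cpuinfo events after multikv, which also answers the timechart question:

```spl
index=indexname host=hostname (sourcetype=meminfo OR sourcetype=cpuinfo) earliest=-1d@d latest=@d
| multikv
| eval CPU=if(sourcetype=="cpuinfo" AND CPU=="all", 100-pctIdle, null())
| timechart span=5m avg(memUsedPct) as memUsedPct avg(CPU) as CPU
```

The span value is arbitrary here; pick one that matches how often meminfo and cpuinfo are collected.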
When using Splunk Observability with the Boutique EKS website, I set up a graph to show data from metric 'spans.duration.ns.p90', sf_service 'checkoutservice', and sf_operation '/grpc.health.v1.Health/Check'.  With the time range set at 1 hour, I can observe a particular peak value at 2.2 million.  If I change the time range to 2 hours, this same peak value becomes 4.4 million.  Why is this data changing?
For those of you who have both an indexer cluster and a search head cluster, I assume you have both a "deployment server", which deploys apps to the indexers, and a "deployer", which deploys apps to the search head cluster. What types of apps do you deploy via each of these two? What is the best practice?
Query 1:

|mstats count(_value) as count1 WHERE metric_name="*metric1*" AND metric_type=c AND status="success" by metric_name,env,status
| where count1>0

Query 2:

|mstats count(_value) as count2 WHERE metric_name="*metric2*" AND metric_type=c AND status="success" by metric_name,env,status
| where count2=0

These queries work fine individually. I need to combine them and show results only if count1>0 and count2=0.
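One way to combine them, sketched with the field names from the queries above: append the second mstats result to the first, roll both up by the shared split fields (metric_name differs between the two, so group on env and status), and filter at the end.

```spl
| mstats count(_value) as count1 WHERE metric_name="*metric1*" AND metric_type=c AND status="success" by metric_name, env, status
| append
    [| mstats count(_value) as count2 WHERE metric_name="*metric2*" AND metric_type=c AND status="success" by metric_name, env, status]
| stats sum(count1) as count1 sum(count2) as count2 by env, status
| fillnull value=0 count1 count2
| where count1>0 AND count2=0
```

The fillnull matters: if metric2 has no data at all, count2 would otherwise be null rather than 0 and the final where clause would drop the row.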
Hi, I would like to create a histogram visualization plotting service call response times between 0 and 15 seconds for each svcName within my search. Please help with tips/advice for creating this. Thank you!
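A minimal sketch of one way to do this, assuming a field such as response_time holding seconds (the field name is an assumption; substitute your own): bucket the response times into 1-second bins and chart counts per bin, split by svcName.

```spl
... base search ...
| where response_time>=0 AND response_time<=15
| bin span=1 response_time as bucket
| chart count over bucket by svcName
```

Rendered as a column chart, this gives one histogram bar group per 1-second bucket with one series per svcName; adjust the span for coarser or finer bins.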
Hi Team,

I would like to know how to differentiate the business transactions captured automatically by AppDynamics from the custom business transactions. In our application there are around 115 business transactions under the Business Transactions section. The customer wants to know which ones are custom business transactions. Kindly help me.

Thanks & Regards,
Srinivas
Hi Community,

I have data as follows:

Customer | ErrorCode | Result
Abc      | 1111      | 2
Abc      | 1111      | 3
Abc      | 1222      | 4
Abc      | Total     | 4
Abc      | Total     | 5

I want to combine the Total rows into a single Total row, showing the Result column as Total: 9. My code now:

| stats count as Result by Customer, ErrorCode
| eval PercentOfTotal=100
| append
    [search index=sourcetype= abc: source= */ABC/* ErrorCode!=0
    | stats count as Result by Customer
    | eval ErrorCode="Total", PercentOfTotal=100]
| lookup xyz ErrorCode OUTPUT Description
| lookup pqr Customer OUTPUT Customer_Name
| eval Customer_Name=coalesce(Customer_Name,Customer)
| eval Error=if(ErrorCode!="Total", ErrorCode+" ("+coalesce(Description,"Description Missing - Update xyz")+")", ErrorCode)
| fields CustomerName, Error, Result
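One common fix for duplicate Total rows, sketched with the field names from the search above: add a roll-up stats right after the append and before the lookups, so rows sharing the same Customer and ErrorCode (including the two Total rows) are summed into one.

```spl
| append
    [search ... | stats count as Result by Customer | eval ErrorCode="Total"]
| stats sum(Result) as Result by Customer, ErrorCode
| lookup xyz ErrorCode OUTPUT Description
```

With the sample data this collapses the two Total rows (4 and 5) into a single row with Result=9; note that any fields not listed in the by clause (such as PercentOfTotal) would need to be re-created or carried through with values(), since stats drops them.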
Hi, I am trying to use the HEC and whenever I run the test cURL examples, I am getting a Connection reset by peer   curl -u "x:7be6128d-61b5-493d-97f9-95b2b66251d2" "http://127.0.0.1:8088/servi... See more...
Hi, I am trying to use the HEC and whenever I run the test cURL examples, I am getting a Connection reset by peer   curl -u "x:7be6128d-61b5-493d-97f9-95b2b66251d2" "http://127.0.0.1:8088/services/collector/event"  -d '{"sourcetype": "mysourcetype", "event": "Hello, world!"}'   curl: (56) Recv failure: Connection reset by peer   Can someone advise me?   Many thanks, Keir
Hi all,

If you are using the "Technical Addon for the Metricator application for Nmon" to monitor your indexers in an indexer cluster, the TA does not work after updating to Splunk 9.0. This is due to a name change in the Splunk path. The fix is to update all the scripts in the TA's bin folder:

# Discover TA-metricator-for-nmon path
Change "$SPLUNK_HOME/etc/slave-apps/TA-metricator-for-nmon"
to "$SPLUNK_HOME/etc/peer-apps/TA-metricator-for-nmon"
What benefits would there be to disabling bundle replication to indexers? What is the downside of disabling it?

When searching index=_internal we see a lot of warnings that replication took too long, and several times a day a "timeout" event when reading from a peer. We receive a lot of these events, yet no customers are reporting issues with failed lookups or failed searches.

We are not using cascading bundle replication, but maybe that would stop the error events, since it does not retry until every peer has got its copy.

Thanks in advance,
Kai
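For reference, cascading bundle replication is enabled on the search head side; a sketch of the relevant stanza, to be validated against the distsearch.conf spec for your Splunk version before use:

```spl
# distsearch.conf on the search head (sketch; check your version's spec file)
[replicationSettings]
replicationPolicy = cascading
```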
I used the Splunk Enterprise Trial version to test out the APIs. When I try to get a particular saved search by entering its name, I get the correct result. But when I try to get all the saved searches in the instance using

https://localhost:8089/servicesNS/<user>/-/saved/searches/

I could not get the records. Can you please help me?
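As a cross-check from within Splunk itself, the same endpoint can be queried with the rest search command, wildcarding both the user and app contexts. A sketch — note the REST API pages results (30 entries by default), so a count parameter may be what is missing from the URL:

```spl
| rest /servicesNS/-/-/saved/searches count=0
| table title, eai:acl.app, eai:acl.owner
```

If this lists everything but the raw HTTPS call does not, comparing the user context and pagination parameters of the two requests is a reasonable next step.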
Each event contains 1-to-many transaction names with associated metrics, as in the example below:

2022-08-03T08:47:49.4554569Z TransNames: DavidTrans_2 DavidTrans_1 Total DavidTrans_3
2022-08-03T08:47:49.4633642Z Name: DavidTrans_2
2022-08-03T08:47:49.4995979Z DavidTrans_2 - TransactionsPerSec: 0.92
2022-08-03T08:47:49.5180222Z Name: DavidTrans_1
2022-08-03T08:47:49.5245825Z DavidTrans_1 - TransactionsPerSec: 0.96
2022-08-03T08:47:49.5339575Z Name: DavidTrans_3
2022-08-03T08:47:49.5405933Z DavidTrans_3 - TransactionsPerSec: 0.97

From this event I want to create a multi-series timechart, where X is _time (the event time), Y is the value of TransactionsPerSec, and each graph line represents one of the transactions (DavidTrans_1, DavidTrans_2, DavidTrans_3). This is just an example; the transaction names and the number of transactions will differ. I have tried many different approaches, but will paste what I just tried, which does not display any metrics and where the format is completely screwed up:

SEARCH XXXXXXXXX
| rex field=_raw max_match=0 " (?<TRANSNAME>.*) - TransactionsPerSec: (?<TRANSPERSEC>.*).*"
| timechart list(TRANSNAME),list(TRANSPERSEC) by TRANSNAME

I can add that using the stats command I can at least get the values in a nice table:

SEARCH XXXXXXXXX
| rex field=_raw max_match=0 " (?<TRANSNAME>.*) - TransactionsPerSec: (?<TRANSPERSEC>.*).*"
| stats list(TRANSNAME),list(TRANSPERSEC)

I feel I have done similar things before, but for some reason getting the values displayed in a timechart was tricky this time.
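Since rex with max_match=0 leaves TRANSNAME and TRANSPERSEC as parallel multivalue fields on one event, timechart cannot split them directly; a common pattern is to zip and expand the pairs into one row per transaction first, then timechart. A sketch along those lines:

```spl
SEARCH XXXXXXXXX
| rex field=_raw max_match=0 "(?<TRANSNAME>\S+) - TransactionsPerSec: (?<TRANSPERSEC>[\d.]+)"
| eval pair=mvzip(TRANSNAME, TRANSPERSEC, "|")
| mvexpand pair
| eval TRANSNAME=mvindex(split(pair,"|"),0), TRANSPERSEC=tonumber(mvindex(split(pair,"|"),1))
| timechart span=1m avg(TRANSPERSEC) by TRANSNAME
```

After mvexpand, each row carries exactly one transaction name and one numeric value, which is the shape timechart's by clause expects; the span is arbitrary and should match the event cadence.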
Transaction_Log__c:
{"message":"Entering doPost method","level":"INFO","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.880Z"}
{"message":"Source System is : null","level":"DEBUG","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.886Z"}
{"message":"Request Body : {\"data\":{\"dealerCode\":\"FARC\",\"dealerType\":\"Premise\"}}","level":"DEBUG","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.888Z"}
{"message":"Request Type/Parameters are : {}","level":"DEBUG","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.889Z"}
{"message":"Deserializing the reqBody","level":"INFO","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.890Z"}
{"message":"Entering getSuccessResponse method and parameter are -->TLS_Store__c:{Id=a7O5L000000000zUAA, Name=TELSTRA SHOP BONDI JUNCTION, TLS_DeliveryAddress__c=a2r5L000000YybMQAS, TLS_CompanyName__c=TRS SHOPS, TLS_PremiseCode__c=FARC, TLS_DealerChannel__c=TSN, TLS_DealerStatus__c=Active, TLS_HROrgUnitCode__c=90000561, TLS_DealerABN__c=33051775556, TLS_DealerACN__c=51775556, TLS_DealerEmail__c=bondijunction@team.telstra.com, TLS_DealerPhone__c=1800 723 917, TLS_DealerType__c=Premise, TLS_DealerParent__c=a7O5L000000075BUAQ, TLS_PhysicalAddress__c=a2r5L000000YybMQAS}","level":"INFO","loggerName":"StoreManagementAPI_ResponseMessages","timestamp":"2022-08-03T11:45:25.928Z"}
{"message":"Exiting getSuccessResponse method","level":"INFO","loggerName":"StoreManagementAPI_ResponseMessages","timestamp":"2022-08-03T11:45:25.930Z"}
{"message":"Exiting Post method","level":"INFO","loggerName":"StoreManagementAPI","timestamp":"2022-08-03T11:45:25.931Z"}
Hi, I am trying to visualize total execution time for each process. Is there a way to display the results of each process in a single column, like in the example below?    
In an online example that lets you export a Splunk result, I found the following code:

<a class="btn btn-primary" role="button" href="/api/search/jobs/$export_sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=$filename_token$&amp;outputMode=csv">Download CSV</a>

This does almost exactly what I want, so I tried to find more information about what is happening. I see some parameters there and I want to understand them:

isDownload=true
timeFormat=%25FT%25T.%25Q%25%3Az
maxLines=0
count=0
filename=$filename_token$
outputMode=csv

I think the fields are almost self-explanatory, but I would like to read the official documentation, and I would also like to know what other parameters I can provide. When looking for the documentation I only found: Search endpoint descriptions - Splunk Documentation. But this does not describe the parameters passed in the example. Where can I find an explanation of the parameters used?