All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I am new to Splunk, I have no idea how to do this, and I am asking for your help. My question: can we force a query to run first? I would like to launch the query

|rest /servicesXY/-/-/saved/searches timeout=0

before the rest. Thank you very much for your time and help.
Hi, after installing the AppDynamics agent API in MAUI on Visual Studio 2022 Preview, the project does not compile, although it does in Xamarin.Forms. The error it gives me is:

Error AMM0000 Attribute application@appComponentFactory value=(androidx.core.app.CoreComponentFactory) from AndroidManifest.xml:24:18-86 is also present at AndroidManifest.xml:22:18-91 value=(android.support.v4.app.CoreComponentFactory). Suggestion: add 'tools:replace="android:appComponentFactory"' to element at AndroidManifest.xml:12:3-29:17 to override.

I already tried putting tools:replace="android:appComponentFactory" in the Android manifest, and also added xmlns:tools="schemas.android.com/tools" so that tools would work. But nothing changed; the same error appeared. Thanks.
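For reference, a sketch of where those two pieces usually go (element names from the Android manifest-merger error above; the package name and the `androidx` value here are assumptions, and note the namespace declaration needs the full `http://schemas.android.com/tools` URI or the `tools:` attributes are not recognized):

```xml
<!-- AndroidManifest.xml: tools namespace on the root element,
     tools:replace on the <application> element named in the error -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          xmlns:tools="http://schemas.android.com/tools"
          package="com.example.app">
    <application
        android:appComponentFactory="androidx.core.app.CoreComponentFactory"
        tools:replace="android:appComponentFactory">
        <!-- activities etc. -->
    </application>
</manifest>
```

If the namespace URI was pasted without the `http://` prefix, that alone would explain the fix having no effect.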
Hi, I need to report on when a notable alert was changed from the default "Unassigned" status to "Acknowledged", and from "Acknowledged" to "Resolved", along with the time it took between each status.

Basically, we are trying to create a dashboard of all alerts whose SLA was missed. We have a 10-minute SLA for a notable alert to be picked up, meaning an analyst should change its default "Unassigned" status to "Acknowledged". Likewise, there is a 30-minute SLA to further change it from "Acknowledged" to "Resolved".

Running the following query, Splunk shows the _time value for each alert when it was acknowledged and when it was resolved, but it does NOT show when the alert was triggered/generated, so I have no starting point to compare against.

| `incident_review`
| table _time rule_id rule_name owner reviewer status_label
| where _time > relative_time(now(),"-1d@d")
| eval Status_Time=strftime(_time,"%Y-%m-%d %H:%M:%S")

Output:

_time                  rule_id  rule_name  owner  reviewer  status_label
07 July 2022 08:00:00  xxxxx    AWS001_xx  John   John      Acknowledged
07 July 2022 08:10:00  xxxxx    AWS001_xx  John   John      Resolved
07 July 2022 08:01:00  yyyyy    AWS002_xx  Jerry  Jerry     Acknowledged

1) How can I compose a query to show me a list of all alerts (rule_name) which were acknowledged more than 10 minutes late and resolved more than 30 minutes late? I am assuming this will involve some eval logic to calculate the difference between acknowledged_time and triggered_time and check whether the difference is > 10 minutes: if it is, then eval SLA_Status = "breached", else SLA_Status = "met". Likewise for resolved_time. I am assuming a lot of you ES folks must be doing this kind of SLA metric tracking one way or another. Kindly assist. Thanks in advance.
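The comparison logic described in the question can be sketched outside SPL. This is a minimal Python sketch, assuming one timestamp per status change per rule_id; the "Triggered" row and all timestamps are made up for illustration, since the question notes the trigger time still has to be obtained from somewhere:

```python
from datetime import datetime

# Illustrative status-change rows in the shape of the `incident_review` output;
# the Triggered row and all timestamps are invented for the example.
rows = [
    ("xxxxx", "Triggered",    "2022-07-07 07:45:00"),
    ("xxxxx", "Acknowledged", "2022-07-07 08:00:00"),
    ("xxxxx", "Resolved",     "2022-07-07 08:10:00"),
]

def ts(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

# Pivot into one {status: time} dict per rule_id
by_rule = {}
for rule_id, status, t in rows:
    by_rule.setdefault(rule_id, {})[status] = ts(t)

sla = {}
for rule_id, s in by_rule.items():
    ack_min = (s["Acknowledged"] - s["Triggered"]).total_seconds() / 60
    res_min = (s["Resolved"] - s["Acknowledged"]).total_seconds() / 60
    sla[rule_id] = ("breached" if ack_min > 10 else "met",
                    "breached" if res_min > 30 else "met")

print(sla)  # acknowledged 15 min after trigger -> ack SLA breached, resolve SLA met
```

In SPL the same shape would likely come from pivoting status times per rule_id (e.g. with stats earliest/latest by rule_id and status_label) before the eval comparisons.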
We have notable events for when a user is created on multiple devices. Most of them are expected, for when devices are imaged. I want to use erex to create a suppression for similar accounts. They typically have the same beginning, followed by 2 digits; examples would be ituser23, ituser24, ituser25. I am using the search below for testing:

index=notable source="Endpoint - Anomalous User Account Creation - Rule"
| erex user examples="ituser23, ituser24, ituser25"

I am still getting user accounts that are unrelated, such as phone or tablet. When I look at the recommended regex, it does not seem granular enough.
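Since erex only suggests a regex from examples, an explicit anchored pattern is usually tighter. The underlying regex can be tested in plain Python (the "ituser" prefix is taken from the question; substitute your own prefixes):

```python
import re

# Anchored pattern: literal prefix followed by exactly two digits,
# nothing before or after.
pattern = re.compile(r"^ituser\d{2}$")

users = ["ituser23", "ituser24", "phone01", "tablet7", "ituser251"]
matches = [u for u in users if pattern.match(u)]
print(matches)  # only ituser23 and ituser24 survive
```

In SPL the same filter would be a sketch along the lines of | regex user="^ituser\d{2}$" (or the negated form for a suppression), rather than relying on erex's generated pattern.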
First, it isn't clear to me what units the various timeseries in a metric are returned in; it feels pretty arbitrary to me. I was wondering if perhaps the ns portion of a metric stood for nanoseconds? That would at least make it a bit clearer. But I suppose it could also stand for namespace.
Can't I just search for an IP within Splunk with no syntax, just 192.15.10.1? If there is any data, or this IP is simply being accessed by one of our users, then I should be able to see it. Are there better ways to find it? Overall, I want to see if two specific IPs are connecting to Splunk; if so, I will then broaden the search.
Hello everyone, I have a field named SQL_NAME with values as below (I'm writing two of them):

#1(8):EMEMEB #2(14):8/3/2022 0:0:0 #3(13):Ememe Behe #4(3):409 #5(0):
#1(6):TSUDE #2(14):8/1/2022 0:0:0 #3(10):Tugu Sude #4(3):411 #5(0):

I want to extract two fields named user and name, whose values are the bold strings above, using a regular expression. Any idea? Thank you in advance.
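Assuming "user" is the token after "#1(n):" and "name" is the token after "#3(n):" (my reading of the lost bold formatting), the extraction regex can be verified in Python first:

```python
import re

samples = [
    "#1(8):EMEMEB #2(14):8/3/2022 0:0:0 #3(13):Ememe Behe #4(3):409 #5(0):",
    "#1(6):TSUDE #2(14):8/1/2022 0:0:0 #3(10):Tugu Sude #4(3):411 #5(0):",
]

# user: everything after "#1(n):" up to the next " #" marker;
# name: same for "#3(n):" (non-greedy so it stops at the first marker)
pattern = re.compile(r"#1\(\d+\):(?P<user>.*?)\s+#.*?#3\(\d+\):(?P<name>.*?)\s+#")

pairs = [pattern.search(s).group("user", "name") for s in samples]
print(pairs)
```

The same pattern should drop into SPL as a rex against SQL_NAME, e.g. | rex field=SQL_NAME "#1\(\d+\):(?<user>.*?)\s+#.*?#3\(\d+\):(?<name>.*?)\s+#" (a sketch; adjust if the real values can contain "#").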
Hello, I have the chart created as an area chart, shown in the attached screenshot. The tooltip being shown is the value of the column at that point in time. I would like to edit this tooltip and add some more info to it, taken from a third column, and I don't want to show the third column in the chart.

Output from the query:

_time                    Evictions  Hits
2022-07-26 10:03:30.864  0          0
2022-07-26 10:03:32.021  0          0
2022-07-26 10:03:33.184  0          0
2022-07-26 10:03:34.460  12803      131779
2022-07-26 10:03:35.812  24627      251059
2022-07-26 10:03:37.330  40209      404141
2022-07-26 10:03:38.979  57844      576308

I have changed the query to have the output as:

_time                    Evictions  Hits    Reads
2022-07-26 10:03:30.864  0          0       0
2022-07-26 10:03:32.021  0          0       0
2022-07-26 10:03:33.184  0          0       0
2022-07-26 10:03:34.460  12803      131779  131779
2022-07-26 10:03:35.812  24627      251059  251059
2022-07-26 10:03:37.330  40209      404141  404141
2022-07-26 10:03:38.979  57844      576308  576308
2022-07-26 10:03:40.523  73288      727097  727097
2022-07-26 10:03:41.851  87340      859045  859045

I want to have the Reads column added to the tooltip, so it just shows under the tooltip.
How do I query when quota/spike arrest is close to being exceeded, e.g. at 80% of the configured quota as set by spike arrest? The quota limit is 600.
Hi, we've installed TA-Akamai_SIEM on both a HF and SH. The API connections appear to be coming in fine: we get JSON data, and on the SH I can see the dashboards populated correctly. However, if I search the relevant index, data still appears in JSON format. Reading the notes for this app, I believe the scripting should kick in and convert the JSON to a CIM-compliant format, but that doesn't seem to be happening.

I do have (thousands of) errors appearing relating to Java, but it seems to be the same error that pops up in other people's reports and doesn't give much insight:

08-04-2022 12:18:09.203 +0100 INFO ExecProcessor [3239918 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...Complete
08-04-2022 12:18:09.229 +0100 INFO ExecProcessor [3239918 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, end streamEvents
08-04-2022 12:18:09.229 +0100 ERROR ExecProcessor [3239918 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1

Splunk is running on 9.0.0, and Java on the HF appears to be OK; java -version returns:

java version "1.8.0_333"
Java(TM) SE Runtime Environment (build 1.8.0_333-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.333-b02, mixed mode)

Has anybody seen similar problems to the above? Thanks
Hi Team, I need help preparing an availability calculator. The graph below is the requirement. Current output from the code below:

DESCRIPTION  downtime  Time
QIT-LAG      00:00:06  2022-07-31
QIT-LAG      00:00:09  2022-07-29
QIT-LAG      00:00:08  2022-07-29
QIT-LAG      00:00:10  2022-07-29

Current manual steps:
1. I am extracting the above table into Excel,
2. converting all durations to seconds,
3. grouping them day-wise,
4. preparing a percentage loss out of 86400 (24*60*60) for each day, which is the graph.

CODE:

index=opennms
| search DESCRIPTION="QIT-LAG"
| transaction nodelabel startswith=eval(Status="DOWN") endswith=eval(Status="UP") keepevicted=true
| eval downtime=if(closed_txn=1,duration,null)
| eval downtime=tostring(downtime, "duration")
| fillnull value="" downtime
| eval Status=if(closed_txn=1,"UP","DOWN")
| rex field=downtime "(?P<downtime>[^.]+)"
| rename _time as Time
| fieldformat Time=strftime(Time,"%Y-%m-%d")
| table DESCRIPTION, downtime, Time

Challenge: how to convert the current downtime into seconds, sum it on a daily basis, and prepare a percentage-based graph. Thanks in advance for guidance and help.
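The manual steps 2-4 above reduce to a small calculation, sketched here in Python with the table's own rows (in SPL the equivalent would presumably be an eval that splits the HH:MM:SS string, then stats sum by day, then a percentage eval):

```python
from collections import defaultdict

# (DESCRIPTION, downtime "HH:MM:SS", day) rows from the table in the question
rows = [
    ("QIT-LAG", "00:00:06", "2022-07-31"),
    ("QIT-LAG", "00:00:09", "2022-07-29"),
    ("QIT-LAG", "00:00:08", "2022-07-29"),
    ("QIT-LAG", "00:00:10", "2022-07-29"),
]

def to_seconds(hms):
    """Convert an HH:MM:SS duration string to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

# Step 2+3: seconds, summed per day
down_per_day = defaultdict(int)
for _, downtime, day in rows:
    down_per_day[day] += to_seconds(downtime)

# Step 4: availability as a percentage of 86400 seconds per day
for day, secs in sorted(down_per_day.items()):
    availability = 100 * (86400 - secs) / 86400
    print(day, secs, round(availability, 4))
```

2022-07-29 sums to 27 seconds of downtime, 2022-07-31 to 6.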
Hi folks, I have a host that is sending different logs to Splunk. This host sends various logs successfully, except for the syslog-ng logs. Here is an example of the inputs config (there are 3 inputs of this kind not being received by Splunk):

[monitor:///store/data/log/cisco_ise]
disabled = false
host = xxxxxxxxxx
index = syslog
sourcetype = cisco:ise

The inputs appear when using the command 'splunk list monitor', so it doesn't seem to be a permissions issue. Other logs are being successfully ingested from this host, and syslog-ng is working as expected: it is receiving and storing logs on the disk. Does anyone have an idea of steps I can follow to troubleshoot this? Thanks in advance.
I have found these two endpoints related to saved searches:

https://<host>:<mPort>/services/saved/searches
This provides the list of all the saved searches in the instance.

https://<host>:<mPort>/services/saved/searches/{name}
This provides the search configuration and the SPL query used to create the particular saved search.

I would like to know if there is any particular API endpoint to get the data within the created saved search.
We have separate data with respect to DATE, listed as shown in the table below, and we need to create a separate graph for each date with respect to the M1, M2, etc. values. This is possible with trellis, but the trellis option is not available when downloading to PDF, hence we have to segregate the data by date and create a separate column graph for each date.

NUMBER  DATE        M1       M2       M3       M4       M5       M6
10      31-07-2022  *******  *******  *******  *******  *******  *******
10      31-07-2022  *******  *******  *******  *******  *******  *******
10      31-07-2022  *******  *******  *******  *******  *******  *******
10      31-07-2022  *******  *******  *******  *******  *******  *******
10      31-07-2022  *******  *******  *******  *******  *******  *******
10      24-07-2022  *******  *******  *******  *******  *******  *******
10      24-07-2022  *******  *******  *******  *******  *******  *******
10      24-07-2022  *******  *******  *******  *******  *******  *******
10      24-07-2022  *******  *******  *******  *******  *******  *******
10      24-07-2022  *******  *******  *******  *******  *******  *******
10      17-07-2022  *******  *******  *******  *******  *******  *******
10      17-07-2022  *******  *******  *******  *******  *******  *******
10      17-07-2022  *******  *******  *******  *******  *******  *******
10      17-07-2022  *******  *******  *******  *******  *******  *******
10      17-07-2022  *******  *******  *******  *******  *******  *******
Hi Team, I have a JSON file as below:

[{"entityId":null,"entityType":"Account.AccountRequest","accessedByUser":"jinghui@bullish.treasurygo.com","milestone":"Approval","comment":"Bank Account Manager approved this request. Comments: ","commentType":"MilestoneApproval","when":"2022-07-26T06:10:43.91Z","id":30},{"entityId":null,"entityType":"Account.AccountRequest","accessedByUser":"jinghui@bullish.treasurygo.com","milestone":"Approval","comment":"Bank Account Manager approved this request. Comments: ","commentType":"MilestoneApproval","when":"2022-07-26T06:10:43.91Z","id":30},{"entityId":null,"entityType":"Account.AccountRequest","accessedByUser":"jinghui@bullish.treasurygo.com","milestone":"A task was completed","comment":"Prepare SAP Config Docs","commentType":"MilestoneGeneric","when":"2022-07-26T06:10:43.907Z","id":29},{"entityId":null,"entityType":"Account.AccountRequest","accessedByUser":"jinghui@bullish.treasurygo.com","milestone":"A task was completed","comment":"Prepare SAP Config Docs","commentType":"MilestoneGeneric","when":"2022-07-26T06:10:43.907Z","id":29}]

I am using this pattern while testing and reviewing the events:

(\[|,|\]){

This breaks everything fine except the last line, which still has the closing ]. How do I get rid of the ] at the end of the JSON array? Kindly guide me. Many thanks in anticipation.
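The splitting logic can be illustrated outside Splunk: break the array between objects and strip the surrounding brackets so each piece is valid JSON on its own. A minimal Python sketch with a shortened, made-up array of the same shape:

```python
import re
import json

# Shortened stand-in for the JSON array in the question
raw = ('[{"id":30,"milestone":"Approval"},'
       '{"id":29,"milestone":"A task was completed"}]')

# Strip the outer [ and ], then split only between "}," and "{",
# so each fragment is a complete, parseable object.
body = raw.strip().lstrip('[').rstrip(']')
events = re.split(r'(?<=\})\s*,\s*(?=\{)', body)
parsed = [json.loads(e) for e in events]
print([p["id"] for p in parsed])  # [30, 29]
```

In props.conf, the analogous move would be a SEDCMD to delete the trailing bracket before line breaking, something like SEDCMD-strip_bracket = s/\]$// (a sketch, not tested against this sourcetype).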
I want to track multiple ORA numbers. We receive logs in different formats, as below; can you help me write a query for this?

Logs/Events:

2022-08-04T06:55:54.009110+01:00 opiodr aborting process unknown ospid (8696) as a result of ORA-609
2022-08-04T06:51:54.137474+01:00 WARNING: inbound connection timed out (ORA-3136)
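Both formats share the "ORA-<number>" token, so a single extraction pattern covers them. Tested in Python (the pattern tolerates optional whitespace around the dash, in case the raw logs contain it):

```python
import re

logs = [
    "2022-08-04T06:55:54.009110+01:00 opiodr aborting process unknown "
    "ospid (8696) as a result of ORA-609",
    "2022-08-04T06:51:54.137474+01:00 WARNING: inbound connection "
    "timed out (ORA-3136)",
]

# Capture the numeric code after "ORA-", with or without spaces round the dash
ora = re.compile(r"ORA\s*-\s*(\d+)")
codes = [ora.search(line).group(1) for line in logs]
print(codes)  # ['609', '3136']
```

The SPL equivalent would be a sketch along the lines of | rex "ORA\s*-\s*(?<ora_code>\d+)" | stats count by ora_code to track each number.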
Hi team, I wonder if someone can help me with the query below. I have to combine my two searches with join. With the first search I get the assignment group, and with the second search I get the email of those assignment groups to send an alert. I have common values between the two sourcetypes, but the field name is different: in the first search the field is called dv_name and in the second it is called name. Therefore I create a name variable before using join. However, my Email field still comes back blank.

Search:

index=production sourcetype=call
| eval name=dv_name
| join name type=left
    [ index=production sourcetype=mail earliest="04/30/2022:20:00:00" latest=now()
    | dedup name
    | stats values (dv_email) values (name) by name]
| eval Email=if(isnull(dv_email), " ", dv_email)
| table dv_name Email
Hello, I'm starting to work on a new integration for Splunk Enterprise Security. The docs mention that only devs with "entitlements" can test an ES integration, but I couldn't find any other mention of how to get them apart from that link. What is the process to obtain them? Thanks.
Hi, how can I make a stacked column chart? Currently the purple area displays how long it took for all processes combined to execute. How could I modify my SPL query so that it displays how long each individual process took to complete in a column chart? (A1, A2, A3 are process names.)

| rex field=PROCESS_NAME ":(?<Process>[^\"]+)"
| eval finish_time_epoch = strftime(strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S"),"%Y-%m-%d %H:%M:%S")
| eval start_time_epoch = strftime(strptime(START_TIME, "%Y-%m-%d %H:%M:%S"),"%Y-%m-%d %H:%M:%S")
| eval duration_s = strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S") - strptime(START_TIME, "%Y-%m-%d %H:%M:%S")
| eval duration_min = round(duration_s / 60, 2)
| chart sum(duration_min) as "time" by G_DT
I have data in JSON format like this:

"Task_no":"5", "Group": "G1", "EXECUTION_DATE":1648081994535, "STATUS":"FAILURE", "DURATION":1951628

I want to produce a table which has Group, Total_Tasks, SUCCESS, and FAILURE as fields. I tried the query like this:

index..... Group=G1
| chart count(Task_No) by STATUS
| eval Total_Tasks = SUCCESS + FAILURE
| table Group Total_Tasks SUCCESS FAILURE

It shows "No results found". But when I run the same query for all the groups, that is:

index.....
| chart count(Task_No) by Group STATUS
| eval Total_Tasks = SUCCESS + FAILURE
| table Group Total_Tasks SUCCESS FAILURE

this query gives the required fields, but I want the table to be created for a particular Group. Can anyone please help me achieve this?
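A likely cause (my reading, not confirmed): with only "by STATUS", chart produces one row per STATUS value rather than SUCCESS/FAILURE columns, so the eval on SUCCESS + FAILURE finds neither field; the "by Group STATUS" form works because STATUS becomes the split-by and yields those columns. The target aggregation itself is simple, sketched here in Python with invented sample events:

```python
from collections import Counter

# Invented task events in the shape of the JSON in the question
events = [
    {"Task_no": "5", "Group": "G1", "STATUS": "FAILURE"},
    {"Task_no": "6", "Group": "G1", "STATUS": "SUCCESS"},
    {"Task_no": "7", "Group": "G1", "STATUS": "SUCCESS"},
]

group = "G1"
counts = Counter(e["STATUS"] for e in events if e["Group"] == group)
row = {
    "Group": group,
    "Total_Tasks": counts["SUCCESS"] + counts["FAILURE"],
    "SUCCESS": counts["SUCCESS"],
    "FAILURE": counts["FAILURE"],
}
print(row)
```

So keeping the "by Group STATUS" form and filtering to the wanted group (either Group=G1 in the base search or a where clause after the chart) should preserve the SUCCESS/FAILURE columns.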