All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thank you! How would I be able to reduce the result to only the row with the earliest time (time_1 field)? Thanks!
Ok, my SPL is getting smaller. But here is what is happening with the current scenario I'm testing. I have 3 jobs. Two ran with no issues; each has a start and a complete log event. The 3rd job has two start events and one complete event, since it failed once, was restarted, and then completed. When I use:

| transaction keeporphans=true host batchName startswith="start" endswith="completed"

I end up with three transactions where closed_txn=1 and one where closed_txn is not defined or 0, for a total of 4 transactions. But what I want to see in the resulting table is all jobs that have run to completion (closed_txn=1) and then any jobs that are currently running (closed_txn=0 or not defined). How would I go about eliminating the 4th transaction from my results when I have another transaction with the same jobName that has closed_txn=1, while not removing it (since it's still running) when I don't have a transaction with closed_txn=1?

My events:

aJobName1 START
aJobName2 START
aJobName1 COMPLETE
aJobName3 START
aJobName2 START
aJobName3 COMPLETE
aJobName2 COMPLETE
Hello, thank you for your help. Your answer is correct; the output literally put "FILTERED". Sorry if my original post was not clear; I have corrected it. What I meant by "filtered" was completely removed, as shown below:

no  ip        vuln    score  company
1   1.1.1.1   vuln1   9      company A
3   1.1.1.3   vuln3   9      company C
4   1.1.1.4                  company D
5   1.1.1.5   vuln5   7      company E
7   1.1.1.7   vuln7   5      company G
8   1.1.1.8   vuln8   5      company H
9   1.1.1.9   vuln9

I think I figured it out:

index=testindex (vuln=* AND score=* AND company=*) OR (vuln=*) OR NOT (company="")

It's just weird that company=* does not work, and I had to use NOT (company="") to filter out empty values. NOT isnull(company) also doesn't work. Please suggest. Thanks
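One possible explanation for the difference between the two filters, sketched in Python rather than SPL (illustrative only; Splunk's actual null/empty-string semantics depend on how the field was extracted): a wildcard match demands a non-empty value, while rejecting the empty string still lets missing fields through.

```python
# Rough analogues of the two SPL filters (assumptions, not Splunk internals).
rows = [
    {"no": 4, "company": "company D"},   # has a value
    {"no": 6, "company": ""},            # extracted, but empty
    {"no": 9, "company": None},          # field missing entirely
]

def matches_wildcard(v):
    # Rough analogue of company=* : requires a non-empty value.
    return v is not None and v != ""

def not_empty_string(v):
    # Rough analogue of NOT (company="") : only rejects the empty string,
    # so a missing (null) field still passes.
    return v != ""

print([r["no"] for r in rows if matches_wildcard(r["company"])])   # [4]
print([r["no"] for r in rows if not_empty_string(r["company"])])   # [4, 9]
```

The different treatment of the missing field (row 9) is the kind of asymmetry that could explain why the two filters behave differently in practice.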
Hi @michael_vi, Have you reviewed the documentation related to installing and using Unicode fonts for PDF generation? See <https://docs.splunk.com/Documentation/Splunk/9.1.1/Report/GeneratePDFsofyourreportsanddashboards#Enable_usage_of_non-Latin_fonts_in_PDFs>.
Hi @Hami-g, the Splunk Add-on for Cisco ASA provides the recommended knowledge objects for message 302014:

| eval bytes_per_second=bytes/duration

Specifically, the add-on includes a transform for field extractions and a calculated field for duration:

# transforms.conf
[cisco_asa_message_id_302014_302016]
REGEX = -30201[46]:\s*(\S+)\s+(\S+)\s+connection\s+(\d+)\s+for\s+([^:\s]+)\s*:\s*(?:((?:[\d+.]+|[a-fA-F0-9]*:[a-fA-F0-9]*:[a-fA-F0-9:]*))|(\S+))\s*\/\s*(\d{1,5})(?:\s*\(\s*(?:([\S^\\]+)\\)?([\w\-_@\.]+)\s*\))?\s+to\s+([^:\s]+)\s*:\s*(?:((?:[\d+.]+|[a-fA-F0-9]*:[a-fA-F0-9]*:[a-fA-F0-9:]*))|(\S+))\s*\/\s*(\d{1,5})(?:\s*\(\s*(?:([\S^\\]+)\\)?([\w\-_]+)\s*\))?\s+[Dd]uration:?\s*(?:(\d+)[dD])?\s*(\d+)[Hh]?\s*:\s*(\d+)[Mm]?\s*:\s*(\d+)[Ss]?\s+bytes\s+(\d+)\s*(?:(.+?(?=\s+from))\s+from\s+(\S+)|([^\(]+))?\s*(?:\(\s*([^\)\s]+)\s*\))?
FORMAT = action::$1 transport::$2 session_id::$3 src_interface::$4 src_ip::$5 src_host::$6 src_port::$7 src_nt_domain::$8 src_user::$9 dest_interface::$10 dest_ip::$11 dest_host::$12 dest_port::$13 dest_nt_domain::$14 dest_user::$15 duration_day::$16 duration_hour::$17 duration_minute::$18 duration_second::$19 bytes::$20 reason::$21 teardown_initiator::$22 reason::$23 user::$24

# props.conf
[cisco:asa]
# ...
EVAL-duration = ((coalesce(duration_day, 0))*24*60*60) + (duration_hour*60*60) + (duration_minute*60) + (duration_second)
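As an illustrative sketch (Python, not SPL), the EVAL-duration arithmetic above reduces to a days/hours/minutes/seconds conversion, and the throughput eval to a simple division; the zero-duration guard here is an addition of mine, since the teardown log can report duration 0:00:00.

```python
# Sketch of the add-on's EVAL-duration arithmetic: convert the extracted
# d/HH/MM/SS pieces from message 302014 into total seconds.
def asa_duration_seconds(days, hours, minutes, seconds):
    # coalesce(duration_day, 0) -> treat a missing day count as 0
    return (days or 0) * 24 * 60 * 60 + hours * 3600 + minutes * 60 + seconds

def bytes_per_second(total_bytes, duration):
    # Guard against zero-duration connections (my assumption; the SPL
    # eval shown above would produce no value when duration is 0).
    return total_bytes / duration if duration else None

d = asa_duration_seconds(0, 0, 1, 30)   # duration 0:01:30
print(d)                                # 90
print(bytes_per_second(6398, d))
```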
Hi @djoobbani, I find the simplest way to generate multiple events is a combination of makeresults, eval, and mvexpand:

| makeresults
| eval source="abc"
| eval msg="consumed"
| eval time_pairs=split("2023-11-09T21:33:05Z,2023-11-09T21:40:05Z|2023-11-09T21:34:05Z,2023-11-09T21:41:05Z|2023-11-09T21:35:05Z,2023-11-09T21:42:05Z", "|")
| mvexpand time_pairs
| eval time_pairs=split(time_pairs, ",")
| eval time_1=mvindex(time_pairs, 0), time_2=mvindex(time_pairs, 1)
| fields - time_pairs

You can also use streamstats count combined with eval case:

| makeresults count=3
| eval source="abc"
| eval msg="consumed"
| streamstats count
| eval time_1=case(count==1, "2023-11-09T21:33:05Z", count==2, "2023-11-09T21:34:05Z", count==3, "2023-11-09T21:35:05Z")
| eval time_2=case(count==1, "2023-11-09T21:40:05Z", count==2, "2023-11-09T21:41:05Z", count==3, "2023-11-09T21:42:05Z")
| fields - count

These are just two examples. You can be as creative as needed.
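For readers less familiar with the split/mvexpand idiom, the fan-out it performs can be sketched in plain Python (illustrative, not SPL): one delimited string becomes one record per start/end pair.

```python
# The same delimited string used in the SPL example above.
raw = ("2023-11-09T21:33:05Z,2023-11-09T21:40:05Z|"
       "2023-11-09T21:34:05Z,2023-11-09T21:41:05Z|"
       "2023-11-09T21:35:05Z,2023-11-09T21:42:05Z")

events = []
for pair in raw.split("|"):            # analogue of mvexpand over "|"
    time_1, time_2 = pair.split(",")   # analogue of the inner split on ","
    events.append({"source": "abc", "msg": "consumed",
                   "time_1": time_1, "time_2": time_2})

print(len(events))          # 3
print(events[0]["time_1"])  # 2023-11-09T21:33:05Z
```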
I can see logs from a Cisco ASA firewall in Splunk, and we are getting logs when a connection closes. Each log has the total data sent in bytes.

Nov 1 12:19:48 ASA-FW-01 : %ASA-6-302014: Teardown TCP connection 4043630532 for INSIDE-339:192.168.42.10/37308 to OUTSIDE-340:192.168.36.26/8080 duration 0:00:00 bytes 6398 TCP FINs from INSIDE-VLAN339

I am unable to see bytes as a valid field, so I tried to create an extraction ("Extract New Fields") for this:

^(?:[^:\n]*:){8}\d+\s+(?P<BYTES>\w+\s+)

But when I use it in a search, it fails:

index=asa_* src_ip="192.168.42.10"
| rex field=_raw DATA=0 "^(?:[^:\n]*:){8}\d+\s+(?P<BYTES>\w+\s+)"

OBJECTIVE: Calculate server throughput for flows using Cisco ASA logs, i.e. view the network throughput for the flows in Splunk.
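For what it's worth, anchoring on the literal "duration" and "bytes" keywords is usually more robust than counting colons (which also appear inside timestamps and IP:port pairs). The pattern below is my own simplified suggestion, not the add-on's regex; the SPL equivalent would be a rex along the lines of bytes\s+(?<BYTES>\d+).

```python
import re

# The sample teardown log from the question.
log = ("Nov 1 12:19:48 ASA-FW-01 : %ASA-6-302014: Teardown TCP connection "
       "4043630532 for INSIDE-339:192.168.42.10/37308 to "
       "OUTSIDE-340:192.168.36.26/8080 duration 0:00:00 bytes 6398 "
       "TCP FINs from INSIDE-VLAN339")

# Anchor on the duration/bytes keywords rather than counting ":" chars.
m = re.search(r"duration\s+(\d+):(\d+):(\d+)\s+bytes\s+(\d+)", log)
h, mi, s, nbytes = (int(g) for g in m.groups())
print(nbytes)  # 6398
```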
If your initial search includes only start and end events, you can forego transaction and use stats to gather simple status and duration information, assuming a job with only a start event has actually failed and isn't currently in progress:

sourcetype=sjringo_jobstatus
| stats range(_time) as duration by host job_id jobrun_id
| where duration>0 ``` remove jobs with duration=0, i.e. no completion event ```
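The range(_time) aggregation can be sketched in plain Python (illustrative, with made-up events): per group it is simply the maximum minus the minimum timestamp, so a group containing only a lone start event collapses to duration 0 and is dropped by the where clause.

```python
from collections import defaultdict

# (group key, epoch time) pairs standing in for indexed events.
events = [
    (("hostA", 1, 1), 100),                          # lone start, no end
    (("hostA", 1, 2), 160), (("hostA", 1, 2), 220),  # start + end
]

times = defaultdict(list)
for key, t in events:
    times[key].append(t)

# range(_time) per (host, job_id, jobrun_id) group.
durations = {k: max(v) - min(v) for k, v in times.items()}

# where duration>0 : only runs with both a start and an end survive.
completed = {k: d for k, d in durations.items() if d > 0}
print(completed)
```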
No subsearches (append, join, etc.) are required. Here's a set of example events very loosely based on Tidal Enterprise Scheduler, a scheduler I've used in the past:

2023-11-10 20:00:00 job_id=1 jobrun_id=1 jobrun_status=active
2023-11-10 20:01:00 job_id=1 jobrun_id=2 jobrun_status=active
2023-11-10 20:02:00 job_id=1 jobrun_id=2 jobrun_status=normal
2023-11-10 20:03:00 job_id=2 jobrun_id=3 jobrun_status=active
2023-11-10 20:04:00 job_id=2 jobrun_id=3 jobrun_status=normal
2023-11-10 20:05:00 job_id=3 jobrun_id=4 jobrun_status=active
2023-11-10 20:06:00 job_id=3 jobrun_id=5 jobrun_status=active
2023-11-10 20:07:00 job_id=3 jobrun_id=5 jobrun_status=normal
2023-11-10 20:08:00 job_id=4 jobrun_id=6 jobrun_status=active
2023-11-10 20:09:00 job_id=4 jobrun_id=6 jobrun_status=normal
2023-11-10 20:10:00 job_id=5 jobrun_id=7 jobrun_status=active
2023-11-10 20:11:00 job_id=5 jobrun_id=8 jobrun_status=active
2023-11-10 20:12:00 job_id=5 jobrun_id=8 jobrun_status=normal

In this example, job_id is the job definition, jobrun_id is the job instance, and jobrun_status is the state (active = started/running and normal = completed successfully).
We can use the transaction command to uniquely identify the state of each job instance by host, job_id, and jobrun_id:

sourcetype=sjringo_jobstatus
| transaction keepevicted=true host job_id jobrun_id startswith=jobrun_status=active endswith=jobrun_status=normal
| table _time duration closed_txn _raw

Job instances with both a start (open) and end (close) event will have closed_txn=1; job instances with only a start event (an evicted transaction) will have closed_txn=0:

_time                duration  closed_txn  _raw
2023-11-10 15:11:00  60        1           2023-11-10 20:11:00 job_id=5 jobrun_id=8 jobrun_status=active
                                           2023-11-10 20:12:00 job_id=5 jobrun_id=8 jobrun_status=normal
2023-11-10 15:10:00  0         0           2023-11-10 20:10:00 job_id=5 jobrun_id=7 jobrun_status=active
2023-11-10 15:08:00  60        1           2023-11-10 20:08:00 job_id=4 jobrun_id=6 jobrun_status=active
                                           2023-11-10 20:09:00 job_id=4 jobrun_id=6 jobrun_status=normal
2023-11-10 15:06:00  60        1           2023-11-10 20:06:00 job_id=3 jobrun_id=5 jobrun_status=active
                                           2023-11-10 20:07:00 job_id=3 jobrun_id=5 jobrun_status=normal
2023-11-10 15:05:00  0         0           2023-11-10 20:05:00 job_id=3 jobrun_id=4 jobrun_status=active
2023-11-10 15:03:00  60        1           2023-11-10 20:03:00 job_id=2 jobrun_id=3 jobrun_status=active
                                           2023-11-10 20:04:00 job_id=2 jobrun_id=3 jobrun_status=normal
2023-11-10 15:01:00  60        1           2023-11-10 20:01:00 job_id=1 jobrun_id=2 jobrun_status=active
                                           2023-11-10 20:02:00 job_id=1 jobrun_id=2 jobrun_status=normal
2023-11-10 15:00:00  0         0           2023-11-10 20:00:00 job_id=1 jobrun_id=1 jobrun_status=active

You can remove evicted transactions from your output with the default option of keepevicted=false.
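The open/closed bookkeeping that transaction performs here can be sketched in Python (illustrative data, not Splunk internals): each "active" event opens a run keyed by (job_id, jobrun_id), and a matching "normal" event closes it; runs that never see "normal" stay open, like closed_txn=0.

```python
# (job_id, jobrun_id, jobrun_status) tuples from a subset of the
# example events above.
events = [
    (1, 1, "active"),                     # never completes
    (1, 2, "active"), (1, 2, "normal"),
    (3, 4, "active"),                     # never completes
    (3, 5, "active"), (3, 5, "normal"),
]

txns = {}
for job_id, jobrun_id, status in events:
    key = (job_id, jobrun_id)
    if status == "active":
        txns[key] = 0          # run opened, not yet closed (closed_txn=0)
    elif status == "normal":
        txns[key] = 1          # matching end event closes it (closed_txn=1)

open_runs = [k for k, closed in txns.items() if closed == 0]
print(open_runs)  # [(1, 1), (3, 4)]
```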
Hi ITWhisperer,

| tstats latest(_time) as LatestEvent where index=waf_imperva by host

over a 15 min time frame returns:

host          LatestEvent
10.30.168.10  1699663326

Why is the query below not providing this result? (My humble request from a struggling engineer: may I have your WhatsApp?)

| tstats latest(_time) as LatestEvent where index=* by index, host
| eval LatestLog=strftime(LatestEvent,"%a %m/%d/%Y %H:%M:%S")
| eval duration = now() - LatestEvent
| eval timediff = tostring(duration, "duration")
| lookup HostTreshold host
| where duration > threshold
| rename host as "src_host", index as "idx"
| fields - LatestEvent
| search NOT (index="cim_modactions" OR index="risk" OR index="audit_summary" OR index="threat_activity" OR index="endpoint_summary" OR index="summary" OR index="main" OR index="notable" OR index="notable_summary" OR index="mandiant")
This seemed to work by adding

| sort -_time | head 1

i.e.

index=anIndex sourcetype=aSourcetype aJobName AND "START of script"
| sort -_time | head 1
| append [ search index=anIndex sourcetype=aSourcetype aJobName AND "COMPLETED OK" ]

The way I understood sort 0 is that it forces all events, but in this scenario there should not be more than a handful. I think I saw the limit without the 0 was 10,000? So I think this worked, but I need to do some additional testing.

For what I'm working on, I have 3 jobs that I'm trying to track, and each has a different jobName, so I ended up writing the initial query as:

index=anIndex sourcetype=aSourcetype aJobName1 AND "START of script"
| sort -_time | head 1
| append [ search index=anIndex sourcetype=aSourcetype aJobName1 AND "COMPLETED OK" ]
| append [ search index=anIndex sourcetype=aSourcetype aJobName2 AND "START of script" | sort -_time | head 1 | append [ search index=anIndex sourcetype=aSourcetype aJobName2 AND "COMPLETED OK" ]]
| append [ search index=anIndex sourcetype=aSourcetype aJobName3 AND "START of script" | sort -_time | head 1 ]
| append [ search index=anIndex sourcetype=aSourcetype aJobName3 AND "COMPLETED OK" ]

Is there a better way to write this besides using three appends? I can't use one append and then | head 3 for the START query, as one job's multiple STARTs could kick out a START event from one of the other jobs, depending upon when the jobs start...
The keepevicted=true kind of works for one of the scenarios. Under normal conditions (no errors) there will be only one START and one COMPLETE event, except when a job has started and not ended because it is currently running. When there is an exception, the job was restarted and completed with no additional errors; from newest to oldest I have (COMPLETE, START, START), or multiple STARTs depending upon how many errors caused it to abort. When a job is currently running, there is a START only (or multiple STARTs) with no COMPLETE.

Then, to add more to this: I am not looking for just one job but multiple, each with its unique job name, creating the transaction using the job name, and there could be multiple jobs currently running...

Instead of using transaction, I'm thinking I might need to do a join, with the first query looking for events with the jobName and START, and the 2nd query looking for the jobName and COMPLETE. But then wouldn't having multiple STARTs for a jobName cause problems on the join?
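One way to express the suppression rule being described, sketched in Python under the assumption that a completed run should always win over an open run of the same job name:

```python
# Transactions as produced by the earlier transaction search: one dict
# per run, closed_txn=1 for completed runs, 0 for open/evicted ones.
runs = [
    {"job": "aJobName1", "closed_txn": 1},
    {"job": "aJobName2", "closed_txn": 0},  # still running, no completion
    {"job": "aJobName3", "closed_txn": 0},  # failed start, restarted...
    {"job": "aJobName3", "closed_txn": 1},  # ...and completed
]

best = {}
for r in runs:
    prev = best.get(r["job"])
    if prev is None or r["closed_txn"] > prev["closed_txn"]:
        best[r["job"]] = r   # a closed run beats any open run of the same job

print(sorted((j, r["closed_txn"]) for j, r in best.items()))
```

In SPL this would roughly correspond to an eventstats max(closed_txn) per job name, then keeping rows where closed_txn=1 or where no closed run exists for that job (untested; the field names here are from the question, the logic is my reading of it).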
Hi there, I have the following makeresults query:

| makeresults count=3
| eval source="abc"
| eval msg="consumed"
| eval time_1="2023-11-09T21:33:05Z"
| eval time_2="2023-11-09T21:40:05Z"

I want to create three different events where the values for time_1 and time_2 are different for each event. How can I do that? Thanks!
| eval ip=if(isnull(vuln) AND isnull(score) AND isnull(company),"FILTERED",ip)
| eval vuln=if(ip="FILTERED",ip,vuln)
| eval score=if(ip="FILTERED",ip,score)
| eval company=if(ip="FILTERED",ip,company)
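For clarity, the four evals above amount to the following (illustrative Python, not SPL): when vuln, score, and company are all missing, every column of the row is overwritten with the literal "FILTERED".

```python
# None stands in for a null (missing) Splunk field value.
rows = [
    {"ip": "1.1.1.1", "vuln": "vuln1", "score": "9", "company": "company A"},
    {"ip": "1.1.1.2", "vuln": None, "score": None, "company": None},
]

for row in rows:
    # isnull(vuln) AND isnull(score) AND isnull(company)
    if row["vuln"] is None and row["score"] is None and row["company"] is None:
        for field in ("ip", "vuln", "score", "company"):
            row[field] = "FILTERED"

print(rows[1]["ip"])  # FILTERED
print(rows[0]["ip"])  # 1.1.1.1 (untouched)
```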
Hello, how do I filter out a whole row if some fields are empty, but not filter it if one of those fields has a value? I appreciate your help. Thank you.

I want to filter out a row if the vuln, score, and company fields are all empty/NULL (all 3 fields empty: rows 2 and 6 in the table below). If the vuln OR company field has a value (NOT empty), do not filter:

Row 4: vuln=empty, company=company D (NOT empty)
Row 9: vuln=vuln9 (NOT empty), company=empty

If I use the search below, it also filters out rows where vuln OR company is empty (rows 4 and 9):

index=testindex vuln=* AND score=* AND company=*

Current data:

no  ip        vuln    score  company
1   1.1.1.1   vuln1   9      company A
2   1.1.1.2
3   1.1.1.3   vuln3   9      company C
4   1.1.1.4                  company D
5   1.1.1.5   vuln5   7      company E
6   1.1.1.6
7   1.1.1.7   vuln7   5      company G
8   1.1.1.8   vuln8   5      company H
9   1.1.1.9   vuln9
10  1.1.1.10  vuln10  4      company J

Expected result: ***NEED CORRECTION***

no  ip        vuln      score     company
1   1.1.1.1   vuln1     9         company A
2   FILTERED  FILTERED  FILTERED  FILTERED
3   1.1.1.3   vuln3     9         company C
4   1.1.1.4                       company D
5   1.1.1.5   vuln5     7         company E
6   FILTERED  FILTERED  FILTERED  FILTERED
7   1.1.1.7   vuln7     5         company G
8   1.1.1.8   vuln8     5         company H
9   1.1.1.9   vuln9
10  1.1.1.10  vuln10    4         company J

Sorry, this is what I mean by FILTERED:

no  ip        vuln    score  company
1   1.1.1.1   vuln1   9      company A
3   1.1.1.3   vuln3   9      company C
4   1.1.1.4                  company D
5   1.1.1.5   vuln5   7      company E
7   1.1.1.7   vuln7   5      company G
8   1.1.1.8   vuln8   5      company H
9   1.1.1.9   vuln9
Hi @sabari80, performance will vary with the size of the result set, but you can use eventstats and where to remove outlier events based on percentile:

| eventstats p90(pp_user_action_response) as p90_pp_user_action_response
| where pp_user_action_response<=p90_pp_user_action_response

The placement of the commands depends on how you want to calculate the average response time. To calculate the average response time for all requests below or equal to the 90th percentile, try this (untested):

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}.visuallyCompleteTime"
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="******"
| spath output=user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval proper_user_action=substr(user_action_name, 0, 150) ``` did you mean to extract the "proper" user action here? ```
| eventstats p90(pp_user_action_response) as p90_pp_user_action_response by proper_user_action
| where pp_user_action_response<=p90_pp_user_action_response
| stats count as total_calls avg(pp_user_action_response) as avg_pp_user_action_response values(p90_pp_user_action_response) as p90_pp_user_action_response by proper_user_action
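The eventstats/where pattern can be sketched in plain Python against the 1-to-10-second example from the question. Note this uses a simple nearest-rank percentile, one convention among several; Splunk's p90 may interpolate differently, so treat the numbers as illustrative.

```python
def percentile(values, p):
    # Nearest-rank percentile: the value at ceiling-ish rank p% of n.
    ordered = sorted(values)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

responses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # seconds

p90 = percentile(responses, 90)                # analogue of eventstats p90(...)
kept = [v for v in responses if v <= p90]      # analogue of the where clause
print(p90, sum(kept) / len(kept))              # 9 5.0
```

This reproduces the expected result from the question: the 10-second outlier is dropped and the remaining values average to 5 seconds.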
| rex field=logs "\|(?<msg>.+)$"
| stats sum(eval(case(msg=="**Starting**",1,msg=="Shutting down",-1))) as bad count(eval(case(msg=="**Starting**",1))) as starts
| eval good=starts-bad
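The stats logic above can be sketched in Python (illustrative messages): each start contributes +1 and each shutdown -1, so "bad" counts starts that never got a matching shutdown, and "good" is the rest.

```python
msgs = ["**Starting**", "Shutting down", "**Starting**",
        "**Starting**", "Shutting down"]

# Analogue of the case() inside sum(): +1 per start, -1 per shutdown.
score = {"**Starting**": 1, "Shutting down": -1}

bad = sum(score.get(m, 0) for m in msgs)       # unmatched starts
starts = sum(1 for m in msgs if m == "**Starting**")
good = starts - bad                            # starts that shut down cleanly
print(starts, bad, good)  # 3 1 2
```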
Hi @gbam, Splunk provides an eval function, json_array_to_mv, to convert JSON-like array values to multivalued field values. After conversion, you can use the lookup command just as you would for any other field:

| makeresults
| eval id="[\"123\", \"321\", \"456\"]"
| eval id=json_array_to_mv(id, false())
| lookup gbam_lookup.csv id

_time                id   x      y
2023-11-10 16:14:53  123  Data   Data2
                     321  Data   Data2
                     456  Data3  Data3

Index 0 of multivalued field id corresponds to index 0 of multivalued fields x and y, index 1 corresponds to index 1, etc.
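A Python analogue of the conversion and lookup (illustrative only; the in-memory dict is a hypothetical stand-in for gbam_lookup.csv): parse the JSON array string into a list, then look each id up, preserving position just as the multivalued fields do.

```python
import json

# The field value from the makeresults example above.
id_field = '["123", "321", "456"]'

# Analogue of json_array_to_mv(id, false()): JSON array -> list of strings.
ids = json.loads(id_field)

# Hypothetical lookup table standing in for gbam_lookup.csv's x column.
lookup = {"123": "Data", "321": "Data", "456": "Data3"}

x = [lookup[i] for i in ids]   # index i of ids maps to index i of x
print(ids[0], x[0])  # 123 Data
```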
Which ip address? Did you find out if you have any events in that index? What timeframe did you search over?
Looking for help to remove outliers (values greater than the 90th percentile response). For example:

Response Time
--------------------
1 Second
2 Seconds
3 Seconds
4 Seconds
5 Seconds
6 Seconds
7 Seconds
8 Seconds
9 Seconds
10 Seconds

The 90th percentile of the above values is 9 seconds. I want to remove the outlier (10 seconds) and get the average response for the remaining values. My expected avg response (after removing the outlier) = 5 seconds.

My query is:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="******"
| spath output=User_Action_Name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval User_Action_Name=substr(User_Action_Name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_Response" by Proper_User_Action
| stats count(pp_user_action_response) As "Total_Calls",perc90(pp_user_action_response) AS "Perc90_Response" by User_Action_Name Avg_Response
| eval Perc90_Response=round(Perc90_Response,0)/1000
| eval Avg_Response=round(Avg_Response,0)/1000
| table Proper_User_Action,Total_Calls,Perc90_Response