Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi all, a customer asked me if it's possible to have an alias instead of the hostname in the Monitoring Console dashboards. I know that it's easy to do this in normal Splunk searches, but is it possible in the Monitoring Console dashboards (e.g. Summary, Overview, or Instances)? Ciao. Giuseppe
Hi @LearningGuy , the solution for your "Expected Result" is the one hinted at by @ITWhisperer . Instead, to get the last table, simply add (vuln=* OR company=*) to your main search. Ciao. Giuseppe
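For example, the complete search for that last table might look like this (a sketch assuming the index=testindex and column names used elsewhere in this thread):

index=testindex (vuln=* OR company=*)
| table no ip vuln score company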
Hi @Hami-g , your regex isn't correct; please try this: ^(?:[^:\n]*:){8}\d+\s+bytes\s(?P<BYTES>\w+\s+) which you can test at https://regex101.com/r/BGPGr9/1 Ciao. Giuseppe
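To apply it inline, the search could look something like this (a sketch based on the index pattern and src_ip filter from the original question; the capture group is narrowed here to digits only so BYTES has no trailing whitespace):

index=asa_* src_ip="192.168.42.10"
| rex field=_raw "^(?:[^:\n]*:){8}\d+\s+bytes\s+(?P<BYTES>\d+)"
| table _time src_ip BYTES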
> I don't want to send data to remote storage and then bring it back onto the indexer for archiving locally.

And:

> When a bucket rolls from warm to frozen, cache manager will download the warm bucket from the indexes prefix within the S3 bucket to one of the indexers, Splunk will then take the path to the bucket and pass it to the cold-to-frozen script for archiving, which places the archive in the S3 bucket under archives.

Can you elaborate a bit on this? At first you mention you don't want to upload to S3 and then download for archiving locally, but that appears to be how you solved the problem. I see that archives go back to S3, so it's not archiving locally in terms of where the archives get stored, but it is archiving locally in terms of where it happens (as in, you still pay S3 egress fees, which I thought was the main reason for coming up with a workaround/solution). Wouldn't it be better to just leave them there?

> When archiving is successful, cache manager will delete the local and remote copies of the warm bucket.

Your data eventually still ends up on S3, and it would be evicted from cache for good (presumably no one needs to search it, which is why it's being archived), so what's the benefit?

> SmartStore will roll the buckets to frozen by default unless you set frozen time to 0, which will leave all warm buckets in S3. I didn't want that as a long term solution.

I wonder why. I like the creative approach, but I'm curious about the non-technical value (cost, special use case, business rules, something else) you get in return for the possibly additional/unnecessary egress fees.
Can I run the AppDynamics PHP agent on an Alpine Docker image?
Thank you, how would I be able to reduce the result by only displaying the row with the earliest time (time_1 field)? Thanks!
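One way to keep only the earliest row (a sketch assuming time_1 is an ISO 8601 string like in the makeresults examples elsewhere in this thread):

| eval t1=strptime(time_1, "%Y-%m-%dT%H:%M:%SZ")
| sort t1
| head 1
| fields - t1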
Ok, my SPL is getting smaller.. But here is what is happening with the current scenario I'm testing with. I have 3 jobs. Two ran with no issues, each has a start & complete log event (no issues). The 3rd job has two start events and one complete event, since it failed once, was restarted, and then completed. When I use:

| transaction keeporphans=true host batchName startswith="start" endswith="completed"

I end up with three transactions where closed_txn=1 and one where closed_txn is not defined or 0, for a total of 4 transactions. But what I want to see in the resulting table is all jobs that have run to completion (closed_txn=1) and then any jobs that are currently running (closed_txn=0 or not defined). How would I go about eliminating from my results the 4th transaction when I have another transaction with the same jobName that has closed_txn=1, and then not removing it (since it's still running) when I don't have a transaction that has closed_txn=1?

My events:
aJobName1 START
aJobName2 START
aJobName1 COMPLETE
aJobName3 START
aJobName2 START
aJobName3 COMPLETE
aJobName2 COMPLETE
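One way to express that filter after the transaction command (a sketch assuming the host and batchName fields above, and that the job name lives in batchName):

| transaction keeporphans=true host batchName startswith="start" endswith="completed"
| fillnull value=0 closed_txn
| eventstats max(closed_txn) as has_completed by host batchName
| where closed_txn=1 OR has_completed=0

This keeps every completed transaction, and keeps an open (still running) transaction only when no completed transaction exists for the same job.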
Hello, thank you for your help. Your answer is correct, the output literally put "FILTERED". Sorry if my original post was not clear; I corrected my post. What I meant by "filtered" was completely removed, like shown below:

no  ip        vuln   score  company
1   1.1.1.1   vuln1  9      company A
3   1.1.1.3   vuln3  9      company C
4   1.1.1.4                 company D
5   1.1.1.5   vuln5  7      company E
7   1.1.1.7   vuln7  5      company G
8   1.1.1.8   vuln8  5      company H
9   1.1.1.9   vuln9

I think I figured it out:

index=testindex (vuln=* AND score=* AND company=*) OR (vuln=*) OR NOT (company="")

It's just weird that company=* does not work and I had to use NOT (company="") to filter out empty values. NOT isnull(company) also doesn't work. Please suggest. Thanks
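If the empty cells can be either missing fields or empty strings, one way to express the intended filter with eval logic instead of search-time wildcards (a sketch assuming the field names above):

index=testindex
| where (isnotnull(vuln) AND vuln!="") OR (isnotnull(company) AND company!="")

The behavior of field=* versus an explicit empty-string test can differ when a field is extracted with an empty value, so an explicit where clause is often more predictable here.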
Hi @michael_vi, Have you reviewed the documentation related to installing and using Unicode fonts for PDF generation? See <https://docs.splunk.com/Documentation/Splunk/9.1.1/Report/GeneratePDFsofyourreportsanddashboards#Enable_usage_of_non-Latin_fonts_in_PDFs>.
Hi @Hami-g, Splunk Add-on for Cisco ASA provides the recommended knowledge objects for message 302014:

| eval bytes_per_second=bytes/duration

Specifically, the add-on includes a transform for field extractions and a field for duration:

# transforms.conf
[cisco_asa_message_id_302014_302016]
REGEX = -30201[46]:\s*(\S+)\s+(\S+)\s+connection\s+(\d+)\s+for\s+([^:\s]+)\s*:\s*(?:((?:[\d+.]+|[a-fA-F0-9]*:[a-fA-F0-9]*:[a-fA-F0-9:]*))|(\S+))\s*\/\s*(\d{1,5})(?:\s*\(\s*(?:([\S^\\]+)\\)?([\w\-_@\.]+)\s*\))?\s+to\s+([^:\s]+)\s*:\s*(?:((?:[\d+.]+|[a-fA-F0-9]*:[a-fA-F0-9]*:[a-fA-F0-9:]*))|(\S+))\s*\/\s*(\d{1,5})(?:\s*\(\s*(?:([\S^\\]+)\\)?([\w\-_]+)\s*\))?\s+[Dd]uration:?\s*(?:(\d+)[dD])?\s*(\d+)[Hh]?\s*:\s*(\d+)[Mm]?\s*:\s*(\d+)[Ss]?\s+bytes\s+(\d+)\s*(?:(.+?(?=\s+from))\s+from\s+(\S+)|([^\(]+))?\s*(?:\(\s*([^\)\s]+)\s*\))?
FORMAT = action::$1 transport::$2 session_id::$3 src_interface::$4 src_ip::$5 src_host::$6 src_port::$7 src_nt_domain::$8 src_user::$9 dest_interface::$10 dest_ip::$11 dest_host::$12 dest_port::$13 dest_nt_domain::$14 dest_user::$15 duration_day::$16 duration_hour::$17 duration_minute::$18 duration_second::$19 bytes::$20 reason::$21 teardown_initiator::$22 reason::$23 user::$24

# props.conf
[cisco:asa]
# ...
EVAL-duration = ((coalesce(duration_day, 0))*24*60*60) + (duration_hour*60*60) + (duration_minute*60) + (duration_second)
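With those fields in place, throughput per flow could be charted with something like this (a sketch; it assumes the events are searchable as sourcetype=cisco:asa and that the add-on's bytes and duration fields are populated):

sourcetype=cisco:asa "%ASA-6-302014"
| eval bytes_per_second=bytes/duration
| timechart avg(bytes_per_second) by dest_ip

Zero-length flows (duration=0) produce null results for the division, so you may want to filter them first with | where duration>0.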
Hi @djoobbani, I find the simplest way to generate multiple events is a combination of makeresults, eval, and mvexpand:

| makeresults
| eval source="abc"
| eval msg="consumed"
| eval time_pairs=split("2023-11-09T21:33:05Z,2023-11-09T21:40:05Z|2023-11-09T21:34:05Z,2023-11-09T21:41:05Z|2023-11-09T21:35:05Z,2023-11-09T21:42:05Z", "|")
| mvexpand time_pairs
| eval time_pairs=split(time_pairs, ",")
| eval time_1=mvindex(time_pairs, 0), time_2=mvindex(time_pairs, 1)
| fields - time_pairs

You can also use streamstats count combined with eval case:

| makeresults count=3
| eval source="abc"
| eval msg="consumed"
| streamstats count
| eval time_1=case(count==1, "2023-11-09T21:33:05Z", count==2, "2023-11-09T21:34:05Z", count==3, "2023-11-09T21:35:05Z")
| eval time_2=case(count==1, "2023-11-09T21:40:05Z", count==2, "2023-11-09T21:41:05Z", count==3, "2023-11-09T21:42:05Z")
| fields - count

These are just two examples. You can be as creative as needed.
I can see logs from a Cisco ASA firewall in Splunk, and we are getting logs when a connection closes. They include the total data sent in bytes.

Nov 1 12:19:48 ASA-FW-01 : %ASA-6-302014: Teardown TCP connection 4043630532 for INSIDE-339:192.168.42.10/37308 to OUTSIDE-340:192.168.36.26/8080 duration 0:00:00 bytes 6398 TCP FINs from INSIDE-VLAN339

I am unable to see bytes as a valid field, so I tried to create a field extraction ("Extract New Fields") for this:

^(?:[^:\n]*:){8}\d+\s+(?P<BYTES>\w+\s+)

But when I use it in a search, it fails:

index=asa_* src_ip = "192.168.42.10" | rex field=_raw DATA=0 "^(?:[^:\n]*:){8}\d+\s+(?P<BYTES>\w+\s+)"

OBJECTIVE: Calculate server throughput for flows using Cisco ASA logs, i.e. view the network throughput for the flows in Splunk.
If your initial search includes only start and end events, you can forego transaction and use stats to gather simple status and duration information, assuming a job with only a start event has actually failed and isn't currently in progress:

sourcetype=sjringo_jobstatus
| stats range(_time) as duration by host job_id jobrun_id
| where duration>0 ``` remove jobs with duration=0, i.e. no completion event ```
No subsearches (append, join, etc.) are required. Here's a set of example events very loosely based on Tidal Enterprise Scheduler, a scheduler I've used in the past:

2023-11-10 20:00:00 job_id=1 jobrun_id=1 jobrun_status=active
2023-11-10 20:01:00 job_id=1 jobrun_id=2 jobrun_status=active
2023-11-10 20:02:00 job_id=1 jobrun_id=2 jobrun_status=normal
2023-11-10 20:03:00 job_id=2 jobrun_id=3 jobrun_status=active
2023-11-10 20:04:00 job_id=2 jobrun_id=3 jobrun_status=normal
2023-11-10 20:05:00 job_id=3 jobrun_id=4 jobrun_status=active
2023-11-10 20:06:00 job_id=3 jobrun_id=5 jobrun_status=active
2023-11-10 20:07:00 job_id=3 jobrun_id=5 jobrun_status=normal
2023-11-10 20:08:00 job_id=4 jobrun_id=6 jobrun_status=active
2023-11-10 20:09:00 job_id=4 jobrun_id=6 jobrun_status=normal
2023-11-10 20:10:00 job_id=5 jobrun_id=7 jobrun_status=active
2023-11-10 20:11:00 job_id=5 jobrun_id=8 jobrun_status=active
2023-11-10 20:12:00 job_id=5 jobrun_id=8 jobrun_status=normal

In this example, job_id is the job definition, jobrun_id is the job instance, and jobrun_status is the state (active = started/running and normal = completed successfully). We can use the transaction command to uniquely identify the state of each job instance by host, job_id, and jobrun_id:

sourcetype=sjringo_jobstatus
| transaction keepevicted=true host job_id jobrun_id startswith=jobrun_status=active endswith=jobrun_status=normal
| table _time duration closed_txn _raw

Job instances with both a start (open) and end (close) event will have closed_txn=1; job instances with only a start event--an evicted transaction--will have closed_txn=0:

_time               duration closed_txn _raw
2023-11-10 15:11:00 60       1          2023-11-10 20:11:00 job_id=5 jobrun_id=8 jobrun_status=active 2023-11-10 20:12:00 job_id=5 jobrun_id=8 jobrun_status=normal
2023-11-10 15:10:00 0        0          2023-11-10 20:10:00 job_id=5 jobrun_id=7 jobrun_status=active
2023-11-10 15:08:00 60       1          2023-11-10 20:08:00 job_id=4 jobrun_id=6 jobrun_status=active 2023-11-10 20:09:00 job_id=4 jobrun_id=6 jobrun_status=normal
2023-11-10 15:06:00 60       1          2023-11-10 20:06:00 job_id=3 jobrun_id=5 jobrun_status=active 2023-11-10 20:07:00 job_id=3 jobrun_id=5 jobrun_status=normal
2023-11-10 15:05:00 0        0          2023-11-10 20:05:00 job_id=3 jobrun_id=4 jobrun_status=active
2023-11-10 15:03:00 60       1          2023-11-10 20:03:00 job_id=2 jobrun_id=3 jobrun_status=active 2023-11-10 20:04:00 job_id=2 jobrun_id=3 jobrun_status=normal
2023-11-10 15:01:00 60       1          2023-11-10 20:01:00 job_id=1 jobrun_id=2 jobrun_status=active 2023-11-10 20:02:00 job_id=1 jobrun_id=2 jobrun_status=normal
2023-11-10 15:00:00 0        0          2023-11-10 20:00:00 job_id=1 jobrun_id=1 jobrun_status=active

You can remove evicted transactions from your output with the default option of keepevicted=false.
Hi ITWhisperer,

| tstats latest(_time) as LatestEvent where index=waf_imperva by host

15 min time frame:

Host          LatestEvent
10.30.168.10  1699663326

Why is the query below not providing this result? My humble request as a struggling engineer, may I have your WhatsApp?

| tstats latest(_time) as LatestEvent where index=* by index, host
| eval LatestLog=strftime(LatestEvent,"%a %m/%d/%Y %H:%M:%S")
| eval duration = now() - LatestEvent
| eval timediff = tostring(duration, "duration")
| lookup HostTreshold host
| where duration > threshold
| rename host as "src_host", index as "idx"
| fields - LatestEvent
| search NOT (index="cim_modactions" OR index="risk" OR index="audit_summary" OR index="threat_activity" OR index="endpoint_summary" OR index="summary" OR index="main" OR index="notable" OR index="notable_summary" OR index="mandiant")
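One thing that stands out (just a guess from the SPL above): the final | search NOT (index=...) clause runs after index has been renamed to idx, so at that point there is no index field left to filter on. A sketch that applies the exclusions up front in the tstats where clause instead:

| tstats latest(_time) as LatestEvent where index=* NOT (index=cim_modactions OR index=risk OR index=audit_summary OR index=threat_activity OR index=endpoint_summary OR index=summary OR index=main OR index=notable OR index=notable_summary OR index=mandiant) by index, host
| eval LatestLog=strftime(LatestEvent,"%a %m/%d/%Y %H:%M:%S")
| eval duration = now() - LatestEvent
| eval timediff = tostring(duration, "duration")
| lookup HostTreshold host
| eval threshold = tonumber(threshold)
| where duration > threshold
| rename host as "src_host", index as "idx"
| fields - LatestEvent

The tonumber() call is only needed if the threshold values come back from the lookup as strings.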
This seemed to work by adding | sort -_time | head 1, i.e.

index=anIndex sourcetype=aSourcetype aJobName AND "START of script"
| sort -_time | head 1
| append [ search index=anIndex sourcetype=aSourcetype aJobName AND "COMPLETED OK" ]

The way I understood sort 0 is that it forces all events, but in this scenario there should not be more than a handful. I think I saw the limit without the 0 was 10,000? So, I think this worked but I need to do some additional testing. For what I'm working on, I have 3 jobs that I'm trying to track, and each has a different jobName, so I ended up writing the initial query as:

index=anIndex sourcetype=aSourcetype aJobName1 AND "START of script"
| sort -_time | head 1
| append [ search index=anIndex sourcetype=aSourcetype aJobName1 AND "COMPLETED OK" ]
| append [ search index=anIndex sourcetype=aSourcetype aJobName2 AND "START of script" | sort -_time | head 1 | append [ search index=anIndex sourcetype=aSourcetype aJobName2 AND "COMPLETED OK" ]]
| append [ search index=anIndex sourcetype=aSourcetype aJobName3 AND "START of script" | sort -_time | head 1 ]
| append [ search index=anIndex sourcetype=aSourcetype aJobName3 AND "COMPLETED OK" ]

Is there a better way to write this besides using three appends? I can't use one append and then | head 3 for the START query, as one of the jobs' multiple STARTs could kick out a START event from one of the other jobs depending upon when the jobs start...
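One possible alternative without appends (a sketch; it assumes the three literal job-name strings can be matched in a single base search and distinguished with searchmatch()):

index=anIndex sourcetype=aSourcetype ("START of script" OR "COMPLETED OK") (aJobName1 OR aJobName2 OR aJobName3)
| eval jobName=case(searchmatch("aJobName1"), "aJobName1", searchmatch("aJobName2"), "aJobName2", searchmatch("aJobName3"), "aJobName3")
| eval phase=if(searchmatch("\"START of script\""), "START", "COMPLETED")
| eventstats max(eval(if(phase="START", _time, null()))) as latestStart by jobName
| where phase="COMPLETED" OR _time=latestStart

This keeps every COMPLETED OK event and only the most recent START per jobName, which is roughly what the three head 1 subsearches were doing.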
The keepevicted=true kind of works for one of the scenarios. The way it works under normal conditions (no errors), there will be only one start and one complete event, except when a job has started and not ended because it is currently running. When there is an exception, the job was restarted and completed with no additional errors, so from newest to oldest I have (Completed, Start, Start), or multiple starts depending upon how many errors caused it to abort. When a job is currently running, there is a start only (or multiple starts) with no complete. Then, to add more to this, I am not looking for just one job but multiple, each with its unique job name, creating the transaction using the job name, and there could be multiple jobs currently running... Instead of using transaction, I'm thinking I might need to do a join, with the first query looking for events with the jobName and Started, and the second query looking for jobName and Completed, but then having multiple starts for a jobName would cause problems on the join?
Hi there: I have the following makeresults query:

| makeresults count=3
| eval source="abc"
| eval msg="consumed"
| eval time_1="2023-11-09T21:33:05Z"
| eval time_2="2023-11-09T21:40:05Z"

So I want to create three different events where the values for time_1 & time_2 are different for each event. How can I do that? Thanks!
| eval ip=if(isnull(vuln) AND isnull(score) AND isnull(company),"FILTERED",ip)
| eval vuln=if(ip="FILTERED",ip,vuln)
| eval score=if(ip="FILTERED",ip,score)
| eval company=if(ip="FILTERED",ip,company)
Hello, how do I filter out a whole row if some fields are empty, but not filter it if one of the fields has a value? I appreciate your help. Thank you.

I want to filter out a row if the vuln, score, and company fields are all empty/NULL (all 3 fields are empty: rows 2 and 6 in the table below). If the vuln OR company fields have values (NOT EMPTY), do not filter:

Row 4: vuln=empty, company=company D (NOT empty)
Row 9: vuln=vuln9 (NOT empty), company=empty

If I use the search below, it will filter out rows where vuln OR company are empty (rows 4 and 9):

index=testindex vuln=* AND score=* AND company=*

Current data:

no  ip        vuln    score  company
1   1.1.1.1   vuln1   9      company A
2   1.1.1.2
3   1.1.1.3   vuln3   9      company C
4   1.1.1.4                  company D
5   1.1.1.5   vuln5   7      company E
6   1.1.1.6
7   1.1.1.7   vuln7   5      company G
8   1.1.1.8   vuln8   5      company H
9   1.1.1.9   vuln9
10  1.1.1.10  vuln10  4      company J

Expected Result: ***NEED CORRECTION***

no  ip        vuln      score     company
1   1.1.1.1   vuln1     9         company A
2   FILTERED  FILTERED  FILTERED  FILTERED
3   1.1.1.3   vuln3     9         company C
4   1.1.1.4                       company D
5   1.1.1.5   vuln5     7         company E
6   FILTERED  FILTERED  FILTERED  FILTERED
7   1.1.1.7   vuln7     5         company G
8   1.1.1.8   vuln8     5         company H
9   1.1.1.9   vuln9
10  1.1.1.10  vuln10    4         company J

Sorry, this is what I mean by FILTERED:

no  ip        vuln    score  company
1   1.1.1.1   vuln1   9      company A
3   1.1.1.3   vuln3   9      company C
4   1.1.1.4                  company D
5   1.1.1.5   vuln5   7      company E
7   1.1.1.7   vuln7   5      company G
8   1.1.1.8   vuln8   5      company H
9   1.1.1.9   vuln9