I always struggle with this common task (common for me): I have a v8 UF set up on a Windows 10 machine, and it is logging all of the Windows Event Logs beautifully (back to my Splunk v8 server); however, I need to monitor something specific on this machine. (NB: I do NOT use a deployment server in any way, anywhere.)

I need this Windows UF to monitor all *.log files, recursively, within these directories:

C:\ProgramData\vMix\    (any/all *.log files, recursively)
C:\Users\pc\Documents\vMixStorage\logs    (any/all *.log files, recursively)

So I edit inputs.conf:

notepad++.exe "C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf"

and I add these stanzas, one at a time (and then test to see if data is getting to my Splunk server):

[monitor://C:\Users\pc\Documents\vMixStorage\log\*]
disabled = 0
index = pcs
recursive = true
sourcetype = vMIX

[monitor://C:\ProgramData\vMix\...\*.log]
disabled = 0
index = pcs
blacklist = .*stream.*|stream.*
whitelist = *.log
recursive = true
sourcetype = vMIX

[monitor://C:\ProgramData\vMix\*.log]
disabled = 0
index = pcs
blacklist = .*stream.*|stream.*
sourcetype = vMIX

[monitor://C:\Users\pc\Documents\vMixStorage\...\*.log]
disabled = 0
index = pcs
recursive = true
sourcetype = vMIX

[monitor://C:\Users\pc\Documents\vMixStorage\logs\]
disabled = 0
index = pcs
blacklist = .*stream.*
whitelist = *.log
recursive = true
sourcetype = vMIX

At some point while adding the above, one stanza at a time, I did get the *.log files to flow in; however, they then stopped updating/flowing in (but the Windows Event Log is of course still flowing in, rock solid).

I get this output from .\splunk.exe list monitor, which to me seems like it is NOT what I want (as I *think* I should be seeing those directories under "Monitored Directories", but I have yet to be able to get that to occur):

PS C:\Program Files\SplunkUniversalForwarder\bin> .\splunk.exe list monitor
Monitored Directories:
    [No directories monitored.]
Monitored Files:
    C:\ProgramData\vMix\*.log
    C:\ProgramData\vMix\...\*.log
    C:\Users\pc\Documents\vMixStorage\...\*.log
    C:\Users\pc\Documents\vMixStorage\log\*
    C:\Users\pc\Documents\vMixStorage\logs\

btool debug:

.\splunk.exe cmd btool inputs list --debug
## <snip> ##
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  [monitor://C:\ProgramData\vMix\*.log]
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  _rcvbuf = 1572864
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  blacklist = .*stream.*|stream.*
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  disabled = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf  host = vMIX-JCv71-p1000
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  index = pcs
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  sourcetype = vMIX
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  [monitor://C:\ProgramData\vMix\...\*.log]
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  _rcvbuf = 1572864
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  blacklist = .*stream.*|stream.*
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  disabled = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf  host = vMIX-JCv71-p1000
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  index = pcs
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  recursive = true
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  sourcetype = vMIX
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  whitelist = *.log
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  [monitor://C:\Users\pc\Documents\vMixStorage\...\*.log]
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  _rcvbuf = 1572864
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  disabled = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf  host = vMIX-JCv71-p1000
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  index = pcs
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  recursive = true
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  sourcetype = vMIX
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  [monitor://C:\Users\pc\Documents\vMixStorage\log\*]
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  _rcvbuf = 1572864
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  disabled = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf  host = vMIX-JCv71-p1000
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  index = pcs
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  recursive = true
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  sourcetype = vMIX
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  [monitor://C:\Users\pc\Documents\vMixStorage\logs\]
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  _rcvbuf = 1572864
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  blacklist = .*stream.*
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  disabled = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf  host = vMIX-JCv71-p1000
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  index = pcs
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  recursive = true
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  sourcetype = vMIX
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\local\inputs.conf  whitelist = *.log
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\default\inputs.conf  [monitor://C:\Windows\System32\DHCP]
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  _rcvbuf = 1572864
C:\Program Files\SplunkUniversalForwarder\etc\apps\FINAL_Splunk_TA_windowsLOCALip\default\inputs.conf  crcSalt = <SOURCE>
## <snip> ##

Can anyone please help or point me to the correct stanza I should be using here? I really have spent hours searching and reading forum posts (which is how I arrived at the stanzas above), as I know this is a common task, but I know I'm still not doing this correctly (and it's not working). Thank you! (Apologies for the poor spacing; I have tried to re-edit, but it does not seem to be saving my changes on edit -> post.)
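One likely fix, as a sketch (not tested on this machine): in inputs.conf, whitelist and blacklist are regular expressions matched against the full file path, not shell wildcards, so whitelist = *.log is an invalid regex (it starts with a quantifier); and a monitor path that itself contains wildcards is tracked under "Monitored Files" rather than "Monitored Directories". Pointing each stanza at the bare directory and filtering with a regex whitelist should register as a monitored directory and pick up new files as they appear:

[monitor://C:\ProgramData\vMix]
disabled = 0
index = pcs
sourcetype = vMIX
whitelist = \.log$
blacklist = stream

[monitor://C:\Users\pc\Documents\vMixStorage\logs]
disabled = 0
index = pcs
sourcetype = vMIX
whitelist = \.log$

Directory monitors descend recursively by default (recursive = true is the default), and a forwarder restart (.\splunk.exe restart) is needed after editing inputs.conf by hand. The blacklist = stream line is just an assumed translation of the original .*stream.* intent.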
I'm trying to join the correct source hostname onto my event for the machine from which an RDP connection was initiated. Since the event only provides the source IP address, I want to join the hostname from my summary index, which records hostnames with the IP addresses they have been assigned over time (1m buckets). Unfortunately, it's not working as expected. I built the search as follows:

<search string for RDP Logon Event>
| bucket span=1m _time
| join type=left
    [search index=<summary_index>
    | eval source_host = hostname
    | eval Source_Network_Address = IP
    | fields _time Source_Network_Address source_host]
| table _time host source_host Source_Network_Address

Now what happens is that the Source_Network_Addresses are getting matched, but it only returns the latest _time value from the summary index for the matched network address, for all rows, which of course mostly results in a wrong hostname. Why is it not also matching the _time value from the base search with the _time value from the subsearch? Both _time fields are in timestamp format. Thanks for helping me!
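One thing worth trying, as a sketch (using the same field names as above): the subsearch here was never bucketed, so its _time values rarely line up exactly with the minute-aligned _time of the base search. Bucketing both sides to the same span and naming the join fields explicitly makes the time-plus-IP match unambiguous:

<search string for RDP Logon Event>
| bucket span=1m _time
| join type=left _time Source_Network_Address
    [search index=<summary_index>
    | bucket span=1m _time
    | eval source_host = hostname
    | eval Source_Network_Address = IP
    | fields _time Source_Network_Address source_host]
| table _time host source_host Source_Network_Address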
Hello,

I have installed bonnie++ Ver 1.03e on Ubuntu 20.04.4 and tried running the command bonnie++; attached please find a screenshot of the output.

May I know how to calculate or check the IOPS from this bonnie++ output? Should it be just the last column > Random > 313.2 /sec? Thank you! I have heard that we should have at least 800 IOPS for Splunk, and ideally 1200+.
I have HEC set up to send events to Splunk in JSON format:

{ Status: Down Source: GCP URL: url_1 }
{ Status: Up Source: GCP URL: url_2 }
{ Status: Down Source: AWS URL: url_1 }
{ Status: Up Source: AWS URL: url_2 }

I want to extract a value from the JSON and then declare a variable; I am not sure whether I should use eval or stats. For example: declare a variable url_1_aws_status, which should be Down; declare a variable url_2_gcp_status, which should be Up. How do I extract a value from JSON and then declare a variable?
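A sketch of one approach (the index and sourcetype names are placeholders; this assumes Status, Source, and URL are extracted automatically from the JSON): SPL has no variables as such, but you can build a field name per URL/Source pair with eval and keep the latest status per pair with stats:

index=your_hec_index sourcetype=your_hec_sourcetype
| eval key = lower(URL) . "_" . lower(Source) . "_status"
| stats latest(Status) as value by key

This gives one row per key (url_1_aws_status = Down, url_2_gcp_status = Up, and so on). If you need each key as its own column in a single row instead, append | transpose 0 header_field=key.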
I have events like these (just some made-up data) that are pushed in JSON format to Splunk:

{"name":"abc", "grade":"third", "result": "PASS", "courses":["math","science","literature"], "interests":["this","that"]}

Events are being generated all the time, and I need to get the latest values of "result", "courses" and "interests" for a given "name" and "grade". Note that "courses" and "interests" are lists/arrays, while the other fields are strings. So I am doing something like:

index=whatever name=abc grade=third
| stats latest(courses) as courses, latest(interests) as interests, latest(result) as result

index=whatever name=abc grade=third
| stats latest(courses{}) as courses, latest(interests{}) as interests, latest(result) as result

index=whatever name=abc grade=third
| eval courses=json_array_to_mv(courses), interests=json_array_to_mv(interests)
| stats latest(courses) as courses, latest(interests) as interests, latest(result) as result

I also tried a "tstats" approach. None of these work: I get courses and interests back as empty values. result comes in fine, because it's a string. How can I get the "latest" lists of courses and interests given the other values?
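A sketch that may work (assuming the raw event is the JSON shown, so json_extract can read the arrays back out of _raw): latest() behaves poorly on the flattened courses{} fields, so keep each array as a single JSON string through the stats, then split it back into a multivalue field afterwards:

index=whatever name=abc grade=third
| eval courses_json=json_extract(_raw, "courses"), interests_json=json_extract(_raw, "interests")
| stats latest(courses_json) as courses, latest(interests_json) as interests, latest(result) as result
| eval courses=json_array_to_mv(courses), interests=json_array_to_mv(interests)

json_extract, like json_array_to_mv, requires Splunk 8.1 or later.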
I have a lookup file that has 5 columns: src_ip, dest_ip, dest_port, signature and active. src_ip has 18 values, while dest_ip has 50 values. Signature is based on the dest_ip field, meaning for 30 of the dest_ip values we'll see a signature named "ssh login"; the other 20 sigs will be "ftp login". Sigs that are "ssh login" will always be dest_port=22, and sig "ftp login" will always be dest_port=21. The src_ip can hit any of the destinations / dest_ports / signatures.

I've tried this in my search, but it falls short of adding the src_ip against all the dest_ip:

| inputlookup exclusion_list.csv
| fields src_ip dest_ip dest_port signature
| format
| table search

The issue I'm seeing is that once the search gets to a row in the lookup file that doesn't contain a src_ip, it doesn't add on to the results. So in essence I end up with 18 lines that have:

( (dest_ip=xxxx AND dest_port=22 AND signature=xxx AND src_ip=yyyy) OR (dest_ip=xxxx AND dest_port=22 AND signature=xxx) )

I can't figure out how to make the command send the src_ip's to all the dest_ip / dest_port / signature combos. It's hard to write out what I want, but hopefully there is some help out there. Thanks in advance.
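A sketch of one approach, assuming every src_ip should be paired with every dest_ip / dest_port / signature row: build the cross product explicitly with a shared dummy join key before running format, so format never sees a row with a missing src_ip:

| inputlookup exclusion_list.csv
| fields dest_ip dest_port signature
| dedup dest_ip dest_port signature
| eval joiner=1
| join max=0 joiner
    [| inputlookup exclusion_list.csv
     | where isnotnull(src_ip) AND src_ip!=""
     | fields src_ip
     | dedup src_ip
     | eval joiner=1]
| fields - joiner
| format

join max=0 keeps every matching row instead of only the first, which is what turns the shared joiner=1 key into a full cross product (18 src_ip x 50 destination rows).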
I'm trying to do a fresh install of Enterprise Security onto a search head cluster. I uploaded the app via the GUI onto the SHC deployer, but before I click "Start Configuration Process", I notice the following message:

Single Search Head Deployment
Splunk Enterprise Security is being configured on a single search head deployment.

How do I get it to recognize that this is a search head cluster deployer?
Is there a way to make a timechart like this in Splunk? I really don't need the number values on the y-axis; I mostly care about showing the status as Good, Fair or Poor.
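A sketch of one way to do this (the index, the health_score field, and the thresholds are all placeholders; substitute your own): map the measurement to a numeric level before charting, then relabel the y-axis in the chart formatting so 3/2/1 read as Good/Fair/Poor:

index=your_index
| eval level=case(health_score >= 80, 3, health_score >= 50, 2, true(), 1)
| timechart span=1h avg(level) as status_level

Setting the y-axis min/max to 1 and 3 in the visualization options keeps the chart to the three bands regardless of the underlying numbers.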
Hello All,

I have a really simple search; while it works, I'd like to do some operations on that data:

index=xxxx earliest=-2w@w0 latest=@w6@d+24h
| timechart span=7d count(response_time)

Output is:

2022-03-13    3,xxx,xxx
2022-03-20    3,xxx.xxx

The deal is, I'd really like to have those separate outputs as variables, like Week1 and Week2. That way I could do some operations to see my site's week-to-week volume change, so I can normalize error data. Hopefully this makes sense.
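A sketch building on the search above: rather than naming the rows as variables, delta computes each row's difference from the previous row, which gives the week-over-week change directly:

index=xxxx earliest=-2w@w0 latest=@w6@d+24h
| timechart span=7d count(response_time) as weekly_count
| delta weekly_count as change
| eval pct_change=round((change / (weekly_count - change)) * 100, 2)

The second row then carries both the raw change and the percentage change relative to Week1.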
Hi All, after querying and grouping my data, my timestamps are in different formats, like:

2021-01-20 07:22:34.545674
2020-02-18T11:03:44.543+0000
2021-01-25T11:05:33.003Z
2022-04-01 19:51:01.411826Z
2021-05-22 02:49:26.607839

How can I get a uniform format for all the timestamp values in the stats table?
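A sketch (assuming the field is named timestamp): strptime returns null when its format string doesn't match, so you can try each observed format in turn with coalesce and then re-render everything uniformly with strftime:

| eval ts=coalesce(
    strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%3N%z"),
    strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%3NZ"),
    strptime(timestamp, "%Y-%m-%d %H:%M:%S.%6NZ"),
    strptime(timestamp, "%Y-%m-%d %H:%M:%S.%6N"))
| eval timestamp=strftime(ts, "%Y-%m-%d %H:%M:%S.%3N")

The four format strings cover the five samples above; add more patterns to the coalesce if other variants show up.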
I found a close answer to what I'm looking for here: https://community.splunk.com/t5/Splunk-Search/Why-cant-i-supply-a-field-as-value-for-mvfilter/m-p/450564/highlight/true#M127583

That example excludes one value (add \"a\" for more), and it works:

| makeresults
| eval mymvfield ="a b c"
| makemv mymvfield
| eval excludes = mvfilter(NOT in(mymvfield, [| makeresults | eval search = "\"b\"" | return $search]))

What I'm looking for is to use return, which seemingly translates to (b) OR (a) ...:

| makeresults
| eval mymvfield ="a b c"
| makemv mymvfield
| eval excludes = mvfilter(NOT in(mymvfield, [| search something | return 3 $some_field]))

I get weird parsing errors, which I thought maybe could be solved by using "format", but I'm at a loss. I reckon you could probably solve this by doing a subsearch and filtering prior to making the multivalue field; I'm curious, however, whether you can make this query work. Please let me know if anything is unclear.
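A sketch that may explain the errors (reusing some_field from the second query): return $some_field emits (value) OR (value), which is search syntax, while in() expects a comma-separated list of quoted strings. Building that list yourself with stats values and mvjoin, then returning it as a single string, follows the pattern of the working example:

| makeresults
| eval mymvfield ="a b c"
| makemv mymvfield
| eval excludes = mvfilter(NOT in(mymvfield,
    [| search something
     | stats values(some_field) as v
     | eval search = "\"" . mvjoin(v, "\",\"") . "\""
     | return $search]))

The subsearch then substitutes something like "val1","val2","val3" into the in() call, exactly as the hard-coded \"b\" did.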
Hi,

I would like to monitor one value from each event. When it keeps increasing over 5 events, an alarm should be triggered. I use autoregress to generate the difference between the current event and the previous event (see below). But how can I detect that the difference stays positive across five events? Thank you very much!

| autoregress C_avg as C_avg_prev
| eval C_delta=C_avg-C_avg_prev
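A sketch building on the search above: flag each positive difference, then sum the flags over a sliding window of 5 events with streamstats; a window sum of 5 means five consecutive increases, which can serve as the alert condition:

| autoregress C_avg as C_avg_prev
| eval increased=if(C_avg - C_avg_prev > 0, 1, 0)
| streamstats window=5 sum(increased) as increases_in_window
| where increases_in_window=5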
Hi, please bear with me, I'm VERY new to Splunk. I've been googling, trying to find the proper search, but I'm coming up empty. We had someone make a change to an account in Outlook, and we need to know who it was. It's not a typical user account; it's for a conference room, used to book it for meetings. Again, I am very new to Splunk; my boss asked me to try and figure this out. I appreciate any help you can offer.
search earliest=-10m latest=now index="xyz" (host=abcd123 OR host=abcd345) TxnStart2End
| rex "Avg=(?<avgRspTime>\d+)"
| rex "count=(?<count>\d+)"
| timechart span=5m sum(count) as Vol, avg(avgRspTime) as ART
| eval TPS=(Vol/300)
| table _time Vol ART TPS
| sort _time

The above query fetches records in 5-minute spans, so no worries there. The issue is when the Splunk job fails and runs again later. For example: suppose my job last ran at 10:00 AM and fetched records up to 10:00 AM in 5-minute spans. The job fails at 10:01 AM and runs again at 11:00 AM, so the data between 10:01 AM and 11:00 AM is missing. My requirement is to recover that missing data in 5-minute spans, i.e. the 10:05 data, 10:10 data, ... 10:50, 10:55 and 11:00 data. Please help with the correct query.
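A sketch of one mitigation (this assumes the job runs every 5 minutes and that recomputing recent buckets is acceptable): widen the search window well beyond the schedule interval; since timechart buckets by _time, a delayed run then backfills the gap by re-reporting the same 5-minute buckets:

search earliest=-70m@m latest=now index="xyz" (host=abcd123 OR host=abcd345) TxnStart2End
| rex "Avg=(?<avgRspTime>\d+)"
| rex "count=(?<count>\d+)"
| timechart span=5m sum(count) as Vol, avg(avgRspTime) as ART
| eval TPS=(Vol/300)
| table _time Vol ART TPS
| sort _time

The -70m@m window is an assumed value sized to survive an hour-long outage; adjust it to the longest gap you need to tolerate.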
Hello, I am trying to anonymize data on the forwarder using the configuration below. The data AABC123456789012 needs to be transformed to AABC12XXXXXX9012. The regex seems not to be working. Any help is appreciated.

Sample event:

Mar 31 13:34:56 10.209.7.69 Mar 31 13:34:56 1234567890_admin yia0WAM 65.92.243.116 eyuiopppp.***.com 123.55.000.88 - AABC123456789012 [31/Mar/2022:13:34:39 -0400] 'GET /me-and-***/***intranetstandards/_assets-responsive/v1/fonts/trtr/rtyruroop-ghjtltutt-webfont.woff HTTP/1.1' 200 29480 erty-tyunht.pg.uhg.com 31/Mar/2022:13:34:39.531 -0400 6163 text/plain; charset=UTF-8 "https://****.yyy.com/assets/hr/css/*******.min.css"

transforms.conf:

[abcbc_isam]
REGEX = 'AABC[0-9]{5,16}'
DEST_KEY = _raw
FORMAT = $1AABC[0-9]{2}XXXXXX[0-9]{4}$2

props.conf:

[host::AE110501]
TRANSFORMS-set = abcbc_isam
disabled = false
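A corrected sketch (untested against this exact event): REGEX takes an unquoted regex with capture groups around the text to keep, and FORMAT can only splice those groups back together with literal replacement text; it cannot contain character classes like [0-9]. Something along these lines should produce AABC12XXXXXX9012:

transforms.conf:

[abcbc_isam]
REGEX = (?m)^(.*AABC[0-9]{2})[0-9]{6}([0-9]{4}.*)$
FORMAT = $1XXXXXX$2
DEST_KEY = _raw

props.conf:

[host::AE110501]
TRANSFORMS-set = abcbc_isam

Also note that TRANSFORMS masking happens at parse time, so it must live on an indexer or heavy forwarder; a universal forwarder does not apply it to unstructured data.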
I have a query to search particular event IDs from Active Directory and see which targets they apply to. Instead of listing 100 different AD groups, I chose to use a lookup table. My query is as follows:

index=<index name> EventID IN (4728,4729) TargetUserName IN
    [| inputlookup Test_Splunk_Lookup_Table_v2.csv
     | return 200 "$Group_Name"]
| eval EventID=case(EventID=="4728","Added",EventID=="4729","Removed")
| rename Computer AS "Domain Controller", TargetUserName AS "Group", EventID AS "Action"
| table "_time","SubjectUserName","Action","MemberName","Group","Domain Controller"

The search works well as long as the group names in the lookup table are unique. But if there is an entry in the lookup table that has derivatives (i.e. AD_Group), it returns all the derivatives as well, instead of only what is in the lookup table.

Ex. The lookup table's Group_Name column contains "AD_Group", "AD_Group_1", "AD_Group_2". The search returns all of the above groups plus additional groups not in the lookup table: AD_Group_3, AD_Group_4, etc...

I need to know how I can return just the entries in the list and not the derivatives of AD_Group.
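A sketch worth trying (same lookup and fields as above): instead of handing bare values to IN, have the subsearch emit explicit field=value pairs via rename and format, which forces exact matching on TargetUserName so prefixes like AD_Group no longer sweep in their derivatives:

index=<index name> EventID IN (4728,4729)
    [| inputlookup Test_Splunk_Lookup_Table_v2.csv
     | rename Group_Name AS TargetUserName
     | fields TargetUserName
     | format]
| eval EventID=case(EventID=="4728","Added",EventID=="4729","Removed")
| rename Computer AS "Domain Controller", TargetUserName AS "Group", EventID AS "Action"
| table "_time","SubjectUserName","Action","MemberName","Group","Domain Controller"

The subsearch expands to (TargetUserName="AD_Group") OR (TargetUserName="AD_Group_1") OR ..., one exact clause per lookup row.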
I would like to understand how Splunk SOAR sends data to the indexer endpoints that are configured under Administration -> Search Settings -> Indexers. I would like to send data to two different HEC endpoints (two different Splunk instances), but I'm not sure whether Splunk SOAR treats multiple indexers as something to load balance across, or as multiple destinations to send all data to. I attempted to use _TCP_Routing on one of the HEC endpoints to take care of this, but it doesn't seem to work right, so I figured I'd go back to the source. Anyway, if anyone knows how that works, I'd appreciate the insight! Thanks.
I'm trying to run the following commands on an index:

| eval elast=strptime(lastSeen,"%Y-%m-%d %H:%M:%S")
| eval daysSinceLastSeen = round((now() - elast)/86400, 1) ```Calculate days elapsed since lastSeen```
| eval active_status = if((latest(daysSinceLastSeen) <= 28), "active", "inactive")

There is an error that keeps popping up, stating that the 'latest' function is unsupported or undefined. How do I correct that?
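A sketch of a fix (grouping by asset_id is an assumption; use whatever field identifies an asset in your data): latest() is a stats/aggregation function, not an eval function, so it cannot appear inside if(). Computing the latest value per asset with eventstats first, then comparing it in eval, avoids the error:

| eval elast=strptime(lastSeen,"%Y-%m-%d %H:%M:%S")
| eval daysSinceLastSeen = round((now() - elast)/86400, 1)
| eventstats latest(daysSinceLastSeen) as latest_days by asset_id
| eval active_status = if(latest_days <= 28, "active", "inactive")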
Hello,

I have a field I created called daysSinceLastSeen that shows the days since an asset was last seen in a scan. I now want to create a histogram to show the distribution of that data by days. How do I do that in SPL?

In case you need my search, it is as follows:

| eval elast=strptime(lastSeen,"%Y-%m-%d %H:%M:%S")
| eval daysSinceLastSeen = round((now() - elast)/86400, 1) ```Calculate days elapsed since lastSeen```
| table _time, status, asset_id, scanID, lastSeen, daysSinceLastSeen, last*, firstSeen, ipaddress, source, host
| sort - _time
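A sketch continuing the search above (the span=7 bucket width is an arbitrary choice; adjust to taste): bin groups the numeric field into fixed-width buckets, and stats count by bucket gives the histogram, which renders well as a column chart:

| eval elast=strptime(lastSeen,"%Y-%m-%d %H:%M:%S")
| eval daysSinceLastSeen = round((now() - elast)/86400, 1)
| bin daysSinceLastSeen span=7
| stats count by daysSinceLastSeen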
I am looking to set up an alert that will trigger when no messages have been sent to a queue in the last X minutes. Does anyone have a sample of a similar alert? Thanks in advance!!
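A sketch of one common pattern (the index, sourcetype, and 15-minute window are placeholders): count the queue events in the window and keep a row only when the count is zero, then have the alert trigger on "number of results > 0", scheduled every X minutes:

index=your_queue_index sourcetype=your_queue_sourcetype earliest=-15m@m latest=@m
| stats count
| where count=0

stats count returns a single row with count=0 even when no events match, so the where clause yields exactly one result whenever the queue has gone quiet.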