Activity Feed
- Posted Re: Search to check if request duration search has recovered on Splunk Search. 02-21-2024 05:20 AM
- Karma Re: Search to check if request duration search has recovered for ITWhisperer. 02-21-2024 05:20 AM
- Posted Re: Search to check if request duration search has recovered on Splunk Search. 02-20-2024 05:38 AM
- Posted Re: Search to check if request duration search has recovered on Splunk Search. 02-16-2024 07:15 AM
- Posted Search to check if request duration search has recovered on Splunk Search. 02-15-2024 03:25 AM
- Tagged Search to check if request duration search has recovered on Splunk Search. 02-15-2024 03:25 AM
- Tagged Search to check if request duration search has recovered on Splunk Search. 02-15-2024 03:25 AM
- Posted Re: Timechart % failures every 30 mins from nginx access logs on Splunk Search. 02-15-2024 02:51 AM
- Karma Re: Timechart % failures every 30 mins from nginx access logs for ITWhisperer. 02-15-2024 02:51 AM
- Posted Timechart % failures every 30 mins from nginx access logs on Splunk Search. 02-13-2024 07:14 AM
- Tagged Timechart % failures every 30 mins from nginx access logs on Splunk Search. 02-13-2024 07:14 AM
- Tagged Timechart % failures every 30 mins from nginx access logs on Splunk Search. 02-13-2024 07:14 AM
- Posted Re: Group results and list common occurrences by time on Splunk Search. 11-25-2022 02:10 AM
- Posted Re: Group results and list common occurrences by time on Splunk Search. 11-25-2022 01:59 AM
- Posted How to group results and list common occurrences by time? on Splunk Search. 11-24-2022 08:51 AM
- Posted Re: Count events with differing strings in same field on Splunk Search. 09-28-2021 08:47 AM
- Karma Re: Count events with differing strings in same field for PickleRick. 09-28-2021 08:47 AM
- Posted Re: Count events with differing strings in same field on Splunk Search. 09-27-2021 02:49 AM
- Karma Re: Count events with differing strings in same field for richgalloway. 09-27-2021 02:42 AM
- Posted Count events with differing strings in same field on Splunk Search. 09-24-2021 02:17 AM
02-21-2024
05:20 AM
Absolutely perfect, thank you!
02-20-2024
05:38 AM
Thanks again @ITWhisperer. Is there any way to restrict the query to the previous 2 time bins? The cron scheduler doesn't fire exactly on the hour, so I'm getting 3 bins, as you said. I'm thinking of running it at 1:05pm; if that could pick up the 12:30-45 and 12:45-1:00 bins, I think that would work well.
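One way to keep only the two most recent complete 15-minute bins despite the cron jitter (a sketch, not the accepted answer from the thread; index and source names taken from the earlier posts) is to search back slightly more than 30 minutes and drop the trailing partial bin:

```spl
index=my_index source="/var/log/nginx/access.log" earliest=-35m latest=now
| bin _time span=15m
| stats avg(request_time) as Average_Request_Time by _time
| sort 0 _time
| head 2
```

Run at 1:05pm, the 35-minute window starts at 12:30, so binning produces the 12:30, 12:45, and partial 1:00 bins; `head 2` after the ascending sort keeps only the two complete ones.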
02-16-2024
07:15 AM
Thanks @ITWhisperer, but this doesn't seem to work. I've simulated the average request time being over 1 second in the logs, and this search returns alert=1 straight away, when I'd want it to return that only when searching the 2nd time window, to say that we'd actually recovered from the high request time. Can you explain what is happening from streamstats onwards? I can't get my head round it, and I don't see how it separates the 2 time windows. I've been running the search manually looking back 30 mins and it just returns alert=1 every time. FYI, I initially got my time window wrong: it is actually checking every 15-minute window, so I'd want to compare the 2x 15-min windows over the last 30 mins to see if it has recovered. I don't think this makes a difference to the query though.
02-15-2024
03:25 AM
index=my_index source="/var/log/nginx/access.log"
| stats avg(request_time) as Average_Request_Time
| where Average_Request_Time > 1

I have this query set up as an alert that fires if my web app's average request duration goes over 1 second, searching back over a 30-min window. I want to know when this alert has recovered. So, effectively, run this query twice: against the 1st 30 mins of an hour, then the 2nd 30 mins, and return a result I can alert on. That result would indicate that the 1st 30 mins averaged over 1 second and the 2nd 30 mins averaged under 1 second, and thus it recovered. I have no idea where to start with this! But I do want to keep the alert query above as my main alert for an issue and have a 2nd alert query for this recovery element. Hope this is possible.
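One possible shape for the recovery search (a sketch under the assumptions in the post, not a confirmed solution): bin the last hour into 30-minute windows, carry the previous window's average forward with streamstats, and keep only rows where the previous window was over 1 second and the current one is under:

```spl
index=my_index source="/var/log/nginx/access.log" earliest=-60m@m latest=@m
| bin _time span=30m
| stats avg(request_time) as Average_Request_Time by _time
| sort 0 _time
| streamstats current=f last(Average_Request_Time) as Previous_Average
| where Previous_Average > 1 AND Average_Request_Time <= 1
| eval alert=1
| table _time Previous_Average Average_Request_Time alert
```

The search returns a row (with alert=1) only when the two consecutive windows show the over-then-under pattern, so the alert condition can simply be "number of results > 0".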
02-15-2024
02:51 AM
Works perfectly, thanks!
02-13-2024
07:14 AM
index=myindex source="/var/log/nginx/access.log"
| eval status_group=case(status!=200, "fail", status=200, "success")
| stats count by status_group
| eventstats sum(count) as total
| eval percent=round(count*100/total, 2)
| where status_group="fail"

I'm looking at nginx access logs for a web application. This query tells me the number of failures (non-200), the total number of calls (all messages in the log), and the % of failures vs the total, like so:

status_group | count | percent | total
---|---|---|---
fail | 20976 | 2.00 | 1046605

What I'd like to do next is timechart these every 30 mins, to see what % of failures I get in each 30-min window. The only attempt where I got close calculated it as a % of the total calls in the whole log, skewing the result completely. Basically, a row like the one above for every 30 mins of my search period. Feel free to rewrite the entire query, as I cobbled this together anyway.
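A sketch of one way to get a per-window failure percentage (not the accepted answer from the thread): let timechart do the 30-minute bucketing, counting failures and total calls per bucket so the percentage is computed within each window rather than against the whole log:

```spl
index=myindex source="/var/log/nginx/access.log"
| eval status_group=case(status!=200, "fail", status=200, "success")
| timechart span=30m count(eval(status_group="fail")) as fail count as total
| eval percent=round(fail*100/total, 2)
```

Because `total` here is the event count within each 30-minute bucket, `percent` is the failure rate for that window only.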
11-25-2022
02:10 AM
Oh, I figured out how to do the sort I want:

index=index source="/var/log/nginx/access.log"
| where status!=200
| sort time_local
| stats list(time_local) by request_length status body_bytes_sent remote_addr

Thanks again @yuanliu 😃
11-25-2022
01:59 AM
Oooohhhhh, I didn't realise it was that simple! Thank you. To finish off, do you know how I can sort the timestamps within the grouped rows? The existing sort seems to order the whole list by its first entry.
11-24-2022
08:51 AM
I have the following data:
{
"remote_addr": "1.2.3.4",
"remote_user": "-",
"time_local": "24/Nov/2022:09:55:46 +0000",
"request": "POST /myService.svc HTTP/1.1",
"status": "200",
"request_length": "4581",
"body_bytes_sent": "4891",
"http_referer": "-",
"http_user_agent": "-",
"http_x_forward_for": "-",
"request_time": "0.576"
}
These are nginx access logs. I have a situation where certain requests are failing and then retrying every hour or so. I want to identify these as best I can. So...
Return results where status!=200
Group where:
remote_addr matches, and
request_length matches, and
status matches, and
body_bytes_sent matches (I'm making the presumption these would be our identical requests with same values for these)
Create a table of these results showing the time_local for each occurrence
Order time_local within each row (from earliest to latest)
This would leave rows where the above matches aren't made and I'd just want these listing on individual rows
This is beyond my capabilities and I got this (not very) far:
index=index source="/var/log/nginx/access.log"
| where status!=200
| stats list(time_local) by request_length
| sort - list(time_local)
This is sort of what I want but doesn't do any matching. It does group the time_local against the request_length which is how I'd like the output (but including the other fields for visibility). Also, the sort doesn't work as it seems to sort by the first record in each row and I want it to sort WITHIN the row itself.
This is the output:

request_length | list(time_local)
---|---
26562 | 24/Nov/2022:16:19:20 +0000 24/Nov/2022:14:16:45 +0000 24/Nov/2022:12:15:04 +0000 24/Nov/2022:11:15:01 +0000 24/Nov/2022:15:18:02 +0000
41977 | 24/Nov/2022:16:19:20 +0000 24/Nov/2022:14:16:45 +0000 24/Nov/2022:12:15:04 +0000 24/Nov/2022:11:15:01 +0000 24/Nov/2022:15:18:02 +0000 24/Nov/2022:13:15:06 +0000
But I want it to look more like this:

request_length | status | body_bytes_sent | remote_addr | time_local
---|---|---|---|---
26562 | 500 | 4899 | 1.2.3.4 | 24/Nov/2022:11:15:01 +0000 24/Nov/2022:12:15:04 +0000 24/Nov/2022:14:16:45 +0000 24/Nov/2022:15:18:02 +0000 24/Nov/2022:16:19:20 +0000
41977 | 500 | 5061 | 6.7.8.9 | 24/Nov/2022:11:15:01 +0000 24/Nov/2022:12:15:04 +0000 24/Nov/2022:13:15:06 +0000 24/Nov/2022:14:16:45 +0000 24/Nov/2022:15:18:02 +0000 24/Nov/2022:16:19:20 +0000
09-28-2021
08:47 AM
Thanks @PickleRick this did the trick on the chart 🙂
09-27-2021
02:49 AM
Thank you @richgalloway for the explanation. The stats look great, but it isn't charting properly and I'm not sure why. It seems to be putting the first count on the X-axis and then charting the other two counts.
09-24-2021
02:17 AM
So with this search...

index="myindex" source="/data/logs/log.json" "Calculation Complete"

...the results return a MessageBody field containing various different strings. I need to do the most simple regex in the world (*my string) and then count the messages matching each string, eventually charting them. I thought this would work, but it just returns 0 for them all:

index="myindex" source="/data/logs/log.json" "Calculation Complete"
| stats count(eval(MessageBody="*my string")) as My_String
        count(eval(MessageBody="*your string")) as Your_String
        count(eval(MessageBody="*other string")) as Other_String

Help 🙂
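A likely cause (offered as a sketch, not the accepted answer from the thread): inside `eval`, `=` is a literal string comparison, so `*` is not expanded as a wildcard and every comparison fails, giving 0. One option is the eval `like()` function, which uses SQL-style `%` wildcards:

```spl
index="myindex" source="/data/logs/log.json" "Calculation Complete"
| stats count(eval(like(MessageBody, "%my string"))) as My_String
        count(eval(like(MessageBody, "%your string"))) as Your_String
        count(eval(like(MessageBody, "%other string"))) as Other_String
```

Here `%` plays the role the original `*` was meant to play, matching any leading text before the target string.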
09-21-2021
07:43 AM
Hi, I'm after a query that I can alert with which shows whether one of my hosts hasn't logged a particular message in the last 5 mins. I have 4 known hosts and, ideally, wouldn't want a separate query/alert for each.

index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| stats count by host

This gives me a count of that specific event for each of my hosts. I want to know if one (or more) of these counts drops to zero in the last 5 mins. All the hostnames are known, so they can be written into the query. I've not really got close with this one, so some help would be appreciated. Thanks!
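A common pattern for this (a sketch; `host1`-`host4` are placeholders for the real hostnames, which the post says are known) is to append a zero-count row for every expected host, then keep the hosts whose maximum count is still zero:

```spl
index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| stats count by host
| append
    [| makeresults
     | eval host=split("host1,host2,host3,host4", ",")
     | mvexpand host
     | eval count=0
     | fields host count]
| stats max(count) as count by host
| where count=0
```

Hosts that logged the message get a real count that wins the `max()`, so only silent hosts survive the final `where`, and the alert can trigger on "number of results > 0".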
Labels
- alert condition