
How to retrieve failed and success percentages, within the mentioned times, for the IPs in our CSV file?

jackin
Path Finder

Point 1:

I need to use only the logs from specific timings to produce the output (timings like 7am to 8pm, weekdays only, for the dates 1st Jan to 17th Jan and 31st Jan)...

Point 2: We are receiving a log from the host (host=abc), and it has one interesting field named Ip_Address.
This field contains multiple IPs, and an event is indexed for each one every 5 minutes, like (Ping success for Ip_Address=10.10.101.10 OR Ping failed for Ip_Address=10.10.101.10).

 

FYI, if I am getting events like (1:00pm ping failed and 1:05pm ping success), we do not count that towards the failed percentage.
So basically, only when the ping fails more than once consecutively (e.g. 1:00pm ping failed and 1:05pm ping failed) is it considered a failure.
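The "only consecutive failures count" rule can be expressed in SPL with streamstats: sort each IP's events by time, carry the previous event's status forward, and flag a failure only when both the current and previous ping for that IP failed. A minimal sketch, assuming fields named status and ip_address have already been extracted:

```
| sort 0 ip_address _time
| streamstats window=1 current=f last(status) as prev_status by ip_address
| eval real_failure=if(status="failed" AND prev_status="failed", 1, 0)
```

Each event then carries real_failure=1 only for the second (and later) of back-to-back failures, which a later stats can sum per IP.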

I do not want data for all IP addresses. Only certain IP addresses are required, at the timings above: we need the failed and success percentages, within those times, for the IPs in our CSV file.

The final output should look like:

IP_Address    Failed%    Success%
1.1.1.1       0.5        99.5


jackin
Path Finder

Presently I am using the query below, but it gives percentages only for Ip_Addresses with failed logs; we need the failed and success percentages for all the Ip_Addresses in ping.csv.

 

index=os sourcetype=ping_log
    [ inputlookup Ping.csv ]
    ((earliest="01/04/2022:07:00:00" latest="01/07/2022:18:00:00") OR (earliest="01/10/2022:07:00:00" latest="01/14/2022:18:00:00") OR (earliest="01/17/2022:07:00:00" latest="01/21/2022:18:00:00") OR (earliest="01/31/2022:07:00:00" latest="01/31/2022:18:00:00"))
| eval date_hour=tonumber(strftime(_time, "%H"))
| search date_hour>=8 date_hour<=18
| rex field=_raw "Ping (?<status>\w+) for Ip_Address=(?<ip_address>\d+\.\d+\.\d+\.\d+)"
| eventstats count as total by ip_address
| sort 0 ip_address _time
| where status="failed"
| streamstats window=2 global=f range(_time) as time_difference by ip_address
| where time_difference=300
| stats count max(total) as total by ip_address
| eval failed=100*count/total
| eval success=100-failed
| table ip_address failed success
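The query above drops every IP that never failed, because `where status="failed"` discards all success events before the final stats. One way to keep all IPs from the lookup is to count consecutive failures with an eval flag instead of filtering, so success-only IPs still reach the stats. A sketch, assuming the lookup column is named Ip_Address and the raw events match the rex above:

```
index=os sourcetype=ping_log
    [ inputlookup Ping.csv | fields Ip_Address ]
    ((earliest="01/04/2022:07:00:00" latest="01/07/2022:18:00:00") OR (earliest="01/10/2022:07:00:00" latest="01/14/2022:18:00:00") OR (earliest="01/17/2022:07:00:00" latest="01/21/2022:18:00:00") OR (earliest="01/31/2022:07:00:00" latest="01/31/2022:18:00:00"))
| eval date_hour=tonumber(strftime(_time, "%H"))
| search date_hour>=8 date_hour<=18
| rex field=_raw "Ping (?<status>\w+) for Ip_Address=(?<ip_address>\d+\.\d+\.\d+\.\d+)"
| sort 0 ip_address _time
| streamstats window=1 current=f last(status) as prev_status by ip_address
| eval real_failure=if(status="failed" AND prev_status="failed", 1, 0)
| stats count as total sum(real_failure) as failed_count by ip_address
| eval failed=round(100*failed_count/total, 2)
| eval success=round(100-failed, 2)
| table ip_address failed success
```

Because no events are filtered out before the stats, an IP whose pings all succeeded appears with failed=0 and success=100, matching the desired output table.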

 
