Splunk Search

How would I prepare this availability calculator?

jerinvarghese
Communicator

Hi Team,

I need help preparing an availability calculator.

 

The graph below shows the requirement:

target.png

Current output from the code below:

DESCRIPTION  downtime  Time
QIT-LAG      00:00:06  2022-07-31
QIT-LAG      00:00:09  2022-07-29
QIT-LAG      00:00:08  2022-07-29
QIT-LAG      00:00:10  2022-07-29

 

Current manual steps:

1. Extract the above table into Excel,

2. convert all durations to seconds,

3. group them by day,

4. compute the percentage loss out of 86400 seconds (24*60*60) for each day and graph it.
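For reference, the manual Excel steps above amount to a small aggregation. Here is a rough Python sketch of the same calculation, outside Splunk, assuming the duration strings are always in HH:MM:SS form (the function names are just for illustration):

```python
from collections import defaultdict

def hms_to_seconds(hms):
    """Convert an HH:MM:SS duration string to seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

def daily_pct_loss(rows):
    """Group (duration, date) rows by date and return the % of 86400 s lost per day."""
    per_day = defaultdict(int)
    for duration, day in rows:
        per_day[day] += hms_to_seconds(duration)
    return {day: total * 100 / 86400 for day, total in per_day.items()}

rows = [
    ("00:00:06", "2022-07-31"),
    ("00:00:09", "2022-07-29"),
    ("00:00:08", "2022-07-29"),
    ("00:00:10", "2022-07-29"),
]
# 2022-07-29 lost 9+8+10 = 27 s, i.e. 0.03125 % of the day
loss = daily_pct_loss(rows)
```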


CODE: 

index=opennms DESCRIPTION="QIT-LAG"
| transaction nodelabel startswith=eval(Status="DOWN") endswith=eval(Status="UP") keepevicted=true
| eval downtime=if(closed_txn=1,duration,null)
| eval downtime=tostring(downtime, "duration")
| fillnull value="" downtime
| eval Status=if(closed_txn=1,"UP","DOWN")
| rex field=downtime "(?P<downtime>[^.]+)"
| rename _time as Time
| fieldformat Time=strftime(Time,"%Y-%m-%d")
| table DESCRIPTION, downtime, Time

 

 

 

Challenge: 

How do I convert the current downtime values into seconds, sum them per day, and prepare a percentage-based graph?


Thanks in advance for your guidance and help. 

 

 

 

1 Solution

richgalloway
SplunkTrust

You've done most of the work already. Downtime was in seconds before it was converted to a string. Use the stats command to group results by day, then use eval to compute the percentage loss.

 

index=opennms DESCRIPTION="QIT-LAG"
| transaction nodelabel startswith=eval(Status="DOWN") endswith=eval(Status="UP") keepevicted=true
| eval downtime=if(closed_txn=1,duration,null)
| fillnull value="" downtime
| rename _time as Time
| fieldformat Time=strftime(Time,"%Y-%m-%d")
| stats values(DESCRIPTION) as DESCRIPTION, sum(downtime) as total_downtime by Time
| eval pct_loss = (total_downtime * 100) / 86400
| table DESCRIPTION, total_downtime, Time, pct_loss

 

---
If this reply helps you, Karma would be appreciated.



jerinvarghese
Communicator

@richgalloway , thanks a ton for that suggestion; it worked to an extent. 

But there is still a challenge. I have attached the output.

The dates are not being grouped.

output.JPG

Expected output : 

2022-07-29  QIT-LAG  99
2022-07-31  QIT-LAG  99
2022-07-31  QIT-ATT
2022-08-02  QIT-ATT
2022-08-02  QIT-LAG  98
2022-08-03  QIT-LAG  99
2022-08-04  QIT-LAG  97

 

Also, one more challenge: how can I remove the blank fields?

 


richgalloway
SplunkTrust

I'm not sure why that didn't work.  Let's try an alternative.

index=opennms DESCRIPTION="QIT-LAG"
| transaction nodelabel startswith=eval(Status="DOWN") endswith=eval(Status="UP") keepevicted=true
```Omit "blank" results```
| where closed_txn=1
| bin span=1d _time
| stats values(DESCRIPTION) as DESCRIPTION, sum(duration) as total_downtime by _time
| eval pct_loss = (total_downtime * 100) / 86400
| rename _time as Time
| fieldformat Time=strftime(Time,"%Y-%m-%d")
| table DESCRIPTION, total_downtime, Time, pct_loss
---
If this reply helps you, Karma would be appreciated.
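For readers who want to sanity-check the day-bucketing arithmetic outside Splunk, here is a rough Python equivalent of the `bin span=1d` / `stats sum` / `eval` steps above, assuming each closed DOWN-to-UP transaction is represented by an epoch start time and a duration in seconds (the data below is made up for illustration):

```python
from collections import defaultdict
from datetime import datetime, timezone

def daily_pct_loss(transactions):
    """transactions: list of (epoch_time, duration_seconds) for closed transactions.
    Buckets durations by UTC calendar day (like `bin span=1d _time`) and returns
    {day: percentage of 86400 s lost}, like the stats/eval steps in the search."""
    per_day = defaultdict(float)
    for epoch, duration in transactions:
        day = datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%d")
        per_day[day] += duration
    return {day: total * 100 / 86400 for day, total in per_day.items()}

txns = [
    (1659052800, 9),   # 2022-07-29 00:00:00 UTC, down 9 s
    (1659056400, 18),  # 2022-07-29 01:00:00 UTC, down 18 s (same bucket)
    (1659225600, 6),   # 2022-07-31 00:00:00 UTC, down 6 s
]
loss = daily_pct_loss(txns)
```

Availability per day would then simply be `100 - loss[day]`, which matches the manual Excel process described in the question.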