Splunk ITSI

ITSI not supporting earliest and latest in KPI searches?

meleperuma
Explorer

I experienced this while working on a Splunk ITSI Cloud project. The client wanted to see if there had been a drop in a certain type of event in the last 1 hour compared to the average of the same hour 1 week and 2 weeks back.

Apparently, ITSI does not support KPI base searches with earliest and latest statements (time modifiers).

If you create a KPI like that, you do not get any error, but the KPI summary is not populated with the expected values; the alert_value in the itsi_summary index is just N/A.
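
A quick way to confirm the symptom, assuming a KPI named 'Search Drops' as in the example later in this post (alert_value is the field mentioned above; alert_level is also present in the itsi_summary events I've seen):

    index=itsi_summary kpi="Search Drops" 
    | table _time kpi alert_value alert_level 
    | sort - _time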

So if you have a search like this...

index=logs sourcetype="searchlogs" Code="*"  earliest="-1h" latest=now
| top name countfield=recent limit=0 showperc=0 
| join name type=outer 
    [ search index=logs sourcetype="searchlogs" Code="*"   earliest="-169h" latest="-168h" 
    | top name countfield=distant1 limit=0 showperc=0 ] 
| join name type=outer 
    [ search index=logs sourcetype="searchlogs" Code="*"   earliest="-337h" latest="-336h" 
    | top name countfield=distant2 limit=0 showperc=0 ] 
| fillnull distant1 distant2 recent 
| eval avg_searches=(distant1+distant2)/2 
| eval search_dif=recent-avg_searches 
| eval search_dif=abs(if(search_dif>0,0,search_dif)) 
| where search_dif>0

It would not create any values for the KPI. On the Service Analyzer the service would show up as "N/A" in grey, and when you click it and go into the service detail, the KPI would not show any values and would display "NaN". If you search the itsi_summary index, there would be no values for alert_value for that KPI.

Instead, if you remove only the earliest and latest modifiers from the main (outer) search and pick the search window from the selection list when creating the KPI search (or KPI base search), as follows:

    index=logs sourcetype="searchlogs" Code="*" 
    | top name countfield=recent limit=0 showperc=0 
    | join name type=outer 
        [ search index=logs sourcetype="searchlogs" Code="*"   earliest="-169h" latest="-168h" 
        | top name countfield=distant1 limit=0 showperc=0 ] 
    | join name type=outer 
        [ search index=logs sourcetype="searchlogs" Code="*"   earliest="-337h" latest="-336h" 
        | top name countfield=distant2 limit=0 showperc=0 ] 
    | fillnull distant1 distant2 recent 
    | eval avg_searches=(distant1+distant2)/2 
    | eval search_dif=recent-avg_searches 
    | eval search_dif=abs(if(search_dif>0,0,search_dif)) 
    | where search_dif>0

It works!

So what if you want to search over a different time range than the options in the 'Calculation Window' drop-down (last 1 min, 5 min, 15 min, and 24 hours)? For example, the last 1 hour?

  1. First, save the KPI with a 'Calculation Window' from the drop-down, and choose a short 'KPI Search Schedule' such as every 1 min or 5 min.
  2. If it's a KPI base search, you need to assign it to a service.
  3. Remember the KPI name assigned to the service, let's say 'Search Drops'.
  4. Search: index="itsi_summary" kpi="Search Drops"
  5. Look for the value in the 'search_name' field, something like "Indicator - Shared - 5ae27a58892b3fcfba2ec5ed - ITSI Search".
  6. Open Settings > Searches, Reports, and Alerts.
  7. Look for that search and click Edit Search.
  8. There you will see the fields 'Earliest Time' and 'Latest Time'. Change the values there to match the base search time range using relative time abbreviations. If you initially added a time lag to the KPI search (to compensate for indexing lag), remember to add that to the values as well, e.g. last 1 hour with a 120 s lag: Earliest Time = -3720s, Latest Time = -120s. (See the sketch after this list if you want to check these values from SPL instead of the UI.)
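
If you prefer SPL to clicking through the UI, here is a minimal sketch for inspecting the generated saved search's time range via Splunk's REST endpoint (the title is the example from step 5; dispatch.earliest_time and dispatch.latest_time are the settings behind 'Earliest Time' and 'Latest Time'):

    | rest /servicesNS/-/-/saved/searches splunk_server=local 
    | search title="Indicator - Shared - 5ae27a58892b3fcfba2ec5ed - ITSI Search" 
    | table title cron_schedule dispatch.earliest_time dispatch.latest_time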

I would like to hear if anyone else has experienced this. If so, I'd like to make a feature request to make the calculation window customizable and to document this limitation of KPI base searches.

MrWhoztheBoss
Explorer

Perfect. Just to fast-track the process of getting service KPI IDs, we can use "service_kpi_lookup" to find the kpi_id and search directly with that ID in saved searches to spot the KPI base search.

| inputlookup service_kpi_lookup | search title="your_service_name"
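
Once you have the kpi_id from the lookup, a minimal sketch of pulling that KPI's values and its generating search straight from the summary index (the id below is a placeholder; the exact kpiid field name may vary by ITSI version):

    index=itsi_summary kpiid="<your_kpi_id>" 
    | stats latest(search_name) AS search_name, latest(alert_value) AS alert_value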

 
