Splunk Search

Time Range Question

zacksoft_wf
Contributor

I suspect this saved search may not be properly engineered, and may be very taxing, because of how its time range is specified.

This Saved search is basically responsible for populating a lookup. 
It ends with | outputlookup <lookup name>
The range of the scheduled saved search is defined as,
 earliest = -7d@h
latest = now


In the saved search there is logic, added before the final command, that filters events based on the last 90 days.
The search ends like this:
..........
..........
...........
| stats
min(firstTime) as firstTime
, max(lastTime) as lastTime
by
dest
, process
, process_path
, SHA256_Hash
, sourcetype
| where
lastTime > relative_time(now(), "-90d")
| outputlookup LookUpName

==================================
My question is: how would this search behave? Would its scan range cover the last 90 days, or will it limit itself to 7 days? Which time range takes precedence?

1 Solution

gcusello
SplunkTrust

Hi @zacksoft_wf ,

Unless you configure the time limits in the main search with earliest and latest, the timeframe is as defined in the saved search schedule.
In your case it is 7 days, so the 90-day check is useless.
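If the intent is for the 90-day filter to actually prune anything, one option is to widen the scan window itself, either in the saved search schedule (earliest = -90d@h) or inline in the search. A minimal sketch, assuming a tstats search over the Endpoint.Processes data model (inline earliest/latest in the tstats where clause override the saved search's time range):

```
| tstats summariesonly=true min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where nodename=Processes earliest=-90d@h latest=now
    by Processes.dest Processes.process Processes.SHA256_Hash
| `drop_dm_object_name("Processes")`
| where lastTime > relative_time(now(), "-90d")
| outputlookup LookUpName
```

With a 90-day scan window the where clause becomes a genuine cutoff rather than a no-op, at the cost of a much heavier scheduled search.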

Only one question: I suppose "lastTime" is a field in your lookup — could it have values older than 7 days, or is it calculated from _time?

It might be different if you used a timed lookup with the append = true option, because in that case you could have values older than 90 days.

I suggest you review your search design.

Ciao.

Giuseppe


gcusello
SplunkTrust

Hi @zacksoft_wf ,

Good for you, see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉

zacksoft_wf
Contributor

@gcusello   I fetch the process and time info from the Endpoint data model, and it could have values greater than 7d.
This is the SPL (trying to find anomalous processes):

| tstats `summariesonly` 
    count min(_time) as firstTime
    , max(_time) as lastTime
    from 
    datamodel=Endpoint.Processes
    where
    nodename=Processes
   
    NOT ( 
    [| inputlookup myLookUp_7d 
    | table SHA256_Hash 
    | rename SHA256_Hash AS Processes.SHA256_Hash]
    )
    by 
    Processes.dest 
    , Processes.process 
    , Processes.process_path 
    , Processes.SHA256_Hash 
    , sourcetype 
| `drop_dm_object_name("Processes")` 
| eval 
    process_path=if(isnull(process_path),"?",process_path) 
| inputlookup append=T myLookUp_1d
| stats 
    min(firstTime) as firstTime
    , max(lastTime) as lastTime 
    by 
    dest
    , process
    , process_path
    , SHA256_Hash 
    , sourcetype
| where 
    lastTime > relative_time(now(), "-90d") 
| outputlookup myLookUp_1d

Will this search scan for 90 days then ?


gcusello
SplunkTrust

Hi @zacksoft_wf,

as I said, lastTime is the max of _time, so with a 7-day time range the newly scanned events can never fail the 90-day condition; only rows carried in from the lookup via append=t could be old enough to be dropped. Try a period larger than 90 days and see if the results change.
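One way to see which window actually applies is to inspect the effective search boundaries with the addinfo command, which attaches the search's time bounds (info_min_time, info_max_time) to each result. A minimal sketch, reusing the data model from the search above:

```
| tstats `summariesonly` count
    from datamodel=Endpoint.Processes
    where nodename=Processes
    by Processes.dest
| addinfo
| eval scan_days = round((info_max_time - info_min_time) / 86400, 1)
| table scan_days info_min_time info_max_time
```

If the saved search runs with earliest=-7d@h, scan_days will show roughly 7, confirming that the schedule's window (not the 90-day where clause) bounds what is scanned.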

In addition there are macros involved, so it isn't so easy to read.

Ciao.

Giuseppe
