Splunk Search

How to edit my search to display my results in a trendline?

jhampton3rd
Explorer

[screenshots: current daily results table, and the desired trendline chart]

I'm trying to graph this same type of trendline (2nd screenshot) in Splunk, with daily results from 12pm-12pm. I'm using the following search, which gives me my results per 24 hours (1st screenshot), but I'm unable to find a trendline command to produce results like the 2nd screenshot:

index=#### sourcetype=##### 
| lookup csirt_asset_list ip OUTPUT sublocation   
| search hasBeenMitigated=0 sublocation=*   
| stats dc(ip) as Total1 by sublocation
| join [ search index=#### sourcetype=##### pluginID<1000000 baseScore>0   
| lookup csirt_asset_list ip OUTPUT sublocation   
| search hasBeenMitigated=0 sublocation=* 
| stats dc(ip) as Total2 by sublocation] 
| join [search index=#### sourcetype=##### pluginID<1000000 baseScore>0   
| lookup csirt_asset_list ip OUTPUT sublocation   
| search hasBeenMitigated=0 sublocation=*   
| stats count as counted by baseScore, sublocation   
| fields + sublocation, baseScore, counted  
| sort -baseScore  
| lookup weight_lookup baseScore OUTPUT wmultiplier  
| eval aaa=(counted * wmultiplier) 
| eventstats sum(aaa) as test1, sum(counted) as test2
| eval bbb=(test1 / test2)  
| eval bbb=round(bbb,2)]
| eval cvss_overall_score=bbb*(Total2/Total1) 
| stats last(cvss_overall_score) by sublocation

Thanks for your help.


aaraneta_splunk
Splunk Employee

@jhampton3rd - Were you able to test out DalJeanis' solution? Did it work? If yes, please don't forget to resolve this post by clicking on "Accept". If you still need more help, please provide a comment with some feedback. Thanks!


jhampton3rd
Explorer

I'll be trying these suggestions tomorrow when I get back to the office. Thanks for everyone's input. Was away for a couple of weeks.


DalJeanis
Legend

I spent a long time analyzing your query. Good thing that the issue was not a rattlesnake, because I would probably be dead by now. 😉

Basically, the issue is that you don't have any _time associated with your summary results, so there's nothing there to timechart. If you want the results for past days, you're going to have to work a time factor into your query that extracts the data. You've probably had it windowed to a single day's data when you run it.

Here's a first crack at that. Notice that I've made each subquery extremely explicit about what it sends back to match against the original records.

index=#### sourcetype=##### hasBeenMitigated=0
| lookup csirt_asset_list ip OUTPUT sublocation 
| search sublocation=* 
| eval mydate=relative_date(_time,"-0d@d")
| stats dc(ip) as Total1 by sublocation mydate
| join 
    [ search index=#### sourcetype=##### hasBeenMitigated=0 pluginID<1000000 baseScore>0 
    | eval mydate=relative_date(_time,"-0d@d")
    | fields + ip mydate
    | lookup csirt_asset_list ip OUTPUT sublocation 
    | search sublocation=* 
    | stats dc(ip) as Total2 by sublocation mydate
    | fields + sublocation mydate Total2
    ] 
| join 
    [search index=#### sourcetype=##### hasBeenMitigated=0 pluginID<1000000 baseScore>0 
    | eval mydate=relative_date(_time,"-0d@d")
    | fields + ip baseScore mydate
    | lookup csirt_asset_list ip OUTPUT sublocation 
    | search sublocation=* 
    | stats count as counted by baseScore, sublocation, mydate 
    | fields + sublocation, baseScore, counted, mydate 
    | sort 0 -baseScore 
    | lookup weight_lookup baseScore OUTPUT wmultiplier 
    | eval aaa=(counted * wmultiplier) 
    | eventstats sum(aaa) as test1, sum(counted) as test2 by mydate
    | eval bbb=(test1 / test2) 
    | eval bbb=round(bbb,2)
    | fields + sublocation mydate bbb 
    ] 
| eval cvss_overall_score=bbb*(Total2/Total1) 
| stats last(cvss_overall_score) as cvss_overall_score by sublocation mydate
| eval _time = mydate

If I understood the original query correctly, the last() was always operating against a single summary value, so first() or avg() would have given the same results. If so, that is still the case here.


Edited to use sort 0 rather than plain sort, in case the subsearch returns more records than sort's default limit.
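To actually render the trendline once _time is set, the final table can be piped into timechart and displayed as a line chart. A minimal sketch, assuming the field names from the query above:

| eval _time = mydate
| timechart span=1d last(cvss_overall_score) by sublocation

If you need the 12pm-to-12pm day boundary from the original question, one option (an untested assumption on my part) is to snap to noon instead of midnight, e.g. relative_date(_time,"-12h@d")+43200 in place of relative_date(_time,"-0d@d").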


DalJeanis
Legend

On an ongoing basis, it might be better to take those subqueries and send their daily results out to a csv file, then read the file back in every day and append just one day's new calculations.
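A sketch of what that could look like, assuming a hypothetical lookup file named daily_cvss.csv (the file name is just an example): schedule the heavy query to run once a day over the prior day's window and append its summary rows to the file:

... daily query, ending in the per-sublocation stats ...
| outputlookup append=true daily_cvss.csv

The dashboard search then only has to read the accumulated file:

| inputlookup daily_cvss.csv
| eval _time = mydate

and chart it as usual. This assumes your Splunk version supports the append option on outputlookup.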

Honestly, if (as I suspect) bbb is a single value for each day, then at the end of the second subquery you should replace everything from | eventstats on with

| stats sum(aaa) as test1, sum(counted) as test2 by mydate
| eval bbb=round((test1/test2),2)
| fields + mydate, bbb
]

jhampton3rd
Explorer

Thanks DalJeanis,

I think I'm going to try sending the results to a .csv file. I believe this will be the easiest approach. Just trying to get that to work since this will be the first time I've done this. Thanks for the suggestions though. Still wasn't able to get the query to produce a trendline.


DalJeanis
Legend

So, if I understand your code, hasBeenMitigated is a field on the events, not on the lookup table. If so, I'd move that filter up into each initial search rather than running lookups on records, in both the main search and the subsearches, that you don't care about and are going to discard.

It also seems to me that the second join is going to end up with a single value for bbb, which the subsequent eval statement will then apply to every record. Have I interpreted that correctly?
