
Need help with building faster searches - How to make dashboards efficient?

dsenapaty
Explorer

Hello All,

I am new to Splunk and am trying to create an executive-level dashboard for a few of our enterprise applications. These applications produce very large volumes of data (around 100 GB/day), and my normal searches over the indexed events take forever and sometimes time out as well. I need some guidance on how I could achieve faster/optimised searches.

 

I tried using tstats, but I am running into problems because the data is not fully structured: I am not able to run aggregate functions on the response times, as those values either have the string "ms" appended or are not in key=value form. See the examples below.

 

Data 1:

2022-09-11 22:00:59,998 INFO -(Success:true)-(Validation:true)-(GUID:68D74EBE-CE3B-7508-6028-CBE1DFA90F8A)-(REQ_RCVD:2022-09-11T22:00:59.051)-(RES_SENT:2022-09-11T22:00:59.989)-(SIZE:2 KB)-(RespSent_TT:0ms)-(Actual_TT:938ms)-(DB_TT:9ms)-(Total_TT:947ms)-(AppServer_TT:937ms)

 

Data 2: 

09/27/2022 16:34:57:998|101.123.456.789|106|1|C97EC2DA10C64F64A83C87AEEC1CDDBE703A546E1B554AD1|POST|/api/v1/resources/ods-passthrough|200|97

Data 2 fields:

date_time|client_ip|appid|clientid|guid|http_method|uri_path|http_status_code|response_time

Need help/suggestions on how to achieve faster searches.


bowesmana
SplunkTrust

Lots of ways to make things faster and more efficient. If you're looking to use that ms timing counter as a number, then you should extract it as a field.

tstats will not give you data unless you're taking it from a data model, in which case you will no doubt have extracted fields by virtue of having passed the data through the model.
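For example, if these events were accelerated in a data model, a tstats aggregation could run off the summaries instead of the raw events. This is only a sketch - the "AppPerf" data model and its Total_TT field are hypothetical names, not something that exists out of the box:

| tstats summariesonly=true avg(AppPerf.Total_TT) as avg_total_ms
    from datamodel=AppPerf
    where AppPerf.uri_path="/api/v1/resources/ods-passthrough"
    by _time span=1h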

Efficient searching is about taking the minimum amount of data to satisfy the search, so give as many restrictive criteria as possible, then aggregate to reduce the data volume as much as possible.
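As a sketch (index, sourcetype and field names here are placeholders for whatever your environment uses), filter as tightly as you can before aggregating:

index=app_logs sourcetype=api_access uri_path="/api/v1/resources/ods-passthrough" earliest=-24h
| stats count avg(response_time) as avg_ms perc95(response_time) as p95_ms by http_status_code

The restrictive terms cut down how many events Splunk has to read off disk, and the stats reduces what travels back to the search head.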

Dashboard efficiency can be achieved by using base searches:

https://docs.splunk.com/Documentation/Splunk/9.0.1/Viz/Savedsearches
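In Simple XML that looks roughly like the sketch below (placeholder names again): the base search runs once, and each panel post-processes its results instead of re-running the full search.

<dashboard>
  <label>App performance (sketch)</label>
  <!-- Base search: a transforming search that runs once -->
  <search id="base">
    <query>index=app_logs sourcetype=api_access | stats count avg(response_time) as avg_ms by uri_path</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <!-- Post-process search: operates on the base results only -->
        <search base="base">
          <query>| sort 10 -avg_ms</query>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>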

Another technique is to have a saved search that runs frequently, performs some aggregation of the large volume of data, and writes that aggregated data back to a summary index. Your dashboard can then search the already-created aggregations instead of the raw events.
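A sketch of that pattern, with hypothetical index and field names - a scheduled search rolls up each hour of data and writes the result to a summary index with collect:

index=app_logs sourcetype=api_access earliest=-1h@h latest=@h
| stats count as events sum(response_time) as total_ms by uri_path
| collect index=summary_app marker="report=api_hourly"

The dashboard then reads the small summary index and recomputes the average from the stored counts and sums:

index=summary_app report=api_hourly
| stats sum(events) as events sum(total_ms) as total_ms by uri_path
| eval avg_ms=round(total_ms/events, 1)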

As for extracting those ms values at search time, here's an example that will extract all the (*_TT:NNms) fields from your example line:

| makeresults 
| eval _raw="2022-09-11 22:00:59,998 INFO -(Success:true)-(Validation:true)-(GUID:68D74EBE-CE3B-7508-6028-CBE1DFA90F8A)-(REQ_RCVD:2022-09-11T22:00:59.051)-(RES_SENT:2022-09-11T22:00:59.989)-(SIZE:2 KB)-(RespSent_TT:0ms)-(Actual_TT:938ms)-(DB_TT:9ms)-(Total_TT:947ms)-(AppServer_TT:937ms)"
| rex max_match=0 "\((?<fn>\w+)_TT:(?<tt>\d+)ms\)"
| foreach 0 1 2 3 4 [ eval f=mvindex(fn, <<FIELD>>), tt_{f}=mvindex(tt, <<FIELD>>) ]
| fields - fn tt f

The first two lines set up your example, and the last three lines extract those numbers and create fields named tt_XX, where XX is the name of the time-taken field and the value is the number of milliseconds with the "ms" suffix stripped.
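Your second, pipe-delimited format can be handled the same way at search time. Here's a sketch against your sample event (for production you'd more likely set this up once as a delimiter-based extraction in props/transforms):

| makeresults
| eval _raw="09/27/2022 16:34:57:998|101.123.456.789|106|1|C97EC2DA10C64F64A83C87AEEC1CDDBE703A546E1B554AD1|POST|/api/v1/resources/ods-passthrough|200|97"
| rex "^(?<date_time>[^|]+)\|(?<client_ip>[^|]+)\|(?<appid>[^|]+)\|(?<clientid>[^|]+)\|(?<guid>[^|]+)\|(?<http_method>[^|]+)\|(?<uri_path>[^|]+)\|(?<http_status_code>[^|]+)\|(?<response_time>\d+)$"
| stats avg(response_time) as avg_ms by uri_path

Since response_time is already a bare number in that format, the stats works directly once the field is extracted.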

 
