Knowledge Management

Lookups in dashboards are slow; need an alternative approach such as a summary index or KV store

vijaysri
Contributor

Hi,

In our dashboards we use lookups, which are slow, so we need an alternative approach such as a summary index or KV store.

The lookup volume is very high.

We tried to go with a summary index populated via subsearches, but there is a limit: subsearch results beyond 50K are skipped, so we were not able to go with the summary index.

Is there any other possible way?

0 Karma

bowesmana
Champion

When using a KV store, be sure to consider using accelerated fields:

https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usingconfigurationfiles/#...

They can significantly improve performance if set up correctly.
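As a rough sketch of what that can look like in collections.conf (the collection and field names here are made up for illustration):

```ini
# collections.conf on the search head
# hypothetical collection "my_dashboard_data"
[my_dashboard_data]
field.host = string
field.count = number

# accelerated_fields defines an index on the collection,
# much like a database index; queries that filter on "host"
# can then avoid a full collection scan
accelerated_fields.host_accel = {"host": 1}
```

The acceleration only helps queries that actually filter or sort on the accelerated field, so it is worth matching the accelerated fields to how the dashboard searches the lookup.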

 

0 Karma

Funderburg78
Path Finder

One way is to create a KV store. Review this article, which also links to the Splunk Docs: https://community.splunk.com/t5/Knowledge-Management/How-do-I-view-use-my-Splunk-KV-store-collection...
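In outline (all names below are hypothetical), a KV store lookup needs a collection in collections.conf plus a lookup definition in transforms.conf, and it can then be populated with | outputlookup from a scheduled search:

```ini
# collections.conf -- define the collection
[my_dashboard_data]
field.host = string
field.count = number

# transforms.conf -- expose the collection as a lookup
[my_dashboard_lookup]
external_type = kvstore
collection = my_dashboard_data
fields_list = _key, host, count
```

A scheduled search could then write to it, and the dashboard read from it:

```spl
index=whatever | stats count by host | outputlookup my_dashboard_lookup

| inputlookup my_dashboard_lookup | table host count
```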

 

The quicker, easier, lazier (and not best) way is to build reports which generate data locally on the search head to a CSV file, using | outputlookup or | outputcsv with a file name. It would look like this:

index=whatever "your other code here" | outputlookup mydata.csv

Then on your dashboard you call the CSV file data and sort, table, or count it as needed.

So for a table view I do:

| inputlookup mydata.csv | table field1 field3 field5

If you are creating large data files you may want to use outputcsv/inputcsv instead, since those CSV files stay local to the search head and are not replicated across distributed systems. There are downsides to inputcsv also!
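For the outputcsv/inputcsv variant, the equivalent would look something like this (reusing the hypothetical field names from above; outputcsv writes under $SPLUNK_HOME/var/run/splunk/csv on the search head):

```spl
index=whatever "your other code here" | outputcsv mydata

| inputcsv mydata | table field1 field3 field5
```

Note that inputcsv only sees files on the local search head, which is exactly why it avoids replication but also why it won't work if the dashboard can run on a different search head than the report.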

 

https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Inputlookup

https://docs.splunk.com/Documentation/Splunk/8.1.3/SearchReference/Inputcsv

https://community.splunk.com/t5/Archive/what-is-the-difference-between-inputcsv-and-inputlookup/m-p/...

 

Good luck!  And don't forget Karma if this helped you!!!

 

 

0 Karma