In our dashboards we rely on lookups, which are slow, so we need an alternative approach such as a summary index or the KV store.
The lookup volume is very high.
We tried to go with a summary index populated via subsearches, but there is a limit where subsearch results beyond 50K get skipped, so we were not able to go with a summary index.
Are there any other possible approaches?
When using the KV store, be sure to consider accelerated fields; they can significantly improve lookup performance if set up correctly.
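As a rough sketch of what that looks like, here is a minimal collections.conf entry with an accelerated field, plus the matching transforms.conf lookup definition. All names (mycollection, my_kv_lookup, the field names) are illustrative, not from the original post:

```
# collections.conf -- define the collection and its field types
[mycollection]
field.user = string
field.count = number
# accelerate queries that filter on "user" (1 = ascending index)
accelerated_fields.user_accel = {"user": 1}

# transforms.conf -- expose the collection as a lookup
[my_kv_lookup]
external_type = kvstore
collection = mycollection
fields_list = _key, user, count
```

The accelerated_fields setting creates a MongoDB-style index on the collection, so `where` filters on that field do not have to scan every record.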
One way is to create a KV store collection. Review this article, which also links to the Splunk Docs: https://community.splunk.com/t5/Knowledge-Management/How-do-I-view-use-my-Splunk-KV-store-collection...
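Once the collection and its lookup definition exist, a dashboard panel can read from it directly with inputlookup; the `where` clause is pushed down to the KV store, which is where accelerated fields pay off. The lookup and field names here are placeholders:

```
| inputlookup my_kv_lookup where user="alice"
| table user, count
```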
The quicker, easier, lazier (and not best) way is to build reports that generate data locally on the search head into a CSV file using | outputlookup or | outputcsv with a file name. It would look like this:
index=whatever "your other code here" | outputlookup mydata.csv
Then on your dashboard you read the CSV file back and sort, table, or count as needed.
So for a table view I do:
| inputlookup mydata.csv | table field1, field3, field5
If you are creating large data files, you may want to use | outputcsv / | inputcsv instead, since CSV lookup files created with outputlookup are replicated to the search peers as part of the knowledge bundle, while outputcsv files stay local to the search head. There are downsides to inputcsv as well, such as the file only being readable on the search head that wrote it.
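For the outputcsv variant, the pattern mirrors the outputlookup one above; the first search writes the file, and a later dashboard search reads it back. The file name mydata_local is illustrative:

```
index=whatever "your other code here" | outputcsv mydata_local

| inputcsv mydata_local | stats count by field1
```

Note that outputcsv writes under $SPLUNK_HOME/var/run/splunk/csv on the search head, which is why it avoids bundle replication.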
Good luck! And don't forget Karma if this helped you!!!