I currently have 2 tables:
date, common-granularity, groupId-1, value-1
date, common-granularity, groupId-2, value-2
I want to do some intermediate stats operations such as:
|stats latest(value-1) as latest_value1 by date, common-granularity, groupId-1
|stats count(latest_value1) as count_value1, sum(latest_value1) as sum_value1 by date, common-granularity
|stats count(value-2) as count_value2 by date, common-granularity
Ideally, I want to run the stats operations on Table-1 and Table-2 separately, and then combine the results on date, common-granularity.
For example, the expected final result is like:
date, common-granularity, intermediate_result_from_table_1(count_value1, sum_value1), intermediate_result_from_table_2(count_value2)
I know |join with a subsearch can achieve this, but I am concerned about the |join subsearch row limits, so I am looking for other possible approaches.
I answered a similar query a few days ago. Here is my answer: you can use multiple aggregate functions in a single stats command.
You can use one search for both aggregations, which avoids these optimization issues.
I prefer to write it like this.
index=indexname | stats dc(yourfield) count(eval(anotherfield="fieldvalue")) by otherfield1 otherfield2
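As a concrete illustration of the multiple-aggregates pattern (all index, sourcetype, and field names here are made-up placeholders, not from your data), one stats pass can produce several columns at once:

index=web_logs sourcetype=access_combined
| stats dc(clientip) as unique_clients count(eval(status=404)) as not_found_count avg(bytes) as avg_bytes by host

Each aggregate function becomes its own output column, split by the same "by" fields, so no join is needed to line them up.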
@zztc2004 you can explore the multisearch command or the union command, which will not run into subsearch limitations depending on your use case. However, for the community to assist you better, you might have to add some sample values (mock/anonymized data) for the two events you want to correlate.
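A rough sketch of the union approach, assuming the two tables live in hypothetical indexes idx1/idx2 and using underscored field names (SPL field names containing hyphens need quoting, so renaming them is simpler); treat this as a starting point, not a tested query:

| union
    [ search index=idx1
      | stats latest(value1) as latest_value1 by date common_granularity groupId1
      | stats count(latest_value1) as count_value1 sum(latest_value1) as sum_value1 by date common_granularity ]
    [ search index=idx2
      | stats count(value2) as count_value2 by date common_granularity ]
| stats values(count_value1) as count_value1 values(sum_value1) as sum_value1 values(count_value2) as count_value2 by date common_granularity

Each branch does its own intermediate stats, and the final stats collapses the two result sets onto one row per date and common_granularity, without the |join row limits.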