Hi all,

Using Splunk Cloud, I'm trying to measure the time difference between when a message is received from a sender and when it is delivered to a recipient. I have a lookup containing all the message IDs, about 400k entries in total. For each message ID, I want to find how long the server took to process the message. Splunk truncates the subsearch to 10,000 results; the search below produced this warning:

Subsearch produced 423340 results, truncating to maxout 10000.

index="stage" (host="msgsrv*" source="/var/log/messaging/msg.log" [| inputlookup sept-messages | fields id ]
    (
        (event=msg_rcvd AND "tag=""body""") OR (event=msg_sent AND "tag=""body""") OR (event=msg_sent AND "tag=""result""")
    )
)
| reverse
| eval msg_id = id
| eval msg_rcvd_time=if(event == "msg_rcvd", _time, 999999999999.999)
| eval msg_sent_time=if(event == "msg_sent", _time, 999999999999.999)
| stats values(msg_rcvd_time) AS ins values(msg_sent_time) AS outs values(_time) AS times values(_raw) AS raws values(to_user) AS to_users values(from_user) AS from_users by msg_id, to_user
| eval first_msg_rcvd_time = mvindex(mvsort(ins), 0)
| eval first_msg_sent_time = mvindex(mvsort(outs), 0)
| eval delta = first_msg_sent_time - first_msg_rcvd_time

How can the full lookup be processed despite the subsearch limit, and does anyone have suggestions for improving the query above?
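One idea I've been considering, assuming sept-messages is also configured as a lookup definition (not just an uploaded lookup file), is to drop the subsearch and filter with the lookup command instead, since lookup enriches events at search time and is not subject to the 10,000-result subsearch maxout. A rough sketch of just the filtering portion (in_lookup is simply an alias I made up to mark matching events; the reverse/eval/stats pipeline above would follow unchanged):

index="stage" host="msgsrv*" source="/var/log/messaging/msg.log"
    ((event=msg_rcvd AND "tag=""body""") OR (event=msg_sent AND "tag=""body""") OR (event=msg_sent AND "tag=""result"""))
| lookup sept-messages id OUTPUT id AS in_lookup
| where isnotnull(in_lookup)

My understanding is that this matches each event's id against the lookup row by row instead of expanding all 400k IDs into the base search, so the truncation warning should never appear, but I'd appreciate confirmation that this is the right pattern.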