Along the same lines as @MuS' suggestion, you could easily schedule a search that does an outputlookup from your "lookup search" into a CSV file, and then use that lookup file as-is.
There are good performance reasons to do this. Remember, if you have more than one indexer, each Splunk instance only has a fraction of the data local to it. There is no guarantee that the data needed to build the lookup lives on the same indexer as the data you want to run the lookup against. To "use the results of a search as a lookup", you need to:
Run the first search to "make the lookup"
Distribute its results to all of the indexers
Run the "main search" which uses the lookup
A subsearch can often do this, but usually at a very high cost.
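For comparison, the subsearch approach typically looks something like the sketch below. The index, sourcetype, and field names are hypothetical; the point is that the bracketed subsearch re-runs on every execution and is subject to subsearch result limits:

    index=firewall sourcetype=fw_traffic
    | join type=left src_ip
        [ search index=vpn sourcetype=vpn_gateway
          | rename assigned_ip AS src_ip
          | fields src_ip user ]

Every run pays the full cost of the inner search, which is exactly what the scheduled-lookup approach avoids.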
For what you've described so far, what you want is a time-based lookup. See http://docs.splunk.com/Documentation/Splunk/6.2.2/Knowledge/Addfieldsfromexternaldatasources#Set_up_a_time-bounded_lookup for more information, but the idea is that Splunk can take lookup files that have time as an attribute and automatically figure out the "right" record from the lookup to use with a given event based on the time in the event.
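As a sketch, the time-bounded behavior is configured in transforms.conf. The stanza name and filename below are hypothetical, but time_field, time_format, and max_offset_secs are the documented settings for time-bounded lookups:

    [vpn_sessions]
    filename        = vpn_sessions.csv
    time_field      = _time
    time_format     = %s
    max_offset_secs = 600

With this in place, Splunk matches each event against the lookup row whose timestamp is closest to (and within max_offset_secs of) the event's own time.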
Conveniently, the three steps above are almost exactly what happens when you run a scheduled search that uses outputlookup to create a lookup file. That lookup file becomes part of the search bundle, which is distributed to all of the indexers, so each one gets a local copy.
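A scheduled search that builds such a lookup might look like the following sketch (run every few minutes; the index, sourcetype, and field names are assumptions, and vpn_sessions.csv is a hypothetical lookup file):

    index=vpn sourcetype=vpn_gateway "address assigned"
    | table _time, user, assigned_ip
    | outputlookup append=true vpn_sessions.csv

Keeping _time in the output is what makes the file usable as a time-bounded lookup rather than a plain one.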
We do something like this today to correlate VPN IP addresses with usernames. A search runs every few minutes that pulls back VPN gateway events mapping username to IP, and we outputlookup the results to a time-bounded lookup. We then use that time-bounded lookup against firewall logs to see which VPN user made which outbound firewall connection. Splunk ties the pieces together quickly and easily, with no nasty transactions or subsearches.
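The firewall side of that workflow then reduces to a single lookup command, roughly like this (field names are assumptions, and vpn_sessions is assumed to be defined as a time-bounded lookup in transforms.conf):

    index=firewall sourcetype=fw_traffic action=allowed
    | lookup vpn_sessions assigned_ip AS src_ip OUTPUT user
    | stats count BY user, dest_ip, dest_port

Because the lookup is time-bounded, Splunk automatically picks the username that was assigned the IP at the time of each firewall event, not just whichever row happens to match the IP.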