sideview's recommendations for tweaking the base search are also very good. Doing the transaction part with stats is possible, but it's very gnarly (I've done it, but it was confusing and time-consuming, and involves multi-value fields and the use of first() and last(), etc.). See more on that below.
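To give a rough idea of the stats-based approach, here's a minimal sketch. It assumes (simplistically) one logon/logoff pair per host in the search window, and that 4624 marks the start and 0 the end; a real version needs first()/last() over sorted multi-value fields to pair events correctly, which is where it gets gnarly:

```
EventCode=0 OR EventCode=4624
| stats earliest(_time) AS start_time latest(_time) AS end_time by ComputerName
| eval duration=end_time-start_time
```

This avoids transaction's event-buffering entirely, which is why it can be much faster despite the extra complexity of pairing events properly.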
The first thing I would check, after optimizing the base search as sideview mentioned, is that my regular expressions (field extractions) aren't performing poorly for non-built-in fields such as EventCode and ComputerName. If those are extracted by Splunk implicitly (i.e., they appear in the _raw events as "EventCode=xxx ComputerName=yyy"), then skip my next paragraph and proceed to the transaction info; otherwise, continue on.
Try running the base search as recommended by dwaddel, again. After the search completes, click the Job Inspector icon (it looks like an "i") near where you have the option to save, pause, etc. In the Job Inspector you should see a summary of how long each portion of the search took. If command.search.kv accounts for a lot of the search time (say, greater than 30s out of 3 min), it means the regular expression used to extract the EventCode field is fairly slow. Try downloading "Regex Coach" from the internet, take a sample event from Splunk, and the field extraction for EventCode. You'll need to strip out the extra syntax in the parentheses that names the group (it should look like "?&lt;EventCode&gt;"). Then single-step through the regex to see if it is doing a lot of backing up and moving forward. Read about regex optimizations and apply anything you can to make it do less moving around. Paste the new regex back into your field extraction (adding the group name back in), and test that it works and shows up in 100% of your results (if the old one did). You should also do this for the ComputerName field, since it will be required for the timechart/stats portion of the search.
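As a point of comparison, here's what a fast, anchored extraction might look like if your events really do contain literal "EventCode=" and "ComputerName=" markers (hypothetical event layout — adjust to your actual _raw format). The literal prefixes let the regex engine skip ahead quickly, and \d+ / \S+ can't backtrack into each other:

```
... | rex "EventCode=(?<EventCode>\d+)"
... | rex "ComputerName=(?<ComputerName>\S+)"
```

Patterns that start with something unanchored and greedy (e.g. ".*EventCode") tend to be the ones that show up as large command.search.kv times, because the engine repeatedly backs up and retries across each event.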
As for the transaction part, that is going to be much tougher because you have two distinct field values that are important for calculating the duration, and they must occur in a set order (otherwise I'd say take a look at this for more info about that). Transaction processing time goes up exponentially with the number of events it has to deal with at once. This also lines up with Lamar's recommendation to minimize the events returned by the base search as much as possible.
Also note that the transaction command is single-threaded (actually, a lot of Splunk search processing is), so you are limited to the speed of one CPU. You may be able to get better results by segmenting your base search into smaller partitions based on a list of ComputerName values, and running more transaction commands in parallel so that the work is distributed across CPUs (if you have a multi-core system -- I haven't tried this, but it should work).
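A sketch of what that partitioning might look like, assuming (hypothetically) your hosts fall into groups you can match with wildcards like web* and db* -- each of these would run as its own concurrent search, and you'd combine the results afterward:

```
EventCode=0 OR EventCode=4624 ComputerName=web*
| transaction ComputerName
```

```
EventCode=0 OR EventCode=4624 ComputerName=db*
| transaction ComputerName
```

Since each search is its own process, two concurrent searches can use two cores where one transaction pipeline would be stuck on one.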
If there are many cases (say, greater than 10%) where you receive EventCode 4624 but not an accompanying 0 (or vice versa), then an option would be to filter out events for hosts that don't have both of those EventCodes, prior to the transaction command (since their transaction duration will always be 0 and they can easily be identified with a stats command). Use a subsearch before the first pipe, like this:
EventCode=0 OR EventCode=4624
[search EventCode=0 OR EventCode=4624
| stats dc(EventCode) AS CodeCount by ComputerName
| search CodeCount>1
| table ComputerName ]
| ... your transaction command etc
Of course, if the field extraction time for EventCode and ComputerName is slow, then this will add more time up front, but if they are fast, and many events are eliminated, transaction could speed up a great deal.
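If you still want to report on the hosts the subsearch filters out (the ones that would always show a duration of 0), a separate stats search finds them directly -- this is just the complement of the subsearch above:

```
EventCode=0 OR EventCode=4624
| stats dc(EventCode) AS CodeCount by ComputerName
| search CodeCount<2
```

Running this as its own cheap report keeps the single-code hosts visible without making the transaction search chew through their events.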