I've noticed pretty much the same behavior, but my deployment is a production one with clusters, 5+ indexers, and a massive amount of data. One minute for you is sometimes 20 minutes for me.
I don't know the technical specifics here, but when Splunk says "eventtypes and tags run at search time," it means that when you run a search, Splunk looks up the rules that apply to that particular search and then applies them. My primary suspicion is that Splunk stores these rules in a more static than dynamic way, so they're available as soon as any search needs them, and that the time it takes to pick up changes made in the Splunk UI depends on both when the cached rule set gets refreshed and the memory/CPU resources available in the deployment.
Let me explain why I think this: if the job manager is running constantly, keeping the machines loaded and using these rules all the time, it would be hard for Splunk to say "okay, now is the right moment to swap in the new rules without impacting other jobs."
I recall a time when I updated a lookup by removing the old file and uploading the new one. For roughly 15 minutes afterward, users reported that searches couldn't find the lookup, even though it was there and the permissions were correctly assigned; the system just hadn't updated its reference to the new lookup during that window.
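If you run into this again, one way to narrow down where the delay sits (just a sketch; `my_lookup.csv` is a placeholder name) is to compare what the configuration layer reports against what the search tier can actually read:

```
| rest /services/data/lookup-table-files
| search title="my_lookup.csv"
| table title eai:acl.app eai:acl.sharing updated
```

versus:

```
| inputlookup my_lookup.csv | head 5
```

If the `rest` search shows the file registered with the right permissions but `inputlookup` still fails, that would be consistent with a cached/replicated copy of the search-time knowledge lagging behind the UI change, rather than a permissions problem.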
So that's my educated opinion on this. If I ever get the chance to ask a Splunk technician, I would definitely ask how they manage the availability of these search-time rules internally.