How to reduce rex usage in a query without exceeding the REGEX depth limit?

Hello,
I've been tasked with optimizing a former colleague's saved searches and found that one query runs a lot of rex commands against the same field, so I decided to compact them into a single REGEX.
As such, I've applied the following REGEX:
According to Regex101, the expression takes a whopping 6.5k steps, which is a bit too much. I've been trying to reduce that as much as I can, but I lack the knowledge in that department to optimize the query any further.
The only parts I want to keep are the capture groups; everything else I want to ignore altogether. Is there a way to do that while reducing the steps?
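To make the question concrete, here is a simplified sketch of the kind of consolidation I mean (this is not my actual expression, and the field and key names are invented):

```
| rex field=message "user=(?<user>\S+)\s+status=(?<status>\d{3})\s+bytes=(?<bytes>\d+)(?:\s+session=(?<session>\S+))?"
```

The `(?:...)` around the optional session part groups it without creating a capture, and using specific tokens such as `\S+`, `\d{3}`, or `[^,]+` instead of `.*?` is what usually brings the step count down on Regex101, since it cuts the backtracking.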


I think this is an instance where "optimizing" is not optimal. While a clever person such as yourself may be able to craft an amazing regular expression that is the equivalent of the many regexes your former coworker used, that doesn't make it better. As you're discovering, such a regex may require many more steps and more resources than the many regexes it replaces. Consider also the person who will replace you and have to maintain your creation. Will he or she be able to understand it enough to adapt it to a changing data source? In six months' time, will *you* be able to understand it enough to adapt it to a changing data source?
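For what it's worth, here is a rough sketch of the alternative (the field and key names are placeholders, not your data): several small rex commands, each doing one job, usually stay cheap and readable.

```
| rex field=message "user=(?<user>\S+)"
| rex field=message "status=(?<status>\d{3})"
| rex field=message "bytes=(?<bytes>\d+)"
```

Each expression matches one key, so when the data format changes you only have to touch the line that broke, and the next person can see at a glance what each one extracts.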
If this reply helps you, Karma would be appreciated.
