Splunk Search

Rex has exceeded configured match_limit, consider raising the value in limits.conf.

ssh
Engager

In our logs, I'd like to extract statusText and categorize the results in a table, so I can see how many error responses we get per statusCode and statusText.
EX:

eventSource | statusCode | statusText
bulkDelete | 1020 | 3031: No Card found with the identifier for the request

 

But my query hits "has exceeded configured match_limit, consider raising the value in limits.conf." after the field extraction:


index = xxx sourcetype=xxx "Publish message on SQS" | search bulkDelete | rex field=_raw "(?ms)^(?:[^:\\n]*:){7}\"(?P<error_bulkDelete>[^\"]+)(?:[^:\\n]*:){2}\"(?P<error_errorCode>[^\"]+)[^:\\n]*:\"(?P<error_desc>[^\"]+)(?:[^:\\n]*:){6}\\\\\"(?P<error_statusText>[^\\\\]+)" offset_field=_extracted_fields_bounds
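(The match_limit error here most likely comes from catastrophic backtracking: each repeated group like `(?:[^:\n]*:){7}` can split the message around its colons in many different ways before failing, and on a long JSON payload the regex engine exhausts the configured match_limit. A sketch of a less backtracking-prone approach, anchoring on the literal field names from the target log below instead of counting colons — the doubled backslashes match the escaped quotes `\"` inside errorDetails, following the same convention as the tail of the query above:)

index = xxx sourcetype=xxx "Publish message on SQS" | search bulkDelete
| rex field=_raw "\"eventSource\":\"(?P<error_bulkDelete>[^\"]+)\""
| rex field=_raw "\"errorCode\":\"(?P<error_errorCode>[^\"]+)\""
| rex field=_raw "\\\\\"statusText\\\\\":\\\\\"(?P<error_statusText>[^\\\\]+)"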


Target log:


Publish message on SQS, queueName=xxx, retryCount=0, message={"traceId":"xxx1112233","clientContext":"xxxxxclientContext","cardTokenReferenceId":"xxxcardTokenReferenceId","eventSource":"bulkDelete","errors":[{"errorCode":"52099","errorDescription":"Feign Client Exception.","retryCategory":"RETRYABLE","errorDetails":"{\"clientContext\":\"xxxxxclientContext\",\"ewSID\":\"xxxxSID\",\"statusCode\":\"1020\",\"statusText\":\"3031: No Card found with the identifier for the request\",\"timestampISO8601\":\"2024-04-05T00:00:26Z\"}"}]}


I checked similar posts; they suggested using non-greedy matching.

So I tried:


index = "xxx" sourcetype=xxx "Publish message on SQS*" bulkDelete | rex field=_raw "\"statusText\":\s*\"(?P<statusText>[^\"]+)\"" | where NOT LIKE( statusText, "%Success%")
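(A likely reason this returns blank: in the error events, the errorDetails JSON is itself escaped, so the raw text contains `\"statusText\":\"...` with backslashes before each quote. The pattern `\"statusText\":\s*\"` therefore only matches the unescaped top-level `"statusText":"Success"` of success events, which the `where NOT LIKE` then filters out, leaving nothing. A sketch that makes each backslash optional so both the escaped and unescaped forms match — an assumption based on the two sample events in this thread:)

index = "xxx" sourcetype=xxx "Publish message on SQS*" bulkDelete
| rex field=_raw "\\\\?\"statusText\\\\?\":\\\\?\"(?P<statusText>[^\\\\\"]+)"
| where NOT LIKE(statusText, "%Success%")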


If I add "| table", the statusText column comes back blank.


marnall
Motivator

Yes, you could exclude successful responses by adding a filter. Assuming that all errors have an errorCode and all non-errors do not, you could do it like this:

index = xxx sourcetype=xxx "Publish message on SQS" bulkDelete
| rex field=_raw "message=(?<message>{.*}$)"
| spath input=message
| search "errors{}.errorCode" = *
| spath input=errors{}.errorDetails
| table eventSource statusCode statusText
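(To get the counts the original question asked for — how many error responses per statusCode and statusText — the final `table` could be swapped for a `stats`; a sketch building on the query above:)

index = xxx sourcetype=xxx "Publish message on SQS" bulkDelete
| rex field=_raw "message=(?<message>{.*}$)"
| spath input=message
| search "errors{}.errorCode" = *
| spath input=errors{}.errorDetails
| stats count by eventSource statusCode statusText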


marnall
Motivator

You could try extracting the JSON object after message=, then using spath on it until you get the fields you would like. E.g.

index = xxx sourcetype=xxx "Publish message on SQS" bulkDelete
| rex field=_raw "message=(?<message>{.*}$)"
| spath input=message
| spath input=errors{}.errorDetails
| table eventSource statusCode statusText

 

ssh
Engager

Thanks! It looks like the results contain successful responses. Can we exclude them?

 

Publish message on SQS, queueName=xxx, retryCount=0, message={"traceId":"xxxtraceId","clientContext":"xxxclientContext","cardTokenReferenceId":"xxxCardTokenReferenceId","eventSource":"bulkDelete","walletWebResponse":{"clientContext":"xxxclientContext","ewSID":"xxxSID,"timestampISO8601":"2024-04-05T00:00:14Z","statusCode":"0","statusText":"Success"}}
