Splunk Search

How do I optimize the query to avoid forceful termination?

zacksoft_wf
Contributor

My query, after finalizing for some time, gives me:
The search process with sid= was forcefully terminated because its physical memory usage has exceeded the 'search_process_memory_usage_threshold' setting in limits.conf.

I am not allowed to increase memory...
Any suggestion on how to tweak the query to avoid forceful termination?

=================
(index=bsa) sourcetype=wf:esetext:user_banks:db OR sourcetype=wf:esetext:soc_sare_data:db au!="0*"
| stats values(bank_name) as bank_name
, values(bank_type) as type
, values(pwd_expires) as pwd_expires
, values(is_interactive) as is_interactive
, values(au_owner_name) as au_owner_name
, values(au_owner_email) as au_owner_email
, values(service_bank_name) as service_bank_name
, values(owner_elid) as owner_elid
, values(manager_name) as manager_name
BY au
| eval bank_name=coalesce(bank_name,service_bank_name)
| eval user=lower(bank_name)
| dedup user
| rex field=user "[^:]+:(?<user>[^\s]+)"
| fields - bank_name
| stats
values(au_owner_email) as au_owner_email
, values(au_owner_name) as au_owner_name
, values(owner_elid) as owner_elid
, max(manager_name) as manager_name
BY user
,service_bank_name
,type
,pwd_expires
,is_interactive




johnhuang
Motivator

It's not clear whether you need to run stats twice. Try to consolidate if possible.

Could be typos here, but you get the idea:

| fields bank_name bank_type pwd_expires is_interactive au_owner_name au_owner_email service_bank_name owner_elid manager_name au
| eval user=LOWER(COALESCE(bank_name,service_bank_name))
| rex field=user "[^:]+:(?<user>[^\s]+)"
| rename bank_type AS type
| stats values(au_owner_email) AS au_owner_email, values(au_owner_name) AS au_owner_name, values(owner_elid) AS owner_elid, max(manager_name) AS manager_name last(is_interactive) AS is_interactive last(pwd_expires) AS pwd_expires last(service_bank_name) AS service_bank_name last(type) AS type BY user

zacksoft_wf
Contributor

@johnhuang  The first stats was to perform a join between the two sourcetypes by the common field "au". If I don't do that, I am not able to get values from both sources; for example, bank_type is from the second sourcetype and is_interactive is from the first.
Also I get some multivalues, so to flatten them out I use them in the second stats's BY clause.


johnhuang
Motivator

If those missing values are defined by au, you can throw in an eventstats to have them filled.

| fields bank_name bank_type pwd_expires is_interactive au_owner_name au_owner_email service_bank_name owner_elid manager_name au
| eventstats max(is_interactive) AS is_interactive max(bank_type) AS bank_type BY au
| eval user=LOWER(COALESCE(bank_name,service_bank_name))
| rex field=user "[^:]+:(?<user>[^\s]+)"
| rename bank_type AS type
| stats values(au_owner_email) AS au_owner_email, values(au_owner_name) AS au_owner_name, values(owner_elid) AS owner_elid, max(manager_name) AS manager_name max(is_interactive) AS is_interactive last(pwd_expires) AS pwd_expires max(service_bank_name) AS service_bank_name last(type) AS type BY user

 


johnhuang
Motivator

Try max instead of last for those two fields. Since I can't see your data, I'm just making a few guesses and assumptions on how to structure the query. You need to play around with it -- I'm sure you'll be able to get rid of one of the stats.

| fields bank_name bank_type pwd_expires is_interactive au_owner_name au_owner_email service_bank_name owner_elid manager_name au
| eval user=LOWER(COALESCE(bank_name,service_bank_name))
| rex field=user "[^:]+:(?<user>[^\s]+)"
| rename bank_type AS type
| stats values(au_owner_email) AS au_owner_email, values(au_owner_name) AS au_owner_name, values(owner_elid) AS owner_elid, max(manager_name) AS manager_name max(is_interactive) AS is_interactive last(pwd_expires) AS pwd_expires max(service_bank_name) AS service_bank_name last(type) AS type BY user

 


richgalloway
SplunkTrust

The values function is known to consume a lot of memory, so use it carefully.

Consider coalescing bank_name and service_bank_name before the first stats command.

That said, the number of events being processed can also affect how much memory is used. Try shrinking the time period to reduce the number of events.
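A minimal sketch of that suggestion applied to the original search: coalesce bank_name and service_bank_name up front, keep only the fields you need, and swap values() for latest() on fields that should have a single value per au. Field names are taken from the thread; whether latest() is safe for each field (i.e. the field really is single-valued per au) is an assumption you'd need to verify against the data.

```
(index=bsa) sourcetype=wf:esetext:user_banks:db OR sourcetype=wf:esetext:soc_sare_data:db au!="0*"
| eval bank_name=coalesce(bank_name, service_bank_name)
| fields au bank_name bank_type pwd_expires is_interactive au_owner_name au_owner_email owner_elid manager_name
| stats latest(bank_name) as bank_name
    , latest(bank_type) as type
    , latest(pwd_expires) as pwd_expires
    , latest(is_interactive) as is_interactive
    , values(au_owner_name) as au_owner_name
    , values(au_owner_email) as au_owner_email
    , latest(owner_elid) as owner_elid
    , max(manager_name) as manager_name
    BY au
| eval user=lower(bank_name)
| rex field=user "[^:]+:(?<user>[^\s]+)"
```

values() keeps every distinct value of a field in memory per group, so replacing it with latest() or max() where a single value suffices can noticeably cut the search process's footprint. Note this sketch drops service_bank_name as a separate output column; if you still need it on its own, keep it in the fields list and the stats.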

---
If this reply helps you, Karma would be appreciated.