Been getting messages saying that some identities are exceeding the field limits. I've increased the limit on some of them, but I'm having difficulty finding the exact field that is causing this issue. Is there a way to find the exact instance where this limit is being exceeded?
Sorry for the late reply. I have, actually, and completely forgot to post it. Someone referred me to the search I pasted below. You may need to adjust the es_lookup_mv_limit value it checks against according to what you have configured in ES. For us, the cause turned out to be load balancers responding to our vulnerability scanner with the same MAC address, causing merges. I nulled out the MACs in the search that populates the corresponding lookup with the vuln scanner info and haven't had any merge issues since.
| inputlookup asset_lookup_by_str
| eval es_lookup_type="asset_lookup_by_str"
| inputlookup append=t asset_lookup_by_cidr
| eval es_lookup_type=coalesce(es_lookup_type, "asset_lookup_by_cidr")
| inputlookup append=t identity_lookup_expanded
| eval es_lookup_type=coalesce(es_lookup_type, "identity_lookup_expanded")
| rename _* AS es_lookup_*
| eval es_lookup_mv_limit=25
| foreach asset ip mac nt_host dns identity [eval count_<<FIELD>>=coalesce(mvcount(<<FIELD>>), 0), es_lookup_is_problem=mvappend(es_lookup_is_problem, if(count_<<FIELD>> >= es_lookup_mv_limit, "yes - field <<FIELD>> has over ". es_lookup_mv_limit . " entries", null()))]
| where isnotnull(es_lookup_is_problem)
| table es_lookup_*, count_*, *
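For reference, the MAC-nulling fix described above amounted to one extra eval in the search that populates the vuln-scanner lookup. A minimal sketch follows; the source match and lookup name here are made up, so adjust both to your environment:

```
... your existing search that builds the vuln scanner asset data ...
| eval mac=if(match(source, "vuln_scanner"), null(), mac)
| outputlookup my_vuln_scanner_assets
```

Dropping the MAC only from the scanner-sourced rows keeps the rest of the asset data intact while removing the shared field that was triggering the merges.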
I'm getting the same error messages and can't figure out exactly what is causing them. I've tried this search (and variations of it).
| inputlookup asset_lookup_by_str | stats values(dns) dc(dns) as dc by ip | sort limit=0 -dc
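A variation of the same pattern keyed on mac instead of dns can also be useful, since (as in the accepted answer) duplicate MAC addresses are a common merge driver. This is just the same stats-by-split-field idea applied to a different field:

```
| inputlookup asset_lookup_by_str
| stats values(ip) dc(ip) as dc by mac
| sort limit=0 -dc
```

Any MAC with a high distinct-IP count at the top of the results is a candidate for the kind of unwanted merging discussed in this thread.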
Also, I think that DHCP can cause trouble with the asset lists in Splunk ES.
Check out this thread as well: https://community.splunk.com/t5/Splunk-Enterprise-Security/Assets-with-overlapping-DHCP-Addresses-Me...
Have you found any better solution than my search above?
I've found myself coming back to this problem, and I still can't work out how to properly troubleshoot this health alarm from Splunk ES. One problem with using the lookup "asset_lookup_by_str" to find mv-fields that exceed the limits is that the mv-fields are already truncated, so it's impossible to tell which fields were actually over the limit and which were at the limit but not over it. Also, out of the box, some mv-fields have a limit of 25 and some have a limit of 6, so there are two different limits to check against. I tried building a new search around the "entitymerge" command, but that also truncates the mv-fields, so I've gone back to looking at "asset_lookup_by_str" for fields that are at the limit, indicating that before the merge they could possibly have been over it.
This is my new version of the search, for future reference if someone needs it. I've also removed the asset field, as I'm unsure whether it actually has a limit and whether it contributes to the alarm.
| inputlookup asset_lookup_by_str
| eval es_lookup_mv_limit_small=6
| eval es_lookup_mv_limit_big=25
| foreach ip mac nt_host dns [eval count_<<FIELD>>=coalesce(mvcount(<<FIELD>>), 0), es_lookup_is_problem_small=mvappend(es_lookup_is_problem_small, if(count_<<FIELD>> >= es_lookup_mv_limit_small, "yes - field <<FIELD>> possibly has over ". es_lookup_mv_limit_small . " entries", null()))]
| foreach bunit category city country lat long owner pci_domain [eval count_<<FIELD>>=coalesce(mvcount(<<FIELD>>), 0), es_lookup_is_problem_big=mvappend(es_lookup_is_problem_big, if(count_<<FIELD>> >= es_lookup_mv_limit_big, "yes - field <<FIELD>> possibly has over ". es_lookup_mv_limit_big . " entries", null()))]
| where isnotnull(es_lookup_is_problem_small) OR isnotnull(es_lookup_is_problem_big)
| eval es_lookup_is_problem=mvappend(es_lookup_is_problem_small,es_lookup_is_problem_big)
| fields - es_lookup_is_problem_small es_lookup_is_problem_big
| table es_lookup_*, count_*, *
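If the search above returns many rows, a quick tally appended to it (just standard mvexpand and stats, nothing ES-specific) shows which limit messages occur most often, which helps prioritize where to look first:

```
| mvexpand es_lookup_is_problem
| stats count by es_lookup_is_problem
```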