Here is a solution that seems to work for us for finding the origin of those error messages. This search shows whether you have a problem with overly long fields:

index=_internal
source="*splunkd.log"
"Received metadata string exceeding maxLength"

The message appears because Splunk tries to create an indexed field that has more than 1000 characters. Looking through the logs, it is not easy to see where this comes from, so with some tips from mmccul I have made this search:

index=<your index>
| table
[| walklex index=<your index> type=field
| search NOT field IN (_indextime date_* punct time*pos)
| stats count by field
| table field
| mvcombine field
| return $field
]
| foreach * [| eval <<FIELD>>_Len=len(<<FIELD>>)]
| table *_Len
| stats max(*) as *
| transpose header_field=column
| rename "row 1" as Length, column as FieldName
| sort -Length
| head 10
| rex mode=sed field=FieldName "s/_Len$//"

What it does: `walklex` lists all indexed fields for an index. The subsearch filters out the default fields, builds a list of the remaining field names, and returns them to `table`. `| foreach * [| eval <<FIELD>>_Len=len(<<FIELD>>)]` calculates the length of each field. We then find the maximum length for each field and finally get the top 10 fields with the largest values. If a field shows up at 1000 bytes, there are certainly events where it is longer, so go through the raw data and check whether some regex eats too much of the line. This may not be the best or fastest way, but it seems to work for us.
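For cross-checking outside Splunk, the same idea (maximum length per field, top 10, flag anything at or above the limit) can be sketched in a few lines of Python over exported events. The sample events and field names below are invented for illustration only:

```python
# Sketch: find the longest value seen per field across a set of events,
# then rank the fields by that maximum length (the events are made-up samples).
events = [
    {"host": "web01", "uri": "/index.html", "useragent": "Mozilla/5.0"},
    {"host": "web02", "uri": "/x" * 600, "useragent": "curl/8.0"},  # 1200-char uri
]

max_len = {}
for event in events:
    for field, value in event.items():
        max_len[field] = max(max_len.get(field, 0), len(str(value)))

# Top 10 fields by maximum observed length, longest first
top10 = sorted(max_len.items(), key=lambda kv: kv[1], reverse=True)[:10]
for field, length in top10:
    print(f"{field}\t{length}")

# Fields at or above 1000 characters would trip the maxLength limit
suspects = [f for f, n in max_len.items() if n >= 1000]
```

This is only an offline approximation: `walklex` ranks fields by what was actually written to the index, while this sketch measures whatever you exported.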