
Received metadata string exceeding maxLength warning - How do I resolve?

jotne
Builder

I get a lot of these messages from my index servers on a Splunk Enterprise deployment.

 

05-06-2021 12:18:08.218 +0200 WARN  MetaData::string - Received metadata string exceeding maxLength -- it will be truncated as an indexed extraction. Only searches scanning _raw will be able to find it as a whole word.

 

 source: /opt/splunk/var/log/splunk/splunkd.log

splunk_server: all my index servers

Googling the message does not give me anything.

 

Edit: I turned on MetaData::string debugging and got more detailed info:

 

Received metadata string exceeding maxLength length=1642 maxLength=1000
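
(For reference: per-component log levels can be changed under Settings > Server settings > Server logging, or in $SPLUNK_HOME/etc/log.cfg followed by a restart. A sketch of the log.cfg line, assuming the channel is named after the MetaData component shown in the warning:)

# $SPLUNK_HOME/etc/log.cfg -- sketch; the exact channel name is an assumption
category.MetaData=DEBUG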

 

But I am still not able to find where to change maxLength for metadata.

 


jotne
Builder

Here is a solution that seems to work for us to find the origin of those error messages.
This search shows whether you have a problem with overly long fields.

 

index=_internal
source="*splunkd.log"
"Received metadata string exceeding maxLength"

 


This happens because Splunk tries to create an indexed field whose value is more than 1000 bytes. Looking through the logs, it is not easy to see where this comes from, so with some tips from mmccul I have put together this search.

 

index=<your index>
| table
    [| walklex index=<your index> type=field
     | search NOT field IN (_indextime date_* punct time*pos)
     | stats count by field
     | table field
     | mvcombine field
     | return $field
    ]
| foreach * [| eval <<FIELD>>_Len=len(<<FIELD>>)]
| table *_Len
| stats max(*) as *
| transpose header_field=column
| rename "row 1" as count
| sort -count
| rename count as Length, column as FieldName
| head 10
| rex mode=sed field=FieldName "s/_Len$//"

 

What it does:
`walklex` lists all indexed fields for an index. A few more commands then narrow this down to the interesting fields and return the list to `table`.
`| foreach * [| eval <<FIELD>>_Len=len(<<FIELD>>)]` calculates the length of each field.
Then we find the maximum length for each field and finally keep the 10 fields with the largest values.
If a field shows exactly 1000 bytes, some of its values are almost certainly longer and have been truncated, so go through the raw data and check whether a regex is eating too much of the line (see the sketch below). This may not be the best/fastest way, but it seems to work for us.
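
To illustrate the kind of regex to look for, here is a hypothetical indexed extraction (the stanza and field names are made up, not taken from the affected system) where a greedy capture swallows the rest of the line, followed by a bounded variant:

# transforms.conf -- hypothetical example of a greedy indexed extraction
[extract_session_id]
# (.*) captures everything to the end of the line, so long lines
# produce indexed values over the 1000-byte metadata limit
REGEX = session=(.*)
FORMAT = session_id::$1
WRITE_META = true

# Bounded variant: capture only the non-whitespace token after "session="
[extract_session_id_fixed]
REGEX = session=(\S+)
FORMAT = session_id::$1
WRITE_META = true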


VatsalJagani
SplunkTrust

This could possibly be due to a very long source name.
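
If you want to rule that out, a quick check could be the following (a sketch using standard tstats and len(); values at exactly 1000 bytes are suspicious, since longer ones get truncated to that length):

| tstats count where index=* by source
| eval source_len=len(source)
| where source_len >= 1000
| sort -source_len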


somesoni2
Revered Legend

It seems some of the indexed field extractions you're doing are generating values that are quite large (more than 1000 bytes, per the debug message). This appears to be a hard-coded limit in Splunk on metadata (indexed field) length. Honestly, you save key fields as indexed fields so that you can search faster, and having a very large string as a key field may not be efficient.
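
Building on that: once you suspect a particular field, a targeted variant of the search above (a sketch; <your index> and <suspect_field> are placeholders) can show which sources produce values sitting at the truncation limit:

index=<your index>
| eval flen=len(<suspect_field>)
| where flen >= 1000
| stats count by source sourcetype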


cald4
Engager

Bumping this, does anyone have any ideas?
