I created a datamodel with a single object in it, which I later accelerated.
In this datamodel I added a calculated field (derived from _raw) that uses a regular expression to extract a string of 300,000+ characters, which I intend to process later on.
When querying the datamodel for the calculated field, I noticed that it was truncated to roughly 2,000 characters. When I disabled acceleration and waited for Splunk to delete the acceleration summaries, I was able to read the calculated field in its full length again.
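For reference, this is roughly how I compared the field length with and against the accelerated summaries (the model, object, and field names below are placeholders for my actual ones):

```spl
| datamodel MyModel MyObject search
| eval field_len=len(my_calc_field)
| stats max(field_len)
```

With acceleration enabled the maximum length came back at about 2,000; after de-acceleration it returned the full 300,000+.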
I am not sure if I encountered a bug or if there is any limit that prevents accelerated datamodels from handling large fields, as I was not able to find any documentation that specifies such limits.
Thanks in advance for any information regarding this issue!
Data model acceleration is designed for efficient retrieval of rows based on exact values and numerical comparisons.
Large 300 KB fields are not good candidates for this. The only kind of acceleration I can imagine here is substring (keyword) lookup, which Splunk does out of the box for event text without any configuration.
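To illustrate, the kind of query that acceleration is built to speed up is an exact-value or numerical aggregation over tstats, something like this (the datamodel, object, and field names are hypothetical):

```spl
| tstats count from datamodel=Web where Web.status=404 by Web.host
```

A 300 KB blob of extracted text simply never participates in that kind of lookup.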
So I think this is mostly a case of swimming upstream, with Splunk truncating the field without actively letting you know.
Splunk software is full of cases where users can decide on their own how to tune things, based on their needs and compute power. There ought to be a default that one can override at one's own peril in limits.conf.