Dear Splunk community,
I created a data model with a single object in it, which I later accelerated.
In this data model I added a calculated field (extracted from _raw) that uses a regular expression to pull out a string of 300,000+ characters, which I intend to process later on.
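For illustration, the extraction is roughly equivalent to the following (the field name and pattern here are placeholders, not my actual definition):

    | rex field=_raw "(?<large_payload>SOME_PATTERN_SPANNING_300K_CHARS)"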
When querying the data model for the calculated field, I noticed that it had been truncated to only about 2,000 characters. Once I disabled acceleration and waited for Splunk to delete the acceleration summaries, I could read the calculated field at its full length again.
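For reference, this is roughly how I compared the two (again with placeholder names). Against the accelerated summaries:

    | tstats summariesonly=true values(MyObject.large_payload) as large_payload from datamodel=my_datamodel
    | eval field_len=len(large_payload)

and the same check via a regular data model search:

    | datamodel my_datamodel MyObject search
    | eval field_len=len('MyObject.large_payload')

With acceleration enabled, field_len topped out around 2,000; without it, I get the full 300,000+ characters.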
I am not sure whether I hit a bug or whether there is a limit that prevents accelerated data models from storing large field values; I was not able to find any documentation that specifies such a limit.
Thanks in advance for any information regarding this issue!