Hi,
I have a situation with a large dataset that has a field named A. This field's values are large and exceed Splunk's default character limit of 10250 characters. About 70% of the dataset is below the default limit, roughly 20% of field A's values are around 30k characters, and the remaining 10% are very large: single field values over 3 million characters, with events around 4 MB apiece!!! 😞
Is there a way to build logic in eval or rex to slice the field as below:
Field A:
Event 1 (master): the original event
Post-processing event 1: characters from 1 to 10250
Post-processing event 2: characters from 10251 onward... and so on until the full value is covered.
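Something along these lines is what I'm imagining at search time (just a rough sketch, not tested; the 10250 chunk size and the chunk_no / A_chunk field names are placeholders I made up, and I'm not sure how mvexpand behaves on the really huge values):

| eval chunk_no=mvrange(0, ceiling(len(A)/10250))
| mvexpand chunk_no
| eval A_chunk=substr(A, (chunk_no * 10250) + 1, 10250)

The idea being: mvrange builds one multivalue entry per chunk, mvexpand fans those out into one event per chunk, and substr pulls the matching slice of field A, so the rule matching could then run against A_chunk instead of A. Is there a better or more standard way to do this?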
There is a lot of rule matching that needs to happen in this work based on the requirements. The rule matching looks for specific strings in the field's text and outputs specific values. It works great so far, except that when the field's value exceeds the character limit, Splunk ignores the rest and I cannot match anything past that point. On top of that, Splunk's auto extraction does not extract any fields after the point where it stops because of the character limit on those larger values.
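For context on where I think the cutoff comes from (please correct me if I have the wrong settings): my understanding is that search-time auto extraction is bounded by maxchars under the [kv] stanza in limits.conf (default around 10240 characters), and that there is a separate index-time line truncation controlled by TRUNCATE in props.conf (default 10000 bytes). Something like the below, where the sourcetype name is just a placeholder:

# limits.conf -- how far into an event automatic field extraction looks
[kv]
maxchars = 10240

# props.conf -- index-time truncation per line (placeholder sourcetype name)
[my_mssql_sourcetype]
TRUNCATE = 10000

I would rather not crank these up globally just for the handful of huge values, which is part of why I am asking about slicing instead.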
The dataset is ingested from MS SQL Server via DBConnect on the Splunk side. I also thought about using Python to pre-process the data, but that adds complexity to the whole picture. I'm trying to keep it simple.
Thanks in advance!!!